CN114864035A - Image report generation method, device, system, equipment and storage medium


Info

Publication number
CN114864035A
Authority
CN
China
Prior art keywords
report
image
confirmed
image report
reconstruction
Prior art date
Legal status
Pending
Application number
CN202210495359.5A
Other languages
Chinese (zh)
Inventor
杜翔乾
王小明
张雪艳
马骏骑
Current Assignee
Hefei Yofo Medical Technology Co., Ltd.
Original Assignee
Hefei Yofo Medical Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Hefei Yofo Medical Technology Co., Ltd.
Priority to CN202210495359.5A
Publication of CN114864035A
Legal status: Pending (current)

Classifications

    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G - PHYSICS
    • G16 - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H - HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 30/00 - ICT specially adapted for the handling or processing of medical images
    • G16H 30/20 - ICT specially adapted for the handling or processing of medical images, for handling medical images, e.g. DICOM, HL7 or PACS

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Epidemiology (AREA)
  • Theoretical Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The disclosure provides an image report generation method, device, system, equipment and storage medium. The image report generation method of the present disclosure includes: acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of a first object from a CBCT scanner; performing identification and classification of regions of interest according to the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object; performing local slice reconstruction according to the region-of-interest information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a first report image of the first object; and generating an image report to be confirmed of the first object, the report containing the first report image, so that the image report to be confirmed can be presented to a first user for review. The present disclosure enables automatic generation of an image report containing higher-resolution images of the lesion region.

Description

Image report generation method, device, system, equipment and storage medium
Technical Field
The present disclosure relates to an image report generation method, apparatus, system, device, and storage medium.
Background
In some application scenarios, a professional must manually capture the key regions of an image and, after annotation, typesetting and similar work, generate a report from which relevant personnel can obtain the key information in the image. Taking dental image reports as an example, an oral radiologist needs to provide a patient with an image report based on cone beam computed tomography (cone beam CT, CBCT) three-dimensional data; the physician must manually capture slice images of a large number of lesion regions with tools such as film-reading software, then annotate and typeset those slice images and insert them into the image report.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present disclosure provides an image report generation method, device, system, apparatus, and storage medium capable of automatically generating an image report that contains lesion-region images.
A first aspect of the present disclosure provides an image report generation method, including:
acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of a first object from a CBCT scanner;
performing identification and classification of regions of interest according to the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object;
performing local slice reconstruction according to the region-of-interest information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a first report image of the first object;
and generating an image report to be confirmed of the first object, the report containing the first report image, so that the image report to be confirmed can be presented to a first user for the first user to review.
A second aspect of the present disclosure provides an image report generation apparatus, including:
an acquisition unit for acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of a first object from a CBCT scanner;
an identification and classification unit for performing identification and classification of regions of interest according to the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object;
a local reconstruction unit for performing local slice reconstruction according to the region-of-interest information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a first report image of the first object;
a report generation unit for generating an image report to be confirmed of the first object, the report containing the first report image;
and an image report review unit for providing the image report to be confirmed of the first object to a first user, so that the first user can review it.
A third aspect of the present disclosure provides an electronic device, comprising:
a memory storing execution instructions; and
a processor that executes the execution instructions stored in the memory, causing the processor to perform the image report generation method described above.
A fourth aspect of the present disclosure provides a readable storage medium storing execution instructions that, when executed by a processor, implement the image report generation method described above.
A fifth aspect of the present disclosure provides an image report generation system, including a CBCT scanner and a cloud server. The cloud server includes a data storage unit, a report storage unit, and the image report generation device described above. The data storage unit stores the three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of the first object from the CBCT scanner; the report storage unit stores the image report to be confirmed, the confirmed image report, and/or the visualized three-dimensional image of the first object produced by the image report generation device.
The present disclosure uses the raw data scanned by the CBCT scanner (i.e., the two-dimensional projection data), the three-dimensional volume data obtained by CBCT reconstruction, and the reconstruction geometry parameters to automatically identify and classify regions of interest that may belong to lesions or suspected lesions, and performs local slice reconstruction based on the region-of-interest information. This effectively improves the sharpness and resolution of the report images in an image report, makes it easier for the first user (i.e., a doctor) to diagnose lesions by consulting the report, frees doctors from the heavy work of writing and reviewing medical image reports and so improves their working efficiency, and lets the second user (i.e., the patient) understand the lesion and the condition intuitively and clearly.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the disclosure and together with the description serve to explain the principles of the disclosure.
Fig. 1 is a flowchart illustrating an image report generation method according to an embodiment of the present disclosure.
Fig. 2 is a schematic diagram of reconstruction geometry parameters of one embodiment of the present disclosure.
FIG. 3 is a schematic diagram of an artificial intelligence model and image report generation process according to an embodiment of the present disclosure.
Fig. 4 is a flowchart illustrating a specific implementation of an image report generation method according to an embodiment of the present disclosure.
Fig. 5 is a schematic view of an MPR interface before review by a physician according to an embodiment of the present disclosure.
Fig. 6 is a schematic view of an MPR interface after review by a physician according to an embodiment of the present disclosure.
Fig. 7 is a block diagram schematically illustrating the configuration of an image report generation device implemented in hardware using a processing system, according to an embodiment of the present disclosure.
Fig. 8 is a block diagram schematically illustrating the configuration of an image report generation system according to an embodiment of the present disclosure.
Description of the reference numerals
700 image report generation device
702 acquisition unit
704 identification and classification unit
706 local reconstruction unit
708 report generation unit
710 image report auditing unit
712 rendering unit
714 image report distribution unit
716 model training unit
800 bus
900 processor
1000 memory
1100 various other circuits
1200 cloud server
1300 CBCT or CBCT host
1400 cloud service adapter
1202 data storage unit
1204 report storage unit
1206 data rendering interface
1208 local reconstruction interface
Detailed Description
The present disclosure will be described in further detail with reference to the drawings and embodiments. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limitations of the present disclosure. It should be further noted that, for the convenience of description, only the portions relevant to the present disclosure are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. Technical solutions of the present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Unless otherwise indicated, the illustrated exemplary embodiments/examples are to be understood as providing exemplary features of various details of some ways in which the technical concepts of the present disclosure may be practiced. Thus, unless otherwise indicated, the features of the various embodiments/examples may be additionally combined, separated, interchanged, and/or rearranged without departing from the technical concept of the present disclosure.
The use of cross-hatching and/or shading in the drawings is generally used to clarify the boundaries between adjacent components. As such, unless otherwise noted, the presence or absence of cross-hatching or shading does not convey or indicate any preference or requirement for a particular material, material property, size, proportion, commonality between the illustrated components and/or any other characteristic, attribute, property, etc., of a component. Further, in the drawings, the size and relative sizes of components may be exaggerated for clarity and/or descriptive purposes. While example embodiments may be practiced differently, the specific process sequence may be performed in a different order than that described. For example, two processes described consecutively may be performed substantially simultaneously or in reverse order to that described. In addition, like reference numerals denote like parts.
When an element is referred to as being "on", "connected to" or "coupled to" another element, it can be directly on, connected or coupled to the other element, or intervening elements may be present. However, when an element is referred to as being "directly on," "directly connected to" or "directly coupled to" another element, there are no intervening elements present. For purposes of this disclosure, the term "connected" may refer to a physical connection, an electrical connection, and so on, with or without intermediate components.
The terminology used herein is for the purpose of describing particular embodiments and is not intended to be limiting. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. Furthermore, when the terms "comprises" and/or "comprising" and variations thereof are used in this specification, they state the presence of the recited features, integers, steps, operations, elements, components and/or groups thereof, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. It is also noted that, as used herein, the terms "substantially," "about," and similar terms are used as terms of approximation rather than of degree, and account for the inherent deviations in measured, calculated and/or provided values that would be recognized by one of ordinary skill in the art.
In the related art, electronic image reporting systems for CBCT have limited functionality. Their artificial intelligence models use two-dimensional images (e.g., panoramic or lateral images) for lesion identification and analysis to generate the corresponding medical image report. Moreover, the content of such an image report is limited and lacks high-definition images of lesion or suspected-lesion sites, so it is difficult for a doctor to verify whether the lesion images in the report are accurate, and the doctor often has to modify or reconstruct them manually, which is tedious, time-consuming and labor-intensive. Furthermore, a DICOM front end and cloud services are needed to read the images. In addition, the artificial intelligence model that identifies and classifies lesions cannot self-iterate or self-learn, and updating it is a cumbersome and time-consuming process. In view of this, the present disclosure provides the following image report generation method, apparatus, device, storage medium, and system.
The present disclosure is applicable to various medical scenarios, such as oral and maxillofacial imaging.
Hereinafter, a detailed description will be given of a specific embodiment of the present disclosure with reference to fig. 1 to 8.
Fig. 1 is a flow chart illustrating an image report generation method according to some embodiments of the present disclosure. Referring to fig. 1, the image report generation method S10 of the present disclosure may include:
step S12, acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of the first object from the CBCT scanner;
the first object is the projection object of CBCT. For example, but not limited to, the oral cavity, the skull, the ear, nose and throat, the tooth body, etc. may be the body part that requires medical imaging.
The CBCT host performs three-dimensional reconstruction, according to the reconstruction geometry parameters, on the two-dimensional projection data obtained from multiple digital projections (for example, 180 to 360 projections) around the first object, yielding three-dimensional volume data, which is rendered to form a visualized three-dimensional image of the first object. In step S12, the CBCT host may transmit the unrendered three-dimensional volume data, the acquired two-dimensional projection data and the corresponding reconstruction geometry parameters to the cloud server, directly or indirectly (e.g., through the cloud service adapter 1400).
Here, the two-dimensional projection data from the CBCT scanner may include, but is not limited to, 360° two-dimensional projection data of the first object. The two-dimensional projection data may be, but is not limited to, in a RAW image data format. The three-dimensional volume data may be, but is not limited to, Digital Imaging and Communications in Medicine (DICOM) data or data in another medical image format.
The reconstruction geometry parameters are the geometric parameters of the CBCT scanning trajectory; they describe the geometric relationships among the radiation source, the detector, the rotation axis and the imaging region in the CBCT scanner, and the CBCT host performs three-dimensional reconstruction based on them.
Fig. 2 shows the spatial geometry represented by the reconstruction geometry parameters. In some embodiments, referring to fig. 2, the reconstruction geometry parameters may include, but are not limited to, the source-to-detector distance (SID), the source-to-axis distance (SAD), the detector horizontal offset (uOffset), the detector vertical offset (vOffset), the source horizontal offset (ySrcOffset) and the source vertical offset (zSrcOffset). The rectangular coordinate system xyz in fig. 2 is the world coordinate system of the CBCT scanner, and uv is its detector pixel coordinate system.
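For illustration only, these parameters can be carried in a small structure like the following Python sketch; the field names, units and numeric values are assumptions rather than part of the disclosure, and the magnification property simply restates the usual SID/SAD relationship for an object on the rotation axis.

```python
from dataclasses import dataclass

@dataclass
class ReconGeometry:
    """CBCT scan-trajectory geometry (hypothetical field names; units: mm)."""
    sid: float           # source-to-detector distance (SID)
    sad: float           # source-to-axis distance (SAD)
    u_offset: float      # detector horizontal offset (uOffset)
    v_offset: float      # detector vertical offset (vOffset)
    y_src_offset: float  # source horizontal offset (ySrcOffset)
    z_src_offset: float  # source vertical offset (zSrcOffset)

    @property
    def magnification(self) -> float:
        # An object on the rotation axis is magnified by SID/SAD at the detector.
        return self.sid / self.sad

geom = ReconGeometry(sid=700.0, sad=450.0, u_offset=1.2,
                     v_offset=-0.8, y_src_offset=0.0, z_src_offset=0.0)
print(round(geom.magnification, 2))  # 1.56 with these illustrative values
```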
In some embodiments, the CBCT host may upload the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object to the cloud server using lossless compression, data desensitization and similar techniques, which protects user privacy without distorting the data while also reducing the required transmission bandwidth, improving data utilization and lowering hardware cost.
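A minimal sketch of such an upload step is shown below; the tag names and the use of gzip plus a SHA-256 pseudonym are illustrative assumptions about how lossless compression and data desensitization could be combined, not the pipeline specified by the disclosure.

```python
import gzip, hashlib, json

def desensitize_and_pack(tags: dict, volume_bytes: bytes) -> bytes:
    """Strip direct identifiers, then losslessly compress header + volume."""
    safe = {k: v for k, v in tags.items()
            if k not in ("PatientName", "PatientID", "PatientBirthDate")}
    # One-way pseudonym so records stay linkable without exposing identity.
    safe["PatientHash"] = hashlib.sha256(
        str(tags.get("PatientID", "")).encode()).hexdigest()
    header = json.dumps(safe).encode()
    payload = len(header).to_bytes(4, "big") + header + volume_bytes
    return gzip.compress(payload)  # gzip is lossless: bit-identical on decompress
```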
Step S14, performing identification and classification of regions of interest according to the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object;
the region of interest is a corresponding region of the automatically identified lesion site or suspected lesion site. In some embodiments, the region of interest information may include, but is not limited to, a location of the region of interest (e.g., a location of a center point of the region of interest in a predetermined world coordinate system, which may be a world coordinate system with an origin at a preset point of a human body part such as a skull, a face, etc.), a size (e.g., length, width, high-level parameters of the region of interest in the predetermined world coordinate system), a normal vector, a name (e.g., corresponding lesion name), an attribute (e.g., lesion category, lesion description), and the like. Here, the normal vector may indicate a tangent plane to which the region of interest corresponds.
In some embodiments, the identification and classification of regions of interest may be performed by an artificial intelligence model. Illustratively, the artificial intelligence model may include a convolutional neural network and a multi-label classification network connected in sequence, the convolutional neural network implementing the identification of regions of interest and the multi-label classification network implementing their classification. The artificial intelligence model may also be implemented by other types of machine learning models and is not limited to convolutional neural networks.
Fig. 3 shows a schematic diagram of performing region-of-interest identification and classification with an artificial intelligence model. The reconstruction geometry parameters, the two-dimensional projection data and the three-dimensional volume data of the first object are processed by the convolutional neural network to obtain several visual features of the first object; each visual feature may include the position, size and normal vector of a region of interest (e.g., a lesion site or suspected lesion site of the first object). The multi-label classification network classifies the visual features (i.e., the regions of interest) produced by the convolutional neural network and generates label information for each region of interest; the label information may include an identifier (e.g., lesion name or lesion number) and attributes (e.g., lesion category and lesion description) of the region of interest.
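The sketch below shows one way such a two-stage model could be wired up in PyTorch; the layer sizes, the fixed ROI count and the label count are all assumptions for illustration, not the network of the disclosure. The sigmoid output is the usual multi-label idiom: each label probability is independent.

```python
import torch
import torch.nn as nn

class RoiModel(nn.Module):
    """Illustrative two-stage model: a 3D CNN proposes ROIs (position, size,
    slice normal); a multi-label head classifies each ROI."""
    def __init__(self, n_labels: int = 16, max_rois: int = 8):
        super().__init__()
        self.max_rois = max_rois
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),   # -> (B, 32)
        )
        self.roi_head = nn.Linear(32, max_rois * 9)  # 9 = center(3)+size(3)+normal(3)
        self.cls_head = nn.Linear(32 + 9, n_labels)  # multi-label logits per ROI

    def forward(self, volume: torch.Tensor):
        feat = self.backbone(volume)                           # (B, 32)
        rois = self.roi_head(feat).view(-1, self.max_rois, 9)  # (B, R, 9)
        ctx = feat.unsqueeze(1).expand(-1, self.max_rois, -1)  # (B, R, 32)
        labels = torch.sigmoid(self.cls_head(torch.cat([ctx, rois], dim=-1)))
        return rois, labels  # per-ROI geometry and independent label probabilities

rois, labels = RoiModel()(torch.randn(1, 1, 64, 64, 64))  # toy 64^3 volume
```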
In a specific application, the artificial intelligence model can be trained on manually labeled samples, or on previously obtained confirmed image reports and the corresponding raw data (i.e., three-dimensional volume data, two-dimensional projection data and reconstruction geometry parameters). For example, referring to fig. 3, the artificial intelligence model may be self-iteratively updated using confirmed image reports, continuously improving its accuracy during use and thereby improving the accuracy of lesion identification and classification.
Referring to the example of fig. 4 below, in some embodiments, the image report generation method of the present disclosure may further include:
step S13, updating the parameters of the artificial intelligence model according to the confirmed image report, the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, so as to improve the accuracy of the artificial intelligence model. In this way the model can self-learn and self-iterate, continuously strengthening its stability and accuracy while image reports are generated automatically.
In one embodiment, step S13 may include:
step A1, forming a new sample for the artificial intelligence model from the confirmed image report, the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, and adding the new sample to the sample library of the artificial intelligence model;
step A2, retraining the artificial intelligence model with the updated sample library to update its parameters, thereby improving its accuracy in identifying and classifying lesions.
Here, the artificial intelligence model can be retrained on the updated sample library with a few-shot learning method or a similar approach, so that its parameters are updated and its accuracy keeps improving; the computational complexity is low and the data volume small, which reduces hardware cost.
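A sketch of steps A1 and A2 follows. The dictionary keys, the recent-sample batch size and the fine-tuning hyperparameters are illustrative assumptions; `fine_tune` stands in for whatever few-shot training routine is used.

```python
def self_iterate(model, sample_library, confirmed_report, raw_data, fine_tune):
    """Fold a confirmed report back into the sample library, then fine-tune."""
    # A1: new sample = confirmed report + its raw data.
    sample_library.append({
        "volume": raw_data["volume"],            # three-dimensional volume data
        "projections": raw_data["projections"],  # two-dimensional projection data
        "geometry": raw_data["geometry"],        # reconstruction geometry parameters
        "labels": confirmed_report["regions"],   # physician-verified ROI annotations
    })
    # A2: retrain on a small recent batch rather than from scratch, keeping
    # computation and data volume low while updating the parameters.
    fine_tune(model, sample_library[-16:], epochs=2, lr=1e-5)
    return model
```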
Step S16, performing local slice reconstruction according to the region-of-interest information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a first report image of the first object;
In some embodiments, the process of local slice reconstruction may include: reconstructing a two-dimensional slice image according to the position (e.g., center-point position), size and normal vector of a region of interest (i.e., a lesion region or suspected lesion region) of the first object together with the two-dimensional projection data, to obtain the first report image of the first object. Because a local two-dimensional image is reconstructed instead of a three-dimensional image, the resource cost of reconstruction is greatly reduced, and this low cost in turn makes fast, efficient, high-definition reconstruction of the local region image possible.
In some embodiments, local slice reconstruction may be performed with the FDK filtered back-projection algorithm. That is, two-dimensional image reconstruction may be performed with the FDK filtered back-projection algorithm using the reconstruction geometry parameters and the projection data local to the slice, to obtain the first report image of the first object.
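The following heavily simplified sketch conveys the idea: ramp-filter every projection once, then back-project only onto the points of the requested slice, so the cost scales with one slice rather than a whole volume. It assumes an ideal circular orbit and square detector pixels, omits the cosine pre-weighting of a full FDK implementation, and is not the reconstruction code of the disclosure.

```python
import numpy as np

def fdk_local_slice(projections, angles, sid, sad, pitch, slice_pts):
    """projections: (n_views, n_v, n_u); slice_pts: (H, W, 3) world coords."""
    n_views, n_v, n_u = projections.shape
    # 1) Ramp filter along detector rows, in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_u))
    filtered = np.real(np.fft.ifft(np.fft.fft(projections, axis=2) * ramp, axis=2))

    out = np.zeros(slice_pts.shape[:2])
    for proj, th in zip(filtered, angles):
        # 2) Rotate the slice points into this view's source-detector frame.
        x = slice_pts[..., 0] * np.cos(th) + slice_pts[..., 1] * np.sin(th)
        y = -slice_pts[..., 0] * np.sin(th) + slice_pts[..., 1] * np.cos(th)
        mag = sid / (sad + y)  # cone-beam magnification at each point
        u = np.clip((x * mag / pitch + n_u / 2).astype(int), 0, n_u - 1)
        v = np.clip((slice_pts[..., 2] * mag / pitch + n_v / 2).astype(int), 0, n_v - 1)
        out += proj[v, u] * mag ** 2  # 3) Accumulate with distance weighting.
    return out / n_views
```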
In some embodiments, the resolution of a report image obtained via local slice reconstruction (i.e., the first or second report image herein) is higher than the resolution of slice images obtained directly from the three-dimensional volume data.
Conventionally, a visualized three-dimensional image is obtained by rendering the three-dimensional volume data, and the required two-dimensional slice image is then obtained by directly slicing that three-dimensional image at some orientation or position, so the resulting slice image has the same resolution as the three-dimensional image. Because reconstruction and rendering of three-dimensional volume data consume considerable resources and device resources are limited, the resolution of the three-dimensional image is often low, and so is that of the derived two-dimensional slice images; their insufficient sharpness often hampers diagnosis. In the present disclosure, local slice reconstruction enables high-definition slice imaging of an unclear lesion or suspected-lesion site, raising the resolution of the lesion slice image above that of the three-dimensional image. Using such slice images as the report images effectively improves the sharpness of lesion-related images in the image report, making it easier for a doctor (i.e., the first user below) to diagnose the lesion and for a patient (i.e., the second user below) to view the lesion details more clearly and intuitively. Experiments show that local slice reconstruction can improve the resolution of the lesion or suspected-lesion image in the image report from 0.25 mm to 0.125 mm or 0.06 mm.
Step S18, generating the image report to be confirmed of the first object, the report containing the first report image, so that the image report to be confirmed can be presented to the first user for the first user to review.
In some embodiments, taking the image report generation system shown in fig. 8 as an example, after the cloud server generates the image report to be confirmed of the first object, it pushes the report to the doctor (i.e., the first user) through the image report review unit. The doctor can access the image report review unit from an electronic device such as a computer or mobile terminal via an application or browser, and the report is displayed to the doctor in the browser or application interface. The doctor can thus review and revise the image report in real time.
In some embodiments, generation and display of the image report to be confirmed can also be performed by the same electronic device. The disclosure is not limited in this respect.
In some embodiments, the region-of-interest information may also include the category (i.e., lesion category) and attributes (e.g., lesion description, lesion name, etc.) of the region of interest. Referring to fig. 4, the image report generation method of the present disclosure may further include: step S15, generating a first text description according to the category and attributes of the region of interest, and adding the first text description to the image report to be confirmed of the first object. The automatically generated image report to be confirmed thus contains not only report images but also the corresponding text descriptions (e.g., a diagnostic explanation of a lesion or suspected lesion); the content is rich, no manual input from the doctor is needed, and the doctor's workload is further reduced.
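A toy sketch of step S15's caption generation is given below; the template wording and argument names are assumptions.

```python
def first_text_description(name: str, category: str, description: str,
                           center_mm, size_mm) -> str:
    """Compose the auto-generated caption from the ROI's category/attributes."""
    return (f"{name} ({category}): {description} "
            f"Location in world frame (mm): {center_mm}; "
            f"extent L x W x H (mm): {size_mm}.")

print(first_text_description("Periapical lesion", "inflammatory",
                              "Low-density area at the root apex.",
                              (12.4, -3.1, 27.8), (6.0, 5.5, 4.8)))
```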
In some embodiments, the image report review unit may be implemented as a first Web Service deployed on the cloud server. It may provide a first web page to the first terminal, display the image report to be confirmed on that page, and make the page viewable only by a first user holding image-report review authority. The first user can access the first web page from the first terminal through a browser and, after logging in and authenticating, review and revise the image report on that page.
In some embodiments, the image report generation method of the present disclosure may further include: providing a multi-planar reconstruction (MPR) interface, which may be used to present to the first user and/or the second user the visualized three-dimensional image and/or the multi-planar slice images corresponding to it.
In some embodiments, the image report review unit may have a multi-planar reconstruction (MPR) interface that presents the visualized three-dimensional image and the axial, sagittal and coronal slice images of the first object. The first user can access the first web page through a browser, view it after logging in and authenticating, and jump to the MPR interface by clicking an MPR button on the page or by other means; there the user can diagnose by viewing the visualized three-dimensional image and the axial, sagittal and coronal slice images of the first object, and can revise the report images through selection operations on the MPR interface.
Here, the axial, sagittal and coronal slice images can be sliced directly from the visualized three-dimensional image. Obtaining them directly from the three-dimensional image reduces device load and performance requirements, so operations such as the doctor's real-time dragging and viewing can be served without stuttering.
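As a sketch, the three orthogonal MPR views really are plain array slices of the already-reconstructed volume (the z, y, x axis convention below is an assumption):

```python
import numpy as np

def mpr_slices(volume: np.ndarray, i: int, j: int, k: int):
    """Return the axial, coronal and sagittal slices through voxel (i, j, k)."""
    axial    = volume[k, :, :]  # plane perpendicular to the body's long axis
    coronal  = volume[:, j, :]  # front-to-back plane
    sagittal = volume[:, :, i]  # left-to-right plane
    return axial, coronal, sagittal

vol = np.random.rand(256, 256, 256)  # toy volume
axial, coronal, sagittal = mpr_slices(vol, 128, 128, 128)
```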
Step S11, in response to a confirmation message generated by the first user's confirmation operation on the image report to be confirmed of the first object, marking that report as the confirmed image report of the first object, so that the confirmed image report can be shown to the second user. The doctor can thus produce the confirmed image report with an operation as simple as clicking a button, which further simplifies the doctor's work; at the same time, the confirmed report can be pushed to the patient online with the accuracy and reliability of its content assured, effectively simplifying doctor-patient communication and improving its efficiency.
In some embodiments, information such as lesion position, size and normal vector may be recorded in the report images of the confirmed image report and/or the image report to be confirmed; that is, the corresponding region-of-interest information is recorded in the first report image, and/or the corresponding selected-region information is recorded in the second report image. The patient or doctor can then jump to the display interface of a report image by clicking on it in the confirmed report, which lets the doctor quickly re-examine the image report and the patient quickly view the condition.
Referring to fig. 4, the image report generation method S40 may further include, in addition to steps S12 to S13 above, the following steps:
step S42, rendering the three-dimensional volume data of the first object to obtain a visualized three-dimensional image of the first object, so that the visualized three-dimensional image can be shown to the first user and the first user can review the image report to be confirmed of the first object with reference to it.
Because three-dimensional reconstruction is resource-intensive and the CBCT scanner already holds the reconstructed three-dimensional volume data (i.e., DICOM data), the visualized three-dimensional image of the first object is obtained by directly rendering the DICOM data uploaded by the CBCT scanner; the three-dimensional reconstruction of the first object need not be repeated, which reduces resource consumption, improves data utilization and increases efficiency.
In this way, the doctor can quickly locate the lesion with reference to the visualized three-dimensional image and modify or confirm the image report to be confirmed, which further improves diagnostic efficiency, simplifies the doctor's operations, and can improve diagnostic accuracy.
The execution order of step S42 relative to steps S14 to S13 above is not limited; step S42 may be executed at the same time as, before, or after steps S14 to S13.
Referring to fig. 4, the image report generating method S40 may further include the following steps:
step S44, performing local slice reconstruction according to the selected-region information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a second report image of the first object, where the selected-region information of the first object is generated in response to an image modification operation and/or an image reconstruction operation by the first user.
Specifically, the doctor (i.e., the first user) may select, on the MPR interface, the slice to be reconstructed and a local region of that slice; the selected-region information of the first object is obtained by detecting the doctor's selection operation on the MPR interface. For example, the first terminal may detect the doctor's selection on the MPR interface, acquire the selected-region information of the first object, and transmit it to the cloud server through the image report review unit.
In some embodiments, the selected-region information may include, but is not limited to, a selected position, a region size and a normal vector. The selected position may be the position, in a predetermined world coordinate system (for example, one whose origin is a preset point of a body part such as the skull or face), of the point or region center the doctor selects on the MPR interface; the region size may be the length, width and height parameters set by the doctor on the MPR interface or configured by default; and the normal vector is that of the slice plane containing the selected point.
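A sketch of packaging such a crosshair selection is shown below; the default extent and field names are assumptions that mirror the region-of-interest record above.

```python
import numpy as np

def selected_region_info(picked_point, slice_normal, size_mm=(30.0, 30.0, 30.0)):
    """Turn an MPR crosshair pick into the record driving local slice reconstruction."""
    n = np.asarray(slice_normal, dtype=float)
    return {
        "center_mm": tuple(picked_point),        # selected point in the world frame
        "size_mm": size_mm,                      # doctor-set or default L/W/H
        "normal": tuple(n / np.linalg.norm(n)),  # unit normal of the picked slice
    }
```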
Step S46, adjusting the image report to be confirmed of the first object according to the second report image, so that the image report to be confirmed contains the second report image;
Adjusting the image report to be confirmed of the first object may include, but is not limited to, the following two modes (a minimal sketch of both follows the list):
1) directly replacing, with the second report image, the first report image selected by the first user in the image report to be confirmed, so as to correct errors in the first report image;
2) adding the second report image to the image report to be confirmed, making the report more comprehensive and clear, so that the lesion condition is shown clearly through the second report image together with the first.
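Both modes reduce to a one-line list operation if the pending report is modeled, purely for illustration, as a dict holding a list of images:

```python
def adjust_pending_report(report: dict, second_image, replace_index=None):
    """Apply adjustment mode 1 (replace) or mode 2 (append) from the list above."""
    if replace_index is not None:
        report["images"][replace_index] = second_image  # mode 1: correct in place
    else:
        report["images"].append(second_image)           # mode 2: add alongside
    return report
```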
Thus, when the doctor reviews and revises the image report, if some images are unclear or inaccurately positioned in a way that affects diagnosis, or the report omits images containing a lesion, local slice reconstruction of one or more regions in the manner of steps S44 to S46 yields local slice images of higher resolution and sharpness, which replace the possibly erroneous images in, or are added to, the image report to be confirmed, so that a higher-quality image report is produced.
Fig. 5 shows the MPR interface before the doctor's review. The doctor accesses and logs in to the first web page through a browser and views the visualized three-dimensional image and the axial, sagittal and coronal slice images of the teeth on the page's MPR interface. If the doctor finds that a dental lesion site is unclear in a report image, a suitable lesion region can be selected by manipulating the crosshair positioning lines at the upper left of the MPR interface. Once the region is selected, the cloud server automatically performs local slice reconstruction on it; the report image in the reviewed and revised image report to be confirmed then shows the dental lesion more accurately, and the MPR interface likewise shows its cross-section more clearly. Fig. 6 shows the MPR interface after the doctor's review; compared with fig. 5, the cross-section in fig. 6 shows the tooth more accurately and clearly. The doctor can therefore not only view the visualized three-dimensional image and the axial, sagittal and coronal slice images of the first object on the MPR interface, but also complete the modification of a report image by reselecting the lesion region or slice with a simple operation such as moving the crosshair positioning lines.
Referring to fig. 4, the image report generating method S40 may further include the following steps:
step S48, adjusting the image report to be confirmed of the first object according to a second text description of the first object, so that the report contains the second text description; the second text description of the first object is generated in response to a text editing operation by the first user.
Specifically, the doctor (i.e., the first user) may enter or edit the text description of a report image on the image report review interface; the second text description of the first object is obtained by detecting the doctor's text editing operation there. For example, the first terminal may detect the editing operation, acquire the second text description, and transmit it to the cloud server through the image report review unit.
In some embodiments, the second text description may include, but is not limited to, information such as a diagnostic description of the lesion and the lesion category.
In some embodiments, adjusting the image report to be confirmed of the first object according to the second text description may include, but is not limited to:
1) directly replacing, with the second text description, the first text description selected by the first user in the image report to be confirmed, so as to correct errors in the first text description;
2) adding the second text description to the image report to be confirmed, making the report more comprehensive and clear, so that the lesion condition is explained more clearly through the second text description together with the first.
Thus, when the physician reviews and revises the image report, if the text description of some images is unclear or inaccurate, or missing altogether, the text descriptions in the report can be modified in the manner of step S48, producing a higher-quality image report.
Referring to fig. 4, the image report generating method S40 may further include the following steps:
step S41, in response to a viewing operation by the second user, displaying the confirmed image report and/or the visualized three-dimensional image of the first object to the second user.
In some embodiments, taking the image report generation system shown in fig. 8 as an example, after the cloud server generates the confirmed image report of the first object, the image report distribution unit provides it to the patient (i.e., the second user). The patient can access the image report distribution unit from an electronic device such as a computer or mobile terminal via an application or browser; when the patient performs a viewing operation in the interface, the application or browser sends a request to the cloud server's distribution system, which pushes the confirmed image report to the browser or application interface, where the patient can view it. The patient can thus view the image report on all kinds of electronic devices in real time, conveniently and quickly. The viewing operation and display of the confirmed image report can also be realized in other ways, and the disclosure is not limited in this respect.
In some embodiments, the image report distribution unit may be implemented as a second Web service deployed on the cloud server. It may provide a second web page to the second terminal and present the confirmed image report, through that page, to the second user holding viewing authority. For example, the second user may access the second web page from the second terminal through a browser or application and view it after logging in and authenticating; the page supports operations such as film printing and image report browsing. The second terminal may be, but is not limited to, a portable electronic device, a mobile device, or another type of terminal.
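A minimal sketch of such a distribution endpoint follows, using Flask as a stand-in for the second Web service; the route, the in-memory storage and the omitted authentication are all assumptions.

```python
from flask import Flask, jsonify

app = Flask(__name__)
CONFIRMED_REPORTS = {}  # report_id -> confirmed image report (in-memory stand-in)

@app.route("/reports/<report_id>")
def get_report(report_id):
    # A real deployment would verify the caller's viewing authority here.
    report = CONFIRMED_REPORTS.get(report_id)
    if report is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(report)  # the patient's browser or app renders this

if __name__ == "__main__":
    app.run()  # serve the distribution API on the cloud server
```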
In some embodiments, the image report distribution unit may also have an MPR interface for presenting the visualized three-dimensional image and the axial, sagittal and coronal slice images of the first object. After logging in to the second web page from the second terminal, the second user can jump to the MPR interface by clicking an MPR button on the page or by other means to view those images.
At present, DICOM remains the standard medical image data format, but the format is complex and requires dedicated software to parse and view, so it is not very accessible to patients; most cloud service systems focus on backing up hospital data and serving doctors, support only DICOM as the data format, and make browsing a medical image report very cumbersome for an ordinary patient. In the present disclosure, operations such as image report generation and rendering of the visualized three-dimensional image are decoupled from the report viewing operation, so all kinds of mobile devices can view the image report and related images through an application or browser; because the visualized three-dimensional image is rendered on the server side, lesion-related images can be previewed smoothly even on a mobile device or portable electronic device (e.g., a laptop) with a weak GPU. Moreover, with the high transmission rate and low latency of 5G, the browsing experience on a mobile device will be indistinguishable from that on a stationary one. Providing the image report and the visualized three-dimensional image to the patient through the image report distribution unit therefore makes it convenient for the patient to consult the medical image report.
Fig. 7 is a block diagram schematically illustrating the configuration of an image report generation device implemented in hardware using a processing system, according to an embodiment of the present disclosure.
The apparatus may include corresponding means for performing each or several of the steps of the flowcharts described above. Thus, each step or several steps in the above-described flow charts may be performed by a respective module, and the apparatus may comprise one or more of these modules. The modules may be one or more hardware modules specifically configured to perform the respective steps, or implemented by a processor configured to perform the respective steps, or stored within a computer-readable medium for implementation by a processor, or by some combination.
The hardware architecture may be implemented with a bus architecture. The bus architecture may include any number of interconnecting buses and bridges depending on the specific application of the hardware and the overall design constraints. The bus 800 couples various circuits including the one or more processors 900, memory 1000, and/or hardware modules together. The bus 800 may also connect various other circuits 1100, such as peripherals, voltage regulators, power management circuits, external antennas, and the like.
The bus 800 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only a single connection line is shown in the figure, but this does not mean there is only one bus or one type of bus.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present disclosure includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the implementations of the present disclosure. The processor performs the various methods and processes described above. For example, method embodiments in the present disclosure may be implemented as a software program tangibly embodied in a machine-readable medium, such as a memory. In some embodiments, some or all of the software program may be loaded and/or installed via memory and/or a communication interface. When the software program is loaded into memory and executed by a processor, one or more steps of the method described above may be performed. Alternatively, in other embodiments, the processor may be configured to perform one of the methods described above by any other suitable means (e.g., by means of firmware).
The logic and/or steps represented in the flowcharts or otherwise described herein may be embodied in any readable storage medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions.
For the purposes of this description, a "readable storage medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The readable storage medium may even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in the memory.
It should be understood that portions of the present disclosure may be implemented in hardware, software, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps of the method implementing the above embodiments may be implemented by hardware that is instructed to be associated with a program, which may be stored in a readable storage medium, and which, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present disclosure may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a separate product, may also be stored in a readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
As shown in fig. 7, an image report generation apparatus 700 according to an embodiment of the present disclosure may include:
an acquisition unit 702 for acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of the first object from the CBCT scanner;
an identification and classification unit 704 for performing identification and classification of regions of interest according to the three-dimensional volume data, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object;
a local reconstruction unit 706 for performing local slice reconstruction according to the region-of-interest information, the reconstruction geometry parameters and the two-dimensional projection data of the first object, to obtain a first report image of the first object;
a report generation unit 708 for generating the image report to be confirmed of the first object, the report containing the first report image;
and an image report review unit 710 for providing the image report to be confirmed of the first object to the first user, so that the first user can review it.
In some embodiments, the image report review unit 710 may be further configured to mark the image report to be confirmed of the first object as the confirmed image report of the first object in response to a confirmation message generated by the first user's confirmation operation, so that the confirmed image report can be shown to the second user.
In some embodiments, the image report generating device 700 may further include: a rendering unit 712, configured to render a visualized three-dimensional image of the first object from the three-dimensional volume data of the first object. The image report auditing unit 710 may be further configured to provide the visualized three-dimensional image of the first object to the first user, so that the first user can audit the image report to be confirmed of the first object in combination with the visualized three-dimensional image.
In some embodiments, the local reconstruction unit 706 is further configured to perform local tangent plane reconstruction according to the selected region information, the reconstruction geometric parameters, and the two-dimensional projection data of the first object, to obtain a second report illustration of the first object; the report generating unit 708 may be further configured to adjust the image report to be confirmed of the first object according to the second report illustration, so that the report includes the second report illustration; wherein the selected region information of the first object is generated in response to an illustration modification operation and/or an illustration reconstruction operation of the first user.
In some embodiments, the illustration modification operation and/or the illustration reconstruction operation are performed on the presentation interface of the visualized three-dimensional image of the first object.
In some embodiments, the region-of-interest information includes a category and an attribute of the region of interest; the report generating unit 708 may be further configured to generate a first textual description according to the category and the attribute of the region of interest, and add the first textual description to the image report to be confirmed of the first object.
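As an illustration only, the first textual description could be produced from the category and attributes of a region of interest by a simple template scheme. The Python sketch below is not part of the disclosure; the category names, attribute keys, and templates are hypothetical.

# Hypothetical template-based generation of the first textual description.
# Category names, attribute keys, and templates are illustrative assumptions.
TEMPLATES = {
    "caries": "A suspected carious lesion is observed at tooth {tooth}, "
              "approximate extent {extent_mm:.1f} mm, confidence {score:.0%}.",
    "periapical_lesion": "A periapical low-density area is observed at tooth "
                         "{tooth}, diameter {extent_mm:.1f} mm, confidence {score:.0%}.",
}

def first_textual_description(category: str, attributes: dict) -> str:
    """Render one sentence for a region of interest; fall back to a generic line."""
    template = TEMPLATES.get(category)
    if template is None:
        return f"A region of interest of category '{category}' is observed."
    return template.format(**attributes)

# Example: one region of interest produced by the identification/classification step.
print(first_textual_description("caries", {"tooth": "36", "extent_mm": 2.4, "score": 0.91}))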
In some embodiments, the report generating unit 708 is further configured to adjust the to-be-confirmed image report of the first object according to the second textual description of the first object, so that the to-be-confirmed image report of the first object includes the second textual description; wherein the second textual description of the first object is generated in response to a textual editing operation by the first user.
In some embodiments, the image report generating device 700 may further include: an image report distribution unit 714, configured to provide the confirmed image report and/or the visualized three-dimensional image of the first object to the second user in response to a viewing operation of the second user.
In some embodiments, the image report distribution unit 714 may be further configured to provide a multi-planar reconstruction (MPR) interface, which presents the visualized three-dimensional image and/or multi-planar slice images of the corresponding visualized three-dimensional image to the first user and/or the second user.
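For illustration, the multi-planar slice images of an MPR interface can be obtained by indexing the three orthogonal planes of the volume. Below is a minimal NumPy sketch, assuming the volume is a 3-D array with (axial, coronal, sagittal) axis order; that order is an assumption, and real CBCT data may be oriented differently.

import numpy as np

def mpr_slices(volume: np.ndarray, i: int, j: int, k: int):
    """Return the three orthogonal multi-planar slices through voxel (i, j, k)."""
    axial    = volume[i, :, :]
    coronal  = volume[:, j, :]
    sagittal = volume[:, :, k]
    return axial, coronal, sagittal

# Example: centre slices of a synthetic 256^3 volume.
vol = np.zeros((256, 256, 256), dtype=np.float32)
ax, co, sa = mpr_slices(vol, 128, 128, 128)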
In some embodiments, the image report generating device 700 may further include: a model training unit 716. The identification and classification unit 704 is specifically configured to identify and classify regions of interest through an artificial intelligence model; the model training unit 716 may be configured to update the parameters of the artificial intelligence model according to the confirmed image report, the three-dimensional volume data, the reconstruction geometric parameters, and the two-dimensional projection data of the first object, so as to improve the accuracy of the artificial intelligence model.
In some embodiments, the artificial intelligence model includes a convolutional neural network and a multi-label classification network connected in sequence; the convolutional neural network implements region-of-interest identification, and the multi-label classification network implements region-of-interest classification.
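The disclosure does not fix the layer configuration, so the following PyTorch sketch is only one plausible reading of "a convolutional neural network followed by a multi-label classification network": a small 3-D convolutional backbone feeding a linear head with per-label sigmoid outputs, so that several findings can coexist in one region. All layer sizes and the label count are placeholders.

import torch
import torch.nn as nn

class RoiDetector(nn.Module):
    """Illustrative CNN backbone plus multi-label classification head."""

    def __init__(self, num_labels: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(          # convolutional neural network
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(32, num_labels)  # multi-label head

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid rather than softmax: each label is scored independently.
        return torch.sigmoid(self.classifier(self.backbone(x)))

# Example: one 64^3 volume patch around a candidate region of interest.
model = RoiDetector(num_labels=8)
probs = model(torch.randn(1, 1, 64, 64, 64))   # per-label probabilities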
In some embodiments, the local reconstruction unit 706 is specifically configured to perform the local tangent plane reconstruction using the FDK filtered back-projection algorithm.
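The FDK algorithm itself is standard: detector weighting, row-wise ramp filtering of every projection, and weighted cone-beam backprojection. The sketch below shows the ramp-filtering step and reduces the local backprojection to a loop over the voxels of the selected region; geometry.project and geometry.weight are hypothetical stand-ins for the scanner's geometry mapping, not an API of this disclosure.

import numpy as np

def ramp_filter(projection: np.ndarray) -> np.ndarray:
    """Row-wise 1-D ramp filtering of one cone-beam projection (frequency domain)."""
    n = projection.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))           # |w| ramp kernel
    return np.real(np.fft.ifft(np.fft.fft(projection, axis=-1) * ramp, axis=-1))

def fdk_local(projections, angles, roi_voxels, geometry):
    """Sketch of local FDK: filter every projection, then backproject only
    onto the voxels of the selected region (hypothetical geometry object)."""
    volume = {v: 0.0 for v in roi_voxels}
    for proj, theta in zip(projections, angles):
        filtered = ramp_filter(proj)
        for v in roi_voxels:
            u, w = geometry.project(v, theta)   # voxel -> detector pixel (assumed)
            volume[v] += geometry.weight(v, theta) * filtered[u, w]
    return volume

Restricting the backprojection to the region of interest is what makes the reconstruction "local": only the voxels needed for the report illustration are computed, rather than the full volume.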
In some embodiments, the report generating unit 708 is further configured to record the region-of-interest information in the first report illustration in the image report to be confirmed; and/or to record the selected region information in the second report illustration in the image report to be confirmed.
In some embodiments, the image report auditing unit 710 may be further configured to record the region-of-interest information in the first report illustration in the confirmed image report; and/or to record the selected region information in the second report illustration in the confirmed image report.
In practical applications, the image report generation apparatus 700 and its respective units may be implemented by software, hardware, or a combination of both.
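Since the units may be implemented in software, the data flow through units 702 to 710 can be summarized as a plain Python sketch. Everything here is an assumption for illustration: the cbct.fetch() call, the callables standing in for units 704 to 708, and the ImageReport fields.

from dataclasses import dataclass, field

@dataclass
class ImageReport:
    """Minimal stand-in for the image report to be confirmed; fields are assumptions."""
    illustrations: list = field(default_factory=list)
    texts: list = field(default_factory=list)
    confirmed: bool = False

def generate_report(cbct, identify, reconstruct_local, make_text) -> ImageReport:
    """One pass through the pipeline of units 702-708, expressed as functions."""
    volume, geometry, projections = cbct.fetch()              # unit 702 (assumed API)
    rois = identify(volume, geometry, projections)            # unit 704
    report = ImageReport()
    for roi in rois:
        report.illustrations.append(
            reconstruct_local(roi, geometry, projections))    # unit 706
        report.texts.append(make_text(roi))                   # unit 708
    return report                                             # audited by unit 710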
As shown in fig. 8, the image report generation system according to an embodiment of the present disclosure may include: cloud server 1200, CBCT or CBCT host 1300;
the cloud server 1200 may include a data storage unit 1202, a report storage unit 1204, and the aforementioned image report generation apparatus 700;
the data storage unit 1202 may be configured to store three-dimensional volume data, reconstruction geometry parameters, and two-dimensional projection data of a first object from the CBCT or CBCT host 1300;
the report storage unit 1204 may be configured to store the to-be-confirmed image report, the confirmed image report, and/or the visual three-dimensional image of the first object obtained by the image report generating apparatus 700.
In some embodiments, the cloud server 1200 may further include a local reconstruction interface 1208 and a data rendering interface 1206. The identification and classification unit 704, the image report auditing unit 710, and the report storage unit 1204 may each call the local reconstruction unit 706 through the local reconstruction interface 1208; the image report auditing unit 710, the image report distribution unit 714, and/or the report storage unit 1204 may each call the rendering unit 712 through the data rendering interface 1206.
In some embodiments, the image report generation system may further include a cloud service adapter 1400, which may be used to transmit the three-dimensional volume data, the reconstruction geometric parameters, and the two-dimensional projection data of the first object from the CBCT or CBCT host 1300 to the cloud server 1200. That is, the cloud service adapter 1400 handles communication between the CBCT or CBCT host 1300 and the cloud server 1200, transmits the data of the CBCT or CBCT host 1300 to the cloud server 1200, and supports transmission of DICOM data, raw data, and the like. In a specific application, the cloud service adapter 1400 can communicate with a conventional imaging device (e.g., the CBCT or CBCT host 1300) using the DICOM protocol, and can also transmit other data through a dedicated data interface.
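A minimal sketch of the adapter's desensitize-compress-upload step in Python, using pydicom and requests. The endpoint URL and the list of fields to strip are assumptions, and a real adapter would also handle the DICOM network protocol, retries, and authentication.

import gzip
import io

import pydicom
import requests

UPLOAD_URL = "https://cloud.example.com/upload"   # placeholder endpoint

def desensitize_and_upload(dicom_path: str) -> None:
    """Strip identifying fields, compress losslessly, push to the data storage unit."""
    ds = pydicom.dcmread(dicom_path)
    for keyword in ("PatientName", "PatientBirthDate", "PatientAddress"):
        if keyword in ds:
            delattr(ds, keyword)                  # data desensitization
    buf = io.BytesIO()
    ds.save_as(buf)                               # re-serialize the DICOM dataset
    payload = gzip.compress(buf.getvalue())       # lossless compression
    requests.post(UPLOAD_URL, data=payload,
                  headers={"Content-Encoding": "gzip"}, timeout=30)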
In some embodiments, the image report auditing unit 710 may be a first Web service for enabling interaction between the image report generating device 700 and a first user (i.e., a doctor). For details of the first Web service, reference may be made to the related descriptions above, and details are not repeated.
In some embodiments, the image report distribution unit 714 may be a second Web service for enabling interaction between the image report generating device 700 and a second user (i.e., a patient). For details of the second Web service, reference may be made to the related descriptions above, which are not repeated.
In some embodiments, the data storage unit 1202 and the report storage unit 1204 may be different storage spaces in the same cloud data store; alternatively, they may be different cloud data stores.
In some embodiments, the identification and classification unit 704 may be, but is not limited to, an artificial intelligence model deployed on the cloud server 1200. For details of the artificial intelligence model, reference may be made to the related descriptions above, which are not repeated.
When the image report generation system is used, after the CBCT completes a scan, the patient-related data (including the DICOM data generated after three-dimensional reconstruction, the two-dimensional projection data, and the reconstruction geometric parameters) are losslessly compressed, desensitized, and uploaded through the cloud service adapter to the data storage unit. The identification and classification unit, implemented on an artificial intelligence model, scans and analyzes the data, intelligently screens lesions or suspected lesions, and generates an image report to be confirmed. This report is then stored in the report storage unit, and the image report auditing unit pushes it to a doctor for review and revision; after the doctor revises and confirms it, a confirmed image report is generated and delivered to the patient through the image report distribution unit, so that the patient can view an accurate medical image report. Meanwhile, the confirmed image report can be fed back to the identification and classification unit through, for example, a cloud service feedback interface for self-learning and iteration, continuously strengthening the stability and accuracy of the artificial intelligence model and improving the lesion recognition accuracy of the identification and classification unit.
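The self-learning feedback step can be pictured as one fine-tuning update in which the doctor-confirmed report labels serve as ground truth. Below is a minimal PyTorch sketch, assuming the sigmoid multi-label model sketched earlier; batching, learning-rate scheduling, and validation are omitted, and a production system would accumulate confirmed cases before updating.

import torch
import torch.nn as nn

def feedback_update(model: nn.Module, patch: torch.Tensor,
                    confirmed_labels: torch.Tensor, lr: float = 1e-4) -> float:
    """One self-learning step driven by a doctor-confirmed image report."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.BCELoss()              # matches the sigmoid multi-label head
    optimizer.zero_grad()
    loss = criterion(model(patch), confirmed_labels)
    loss.backward()
    optimizer.step()
    return loss.item()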
The image report generation system of the present disclosure combines automatic generation of artificial-intelligence medical image reports with cloud service technology and adds various functions that are convenient for patients and doctors. The artificial intelligence neural network can self-learn and self-iterate within the system; the presentation of medical images is decoupled from specific software and hardware; and functions for better displaying image results and data are added. This greatly facilitates patients and improves the working efficiency of doctors: doctors are relieved of the heavy work of writing and checking medical image reports, and patients can understand the lesion and their condition intuitively and clearly.
The present disclosure also provides an electronic device, including: a memory storing execution instructions; and a processor or other hardware module that executes the execution instructions stored in the memory, so that the processor or other hardware module performs the above image report generation method.
The disclosure also provides a readable storage medium, in which an execution instruction is stored, and the execution instruction is executed by a processor to implement the image report generation method.
In the description herein, references to the terms "one embodiment/implementation," "some embodiments/implementations," "an example," "a specific example," or "some examples" mean that a particular feature, structure, material, or characteristic described in connection with the embodiment/implementation or example is included in at least one embodiment/implementation or example of the present application. In this specification, such terms do not necessarily refer to the same embodiment/implementation or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments/implementations or examples. In addition, those skilled in the art may combine the different embodiments/implementations or examples and their features described in this specification, provided they do not conflict.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two or three, unless specifically limited otherwise.
It will be understood by those skilled in the art that the foregoing embodiments are merely intended to illustrate the disclosure clearly and are not intended to limit its scope. Other variations or modifications based on the foregoing disclosure may occur to those skilled in the art and still fall within the scope of the present disclosure.

Claims (10)

1. An image report generation method, comprising:
acquiring three-dimensional volume data, reconstruction geometric parameters and two-dimensional projection data of a first object from the CBCT;
according to the three-dimensional volume data, the reconstruction geometric parameters and the two-dimensional projection data of the first object, executing identification and classification of the region of interest to obtain region of interest information of the first object;
performing local tangent plane reconstruction according to the region-of-interest information of the first object, the reconstruction geometric parameters, and the two-dimensional projection data, to obtain a first report illustration of the first object;
and generating an image report to be confirmed of the first object, wherein the image report to be confirmed of the first object comprises the first report illustration, so as to show the image report to be confirmed of the first object to a first user, so that the first user can audit the image report to be confirmed of the first object.
2. The image report generation method according to claim 1, further comprising: in response to a confirmation message, generated by a confirmation operation of the first user, for the image report to be confirmed of the first object, marking the image report to be confirmed of the first object as a confirmed image report of the first object, so as to show the confirmed image report of the first object to a second user;
preferably, the method further comprises: rendering the three-dimensional volume data of the first object to obtain a visualized three-dimensional image of the first object, so as to show the visualized three-dimensional image of the first object to the first user, so that the first user can audit the image report to be confirmed of the first object in combination with the visualized three-dimensional image of the first object.
3. The image report generation method according to claim 1, further comprising:
performing local tangent plane reconstruction according to the selected region information of the first object, the reconstruction geometric parameters, and the two-dimensional projection data, to obtain a second report illustration of the first object;
adjusting the image report to be confirmed of the first object according to the second report illustration, so that the image report to be confirmed of the first object comprises the second report illustration;
wherein the selected region information of the first object is generated in response to an illustration modification operation and/or an illustration reconstruction operation of the first user;
preferably, the illustration modification operation and/or the illustration reconstruction operation are performed on a presentation interface of a visualized three-dimensional image of the first object.
4. The image report generation method according to claim 1,
the region-of-interest information comprises a category and an attribute of a region of interest;
the generating of the image report to be confirmed of the first object comprises: generating a first textual description according to the category and the attribute of the region of interest, and adding the first textual description to the image report to be confirmed of the first object;
preferably, the image report generation method further comprises: adjusting the image report to be confirmed of the first object according to a second textual description of the first object, so that the image report to be confirmed of the first object comprises the second textual description; wherein the second textual description of the first object is generated in response to a text editing operation of the first user.
5. The image report generation method according to claim 2,
the image report generation method further comprises: in response to a viewing operation of a second user, showing the confirmed image report and/or the visualized three-dimensional image of the first object to the second user;
preferably, the identification and classification of the region of interest are realized by an artificial intelligence model; and the image report generation method further comprises: updating parameters of the artificial intelligence model according to the confirmed image report, the three-dimensional volume data, the reconstruction geometric parameters, and the two-dimensional projection data of the first object, so as to improve the accuracy of the artificial intelligence model;
preferably, the artificial intelligence model comprises a convolutional neural network and a multi-label classification network connected in sequence, wherein the convolutional neural network is used for region-of-interest identification and the multi-label classification network is used for region-of-interest classification;
preferably, the local tangent plane reconstruction is performed by using an FDK filtered back-projection algorithm;
preferably, the region-of-interest information is recorded in the first report illustration in the confirmed image report and/or the image report to be confirmed; and/or the selected region information is recorded in the second report illustration in the confirmed image report and/or the image report to be confirmed;
preferably, the method further comprises: providing a multi-planar reconstruction (MPR) interface for the first user and/or the second user, to present the visualized three-dimensional image and/or multi-planar slice images corresponding to the visualized three-dimensional image.
6. An image report generation device, comprising:
an acquisition unit for acquiring three-dimensional volume data, reconstruction geometry parameters and two-dimensional projection data of a first object from a CBCT;
an identification and classification unit, configured to perform identification and classification of a region of interest according to the three-dimensional volume data, the reconstruction geometric parameters, and the two-dimensional projection data of the first object, to obtain region-of-interest information of the first object;
a local reconstruction unit, configured to perform local tangent plane reconstruction according to the region-of-interest information of the first object, the reconstruction geometric parameters, and the two-dimensional projection data, to obtain a first report illustration of the first object;
a report generating unit, configured to generate an image report to be confirmed of the first object, wherein the image report to be confirmed of the first object comprises the first report illustration;
an image report auditing unit, configured to provide the image report to be confirmed of the first object to a first user, so that the first user can audit the image report to be confirmed of the first object;
preferably, the image report auditing unit is further configured to, in response to a confirmation message generated by a confirmation operation of the first user on the image report to be confirmed, mark the image report to be confirmed of the first object as a confirmed image report of the first object, so as to show the confirmed image report of the first object to a second user;
preferably, the device further comprises: a rendering unit, configured to render a visualized three-dimensional image of the first object from the three-dimensional volume data of the first object; the image report auditing unit is further configured to provide the visualized three-dimensional image of the first object to the first user, so that the first user can audit the image report to be confirmed of the first object in combination with the visualized three-dimensional image of the first object;
preferably, the local reconstruction unit is further configured to perform local tangent plane reconstruction according to the selected region information of the first object, the reconstruction geometric parameters, and the two-dimensional projection data, to obtain a second report illustration of the first object; the report generating unit is further configured to adjust the image report to be confirmed of the first object according to the second report illustration, so that the image report to be confirmed of the first object comprises the second report illustration; wherein the selected region information of the first object is generated in response to an illustration modification operation and/or an illustration reconstruction operation of the first user;
preferably, the illustration modification operation and/or the illustration reconstruction operation are performed on a presentation interface of the visualized three-dimensional image of the first object;
preferably, the region-of-interest information includes a category and an attribute of the region of interest; the report generating unit is further configured to generate a first textual description according to the category and the attribute of the region of interest, and add the first textual description to the image report to be confirmed of the first object;
preferably, the report generating unit is further configured to adjust the image report to be confirmed of the first object according to a second textual description of the first object, so that the image report to be confirmed of the first object comprises the second textual description; wherein the second textual description of the first object is generated in response to a text editing operation of the first user;
preferably, the device further comprises: an image report distribution unit, configured to provide the confirmed image report and/or the visualized three-dimensional image of the first object to a second user in response to a viewing operation of the second user;
preferably, the identification and classification unit is specifically configured to identify and classify the region of interest through an artificial intelligence model; and the device further comprises a model training unit, configured to update parameters of the artificial intelligence model according to the confirmed image report, the three-dimensional volume data, the reconstruction geometric parameters, and the two-dimensional projection data of the first object, so as to improve the accuracy of the artificial intelligence model;
preferably, the artificial intelligence model comprises a convolutional neural network and a multi-label classification network connected in sequence, wherein the convolutional neural network is used for region-of-interest identification and the multi-label classification network is used for region-of-interest classification;
preferably, the local reconstruction unit is specifically configured to perform the local tangent plane reconstruction using an FDK filtered back-projection algorithm;
preferably, the report generating unit is further configured to record the region-of-interest information in the first report illustration in the image report to be confirmed; and/or to record the selected region information in the second report illustration in the image report to be confirmed; the image report auditing unit is further configured to record the region-of-interest information in the first report illustration in the confirmed image report; and/or to record the selected region information in the second report illustration in the confirmed image report;
preferably, the image report distribution unit is further configured to provide a multi-planar reconstruction (MPR) interface, wherein the MPR interface is configured to present the visualized three-dimensional image and/or multi-planar slice images corresponding to the visualized three-dimensional image to the first user and/or the second user.
7. An electronic device, comprising:
a memory storing execution instructions; and
a processor executing execution instructions stored by the memory to cause the processor to perform the image report generation method of any of claims 1 to 5.
8. A readable storage medium, wherein the readable storage medium stores executable instructions, and the executable instructions are executed by a processor to implement the image report generation method according to any one of claims 1 to 5.
9. An image report generation system, comprising: a CBCT and a cloud server; wherein the cloud server comprises a data storage unit, a report storage unit, and the image report generating device of claim 6;
the data storage unit is used for storing three-dimensional volume data, reconstruction geometric parameters and two-dimensional projection data of a first object from the CBCT;
the report storage unit is used for storing the image report to be confirmed, the confirmed image report, and/or the visualized three-dimensional image of the first object obtained by the image report generating device.
10. The image report generation system of claim 9,
the cloud server further comprises: a local reconstruction interface and a data rendering interface;
the identification and classification unit, the image report auditing unit, and the report storage unit respectively call the local reconstruction unit through the local reconstruction interface;
the image report auditing unit, the image report distribution unit, and/or the report storage unit respectively call the rendering unit through the data rendering interface;
preferably, the image report generation system further includes: a cloud service adapter to transmit three-dimensional volume data, reconstruction geometry parameters, and two-dimensional projection data of a first object from the CBCT to the cloud server.
CN202210495359.5A 2022-05-07 2022-05-07 Image report generation method, device, system, equipment and storage medium Pending CN114864035A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210495359.5A CN114864035A (en) 2022-05-07 2022-05-07 Image report generation method, device, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210495359.5A CN114864035A (en) 2022-05-07 2022-05-07 Image report generation method, device, system, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114864035A true CN114864035A (en) 2022-08-05

Family

ID=82635941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210495359.5A Pending CN114864035A (en) 2022-05-07 2022-05-07 Image report generation method, device, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114864035A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024037109A1 (en) * 2022-08-16 2024-02-22 珠海赛纳数字医疗技术有限公司 Display method and apparatus, and device and storage medium
WO2024141041A1 (en) * 2022-12-30 2024-07-04 上海时代天使医疗器械有限公司 Oral cavity model display method and apparatus, multi-planar reformation method and apparatus, device, and readable storage medium
CN116227238A (en) * 2023-05-08 2023-06-06 国网安徽省电力有限公司经济技术研究院 Operation monitoring management system of pumped storage power station
CN116703908A (en) * 2023-08-04 2023-09-05 有方(合肥)医疗科技有限公司 Imaging system testing method and device and imaging system
CN116703908B (en) * 2023-08-04 2023-10-24 有方(合肥)医疗科技有限公司 Imaging system testing method and device and imaging system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Building A1, National Health Big Data Industrial Park, at the intersection of Xiyou Road and Kongquetai Road, High-tech Zone, Hefei City, Anhui Province, 230088
Applicant after: HEFEI YOFO MEDICAL TECHNOLOGY Co.,Ltd.
Address before: 238000 Zhongke advanced manufacturing innovation industrial park, Anhui Juchao Economic Development Zone, No.2 Qilu Road, Chaohu City, Hefei City, Anhui Province
Applicant before: HEFEI YOFO MEDICAL TECHNOLOGY Co.,Ltd.