CN114305505A - AI-assisted detection method and system for breast three-dimensional volume ultrasound - Google Patents

AI-assisted detection method and system for breast three-dimensional volume ultrasound

Info

Publication number
CN114305505A
CN114305505A (application CN202111623489.4A)
Authority
CN
China
Prior art keywords
information, focus, image, lesion, dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111623489.4A
Other languages
Chinese (zh)
Other versions
CN114305505B (en)
Inventor
张伟
谢卓衡
田旭东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Softprobe Medical Systems Inc
Original Assignee
Softprobe Medical Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Softprobe Medical Systems Inc filed Critical Softprobe Medical Systems Inc
Priority to CN202111623489.4A priority Critical patent/CN114305505B/en
Publication of CN114305505A publication Critical patent/CN114305505A/en
Application granted granted Critical
Publication of CN114305505B publication Critical patent/CN114305505B/en
Legal status: Active (granted)

Landscapes

  • Ultrasonic Diagnosis Equipment (AREA)

Abstract

The invention provides an AI-assisted detection method for breast three-dimensional volume ultrasound, which comprises: detecting a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image; displaying an AI marker on the lesion region; when an image annotation instruction is received, displaying an image annotation page and acquiring initial annotation information; and, when the AI marker is activated, retrieving feature information of the corresponding lesion region based on the category of the initial annotation information and displaying it in the lesion region, wherein the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information. The AI-assisted lesion detection provided by the invention greatly reduces the time physicians spend on subsequent lesion measurement and related operations, lets them concentrate on reading images and finding lesions, reduces missed diagnoses, and saves physicians' time and effort.

Description

AI-assisted detection method and system for breast three-dimensional volume ultrasound
Technical Field
The invention relates to the technical field of ultrasonic detection, and in particular to an AI-assisted detection method and system for breast three-dimensional volume ultrasound.
Background
Compared with conventional two-dimensional handheld ultrasound, existing three-dimensional breast ultrasound has the following advantages: it provides coronal-plane information, facilitates accurate localization and measurement of large and multiple masses, and yields more accurate volume measurements.
However, because three-dimensional ultrasound carries a huge amount of information, diagnosis requires a large amount of manual image reading, and once a lesion is found, cumbersome operations follow, such as manually measuring the long and short axes of the lesion, manually selecting lesion features, and manually entering report contents. Physicians must spend considerable time and effort on these manual operations and cannot concentrate on image reading, which easily causes fatigue and leads to missed diagnoses.
Meanwhile, physicians read images slowly, so the advantages of three-dimensional breast ultrasound cannot be fully exploited in large-scale physical examination screening or dual-cancer (breast and cervical cancer) screening scenarios.
Disclosure of Invention
To solve the above problems, the invention provides an AI-assisted detection method and system for breast three-dimensional volume ultrasound.
To achieve the above object, the invention is realized by the following technical solutions:
The invention provides an AI-assisted detection method for breast three-dimensional volume ultrasound, comprising the following steps:
detecting a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image;
displaying an AI marker on the lesion region of the three-dimensional scan image;
when an image annotation instruction is received, displaying an image annotation page and acquiring initial annotation information;
when the AI marker is activated, retrieving feature information of the corresponding lesion region based on the category of the initial annotation information, and displaying the feature information in the lesion region;
wherein the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information.
In some embodiments, after detecting the three-dimensional scan image with AI and acquiring the lesion region of the three-dimensional scan image, the method further comprises:
generating a cross-sectional image of the lesion region based on the magnitude of the lesion feature value and the corresponding lesion region, so that the user can select a lesion region to annotate.
In some embodiments, the method further comprises:
when auxiliary measurement on the cross-sectional image is triggered, acquiring a current detection frame of the lesion region;
after the current detection frame is acquired, displaying an image annotation page and acquiring current annotation information;
displaying the lesion feature value and the BI-RADS grading information in the lesion region, based on the current annotation information and on the lesion feature value and BI-RADS grading information of the lesion region within the current detection frame.
In some embodiments, the method further comprises:
after the current annotation information is acquired, displaying auxiliary measurement lines of the current detection frame;
correcting the auxiliary measurement lines according to an adjustment instruction for the auxiliary measurement lines, and performing auxiliary measurement after the correction is confirmed complete.
In some embodiments, detecting the three-dimensional scan image with AI and acquiring the lesion region of the three-dimensional scan image comprises:
detecting the three-dimensional scan image with AI and acquiring lesion feature values in the three-dimensional scan image to determine the lesion region.
In some embodiments, the method further comprises:
generating a corresponding AI-recommended BI-RADS value based on the BI-RADS grading information, for the user's reference.
In some embodiments, the method further comprises:
when a report generation instruction is received, generating an assisted detection report based on the annotation information, the assisted detection report comprising basic information, an ultrasound image, diagnosis information and recommendation information;
wherein the annotation information comprises the shape, orientation, margin, lesion boundary, echo, posterior echo, surrounding tissue and BI-RADS grading information of the marked position.
An AI-assisted detection system for breast three-dimensional volume ultrasound comprises:
an AI detection module, configured to detect a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image;
a display module, configured to display an AI marker on the lesion region of the three-dimensional scan image;
the display module is further configured to display an image annotation page and acquire initial annotation information when an image annotation instruction is received;
the display module is further configured to, when the AI marker is activated, retrieve feature information of the corresponding lesion region based on the category of the initial annotation information and display the feature information in the lesion region;
wherein the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information.
In some embodiments, the system further comprises a generation module configured to:
generate a cross-sectional image of the lesion region based on the magnitude of the lesion feature value and the corresponding lesion region, so that the user can select a lesion region to annotate.
In some embodiments, the system further comprises an auxiliary measurement module configured to:
acquire a current detection frame of the lesion region when auxiliary measurement on the cross-sectional image is triggered;
display an image annotation page and acquire current annotation information after the current detection frame is acquired;
display the lesion feature value and the BI-RADS grading information in the lesion region, based on the current annotation information and on the lesion feature value and BI-RADS grading information of the lesion region within the current detection frame.
The AI-assisted detection method and system for breast three-dimensional volume ultrasound provided by the invention have at least the following beneficial effects:
1) The invention provides fully automatic assisted interaction throughout the workflow, including display prompts for AI-assisted lesion detection results, automatic auxiliary measurement of physician-selected lesions, automatic calculation and classification of lesion features, automatic BI-RADS grading prompts, and automatic generation of an AI report template. This greatly reduces the time physicians spend on subsequent lesion measurement and related operations, lets them concentrate on reading images to find lesions, reduces missed diagnoses, and saves physicians' time and effort.
2) The invention also increases the speed of batch image reading, facilitates a workflow in which scanning and reading are separated, and makes it feasible to apply three-dimensional breast ultrasound equipment in large-scale physical examination screening or dual-cancer screening scenarios.
Drawings
The above features, advantages and implementations of the AI-assisted detection method and system for breast three-dimensional volume ultrasound are further described below through preferred embodiments with reference to the accompanying drawings.
FIG. 1 is a schematic diagram of an embodiment of the AI-assisted detection method for breast three-dimensional volume ultrasound according to the present invention;
FIG. 2 is a schematic diagram of selecting multiple cases in the present invention;
FIG. 3 is a schematic diagram of the AI marker display in a view of the present invention;
FIG. 4 is a schematic diagram of the ROI-area display of AI detection markers in a view of the present invention;
FIG. 5 is an interface diagram of image annotation in the present invention;
FIG. 6 is a schematic diagram of manual annotation in the present invention;
FIG. 7 is a schematic diagram of the section views in the present invention;
FIG. 8 is a schematic diagram of the auxiliary measurement display according to the present invention;
FIG. 9 is a schematic diagram of modifying an auxiliary measurement annotation in the present invention;
FIG. 10 is a schematic diagram of the measurement lines generated automatically after annotation with the auxiliary measurement function;
FIG. 11 is a schematic diagram of the report generation panel in the present invention;
FIG. 12 is a schematic diagram of AI template modification in the present invention;
FIG. 13 is a schematic diagram of a detection report according to the present invention;
FIG. 14 is a flow chart of the original annotation workflow in the prior art;
FIG. 15 is a flow chart of the AI-assisted detection method for breast three-dimensional volume ultrasound according to the present invention;
FIG. 16 is a schematic diagram of the right-click menu for performing/re-performing AI-assisted detection in the present invention;
FIG. 17 is a schematic diagram of the auxiliary measurement in the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity, the drawings only schematically show the parts relevant to the invention and do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, only one of several components with the same structure or function is schematically illustrated or labeled in some drawings. In this document, "one" does not mean "only one"; it can also mean "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
To illustrate the embodiments of the present invention and the technical solutions in the prior art more clearly, the following description refers to the accompanying drawings. Obviously, the drawings described below are only some examples of the invention; a person skilled in the art can derive other drawings and embodiments from them without inventive effort.
It should be noted that fig. 14 shows the original image reading and annotation workflow of automated breast volume ultrasound, which serves as the comparison scheme for the present application. Without AI lesion detection and without the auxiliary measurement functions, lesion features are selected manually, BI-RADS grades are selected manually, measurement lines are drawn manually, and the reading report is produced manually. Physicians therefore need a long reading time, and the interactive operations are highly cumbersome.
To address these problems, the AI-assisted detection workflow of the present automated breast volume ultrasound is shown in fig. 15. AI-assisted lesion detection improves the efficiency with which physicians find lesions while reading images; the auxiliary measurement function automatically calculates and fills in lesion features, automatically generates long- and short-axis measurement lines, and prompts the BI-RADS grade, reducing both the time physicians spend annotating lesions and the complexity of interaction. Finally, the AI report is generated automatically, saving the physicians' reporting time.
In an image reading test on 100 cases by 7 physicians, using the assisted detection reading workflow of the invention reduced the total time from image reading to report by more than 30% compared with fully manual operation, while lesion detection performance improved by 5.59% on average.
In one embodiment, as shown in fig. 1, the invention provides an AI-assisted detection method for breast three-dimensional volume ultrasound, comprising:
S101, detecting a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image.
S102, displaying an AI marker on the lesion region of the three-dimensional scan image.
S103, when an image annotation instruction is received, displaying an image annotation page and acquiring initial annotation information.
S104, when the AI marker is activated, retrieving feature information of the corresponding lesion region based on the category of the initial annotation information, and displaying the feature information in the lesion region; the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information.
Specifically, a scanning workstation scans the patient to obtain a three-dimensional scan image, and AI-assisted detection proceeds as follows:
1. Import the scan data (the patient's scan images).
2. As shown in fig. 2, after selecting one or more cases in the patient list, click the "Execute AI" function button to perform or re-perform assisted detection. When assisted detection is re-performed, the previous detection results are updated.
In addition, in any view mode, right-clicking the coronal view pops up a menu bar from which assisted detection can be performed or re-performed on the current single view, as shown in fig. 16.
(1) If the current view is still executing the assisted detection function, the AI function is not allowed to run, and the menu bar popped up by right-clicking the view provides no entry for executing the AI function.
(2) If assisted detection has not been executed in the current view, or has finished executing, the AI function can be executed, and the menu bar popped up by right-clicking the view provides an entry for executing the AI function.
3. As shown in fig. 3, after AI-assisted detection a green AI marker is automatically added at each site where a lesion has been screened out. The marker has two display states: circle and solid dot.
(1) Circle:
AI markers that have not yet been viewed appear as green circles; the circle radius can be adjusted in the configuration interface in supervisor mode. The AI marker currently being viewed also appears as a circle.
(2) Solid dot:
Clicking an AI marker indicates that the marker has been viewed. When not activated, a viewed marker appears as a solid green dot.
(3) When an AI marker is clicked, the text display area at the lower-left corner of the coronal view shows the malignancy probability score corresponding to the currently activated marker.
An AI marker can be activated by selecting its position with the mouse.
4. After AI-assisted detection, a corresponding transverse-plane lesion screenshot is generated at each screened lesion position. In full view mode these are displayed in the ROI area below the interface view, as shown in fig. 4.
(1) The ROI thumbnails are arranged from high to low severity score. When there are too many thumbnails, a scroll bar is automatically generated below the ROI area and can be dragged to view the rest.
(2) The lower-left corner of each assisted-detection preview shows the abbreviation of the scan position; the specific correspondences are given in the scan position and abbreviation list of fig. 4.
The workflow so far is the steps above: import the data, click to execute AI, and display the suspected lesion positions detected by the AI. The AI-assisted detection algorithm performs Mask R-CNN lesion detection layer by layer over the whole DCM image to obtain suspected lesion positions, then feeds the transverse-plane ROI and coronal-plane ROI at each suspected position into a classification model for false-positive filtering, and finally keeps the few lesions with the highest malignancy values for display.
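As a rough illustration of this two-stage pipeline, the following Python sketch shows slice-wise detection followed by ROI-based false-positive filtering. The detector and classifier callables, the (x0, y0, x1, y1) box format, and the coronal ROI geometry are assumptions for illustration, not the patented implementation.

# Illustrative sketch of the two-stage detection pipeline described above;
# `detector`, `classifier`, the box format and the coronal ROI geometry
# are assumptions, not the patented implementation.
from typing import Callable, List

import numpy as np

def detect_lesions(volume: np.ndarray,
                   detector: Callable,
                   classifier: Callable,
                   keep_top: int = 5,
                   threshold: float = 0.5) -> List[dict]:
    """Slice-by-slice Mask R-CNN detection over a 3D volume (depth, H, W),
    then false-positive filtering with a classifier on paired ROIs."""
    candidates = []
    # 1. Layer-by-layer lesion detection on each transverse slice.
    for z, slice_2d in enumerate(volume):
        for box, score in detector(slice_2d):          # hypothetical detector API
            candidates.append({"slice": z, "box": box, "det_score": score})

    lesions = []
    # 2. Classify the transverse and coronal ROIs around each candidate
    #    to filter false positives.
    for c in candidates:
        x0, y0, x1, y1 = c["box"]                      # pixel indices, assumed ints
        trans_roi = volume[c["slice"], y0:y1, x0:x1]
        coron_roi = volume[:, (y0 + y1) // 2, x0:x1]   # assumed coronal cut
        malignancy = classifier(trans_roi, coron_roi)  # hypothetical classifier API
        if malignancy >= threshold:
            lesions.append({**c, "malignancy": malignancy})

    # 3. Keep only the candidates with the highest malignancy values.
    lesions.sort(key=lambda l: l["malignancy"], reverse=True)
    return lesions[:keep_top]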
5. Image annotation:
Clicking "Annotate" in the control panel allows suspicious lesions in the image to be annotated; the annotation interface is shown in fig. 5. In the annotation interface, select the shape, orientation, margin, lesion boundary, echo, posterior echo, surrounding tissue, BI-RADS grade and other information for the marked position.
A suspicious lesion here means a suspected lesion position indicated by the AI detection, or any other position the physician considers suspicious.
In addition, calcification, pathology, biopsy and special cases can be selected from the drop-down box. The user can add an evaluation of the marked site according to their own judgment and fill in related information in the remarks area.
After AI detection, the physician reviews and filters the AI detection results and marks the center point of the transverse slice on which the suspicious lesion has its largest cross-section.
A schematic is shown in fig. 6: yellow circles represent user-added, unselected lesions, and purple circles represent selected lesions. Sequence numbers (1, 2, 3, 4, ...) are displayed in annotation order.
An annotation can also be re-edited, shown/hidden, copied/pasted or deleted, and its BI-RADS annotation information displayed.
To re-edit: select an annotation, right-click the annotation point, and choose "Edit annotation" in the menu to modify the current annotation information.
Clicking the "Show marks" button in the toolbar on the right of the interface displays all manual annotation information in the image; clicking the "Hide marks" button hides it.
To copy: select an annotation, right-click it, and choose "Copy annotation" in the menu. Left-click the target position, then right-click and choose "Paste annotation" to paste the annotation there.
To delete: select a manual annotation, right-click the annotation point, and choose "Delete annotation" in the menu. Clicking a suspected lesion in the coronal image displays the corresponding BI-RADS grading information at the lower-left corner of the coronal view.
The auxiliary measurement function comprises automatic calculation of lesion features, automatic generation of lesion long- and short-axis measurement lines, and automatic BI-RADS grading of lesions. Compared with the original reading and annotation workflow, the AI-assisted detection workflow removes a number of manual steps, saving time and improving efficiency.
In this embodiment, the invention provides fully automatic assisted interaction throughout the workflow, such as display prompts for AI-assisted lesion detection results, automatic auxiliary measurement of physician-selected lesions, automatic calculation and classification of lesion features, automatic BI-RADS grading prompts, and automatic generation of an AI report template, thereby greatly reducing the time physicians spend on subsequent lesion measurement, letting them concentrate on reading images to find lesions, reducing missed diagnoses, and saving physicians' time and effort.
In some embodiments, after detecting the three-dimensional scan image with AI in step S101 and acquiring the lesion region of the three-dimensional scan image, the method further comprises:
generating a cross-sectional image of the lesion region based on the magnitude of the lesion feature value and the corresponding lesion region, so that the user can select a lesion region to annotate.
Specifically, the process comprises: import the data, click to execute AI, and display the suspected lesion positions detected by the AI. The algorithm performs Mask R-CNN lesion detection layer by layer over the whole DCM image to obtain suspected lesion positions, then feeds the transverse-plane ROI and coronal-plane ROI at each suspected position into a classification model for false-positive filtering, and finally keeps the few lesions with the highest malignancy values for display.
As shown in fig. 7, the section views comprise, from left to right, the sagittal plane, the coronal plane and the transverse plane.
In this embodiment, lesions with higher malignancy values are screened out preliminarily, which narrows the scope of later detailed examination and improves both efficiency and accuracy.
In some embodiments, the method further comprises:
when auxiliary measurement on the cross-sectional image is triggered, acquiring a current detection frame of the lesion region;
after the current detection frame is acquired, displaying an image annotation page and acquiring current annotation information;
displaying the lesion feature value and the BI-RADS grading information in the lesion region, based on the current annotation information and on the lesion feature value and BI-RADS grading information of the lesion region within the current detection frame.
The auxiliary measurement function comprises automatically calculating and filling in lesion features, automatically generating long- and short-axis measurement lines, prompting the BI-RADS grade, and so on. The operating procedure is as follows:
(1) Right-click in the transverse view and select auxiliary measurement.
(2) Drag the mouse over the view to draw a rectangular box so that the box just encloses the whole measurement target, as shown in figs. 8 and 17.
Drawing the rectangular box by dragging the mouse is a manual operation: the physician draws the box so that it includes, and preferably just encloses, the lesion region in the current transverse view ("the whole measurement target" means this lesion region). The box drives the background lesion segmentation, measurement-line generation, lesion feature calculation, lesion grading, and so on.
(3) After the mouse is released, the annotation interface pops up, as shown in fig. 9. The automatically calculated lesion feature values are displayed and filled in on the interface, where the user can modify the annotation information. An AI-recommended value for the current BI-RADS annotation is also provided; the user can consult it when selecting the BI-RADS annotation information and click the confirm button when finished.
The popped-up annotation interface is thus updated: the algorithm automatically judges the lesion features and fills them in for display.
(4) After the annotation is confirmed, the interface displays the auxiliary measurement lines (the lesion long and short axes on the transverse plane and on the coronal plane); the physician can correct a measurement line manually by dragging its two ends with the mouse, as shown in fig. 10.
In this embodiment, the physician can adjust the auxiliary measurement region by drawing the detection frame for precise display and annotation, and can additionally adjust the auxiliary measurement lines, making detection and annotation still more accurate.
In some embodiments, the method further comprises:
after the current annotation information is acquired, displaying the auxiliary measurement lines of the current detection frame;
correcting the auxiliary measurement lines according to an adjustment instruction for the auxiliary measurement lines, and performing auxiliary measurement after the correction is confirmed complete.
Specifically, the auxiliary measurement lines are generated from the segmentation result of the lesion region in the transverse-plane and coronal-plane ROI images, followed by PCA (principal component analysis) to extract the longest and shortest axes.
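A minimal sketch of this axis extraction, assuming a 2D binary segmentation mask as input, is shown below; the endpoint logic (extreme projections of lesion pixels onto each principal direction) is an illustrative assumption.

# Illustrative sketch of deriving long/short measurement axes from a binary
# lesion segmentation mask via PCA; the endpoint logic is an assumption.
import numpy as np

def measurement_axes(mask: np.ndarray):
    """Return ((long_p1, long_p2), (short_p1, short_p2)) for a 2D binary mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)       # (N, 2) pixel coordinates
    center = pts.mean(axis=0)
    # Principal components of the lesion pixel cloud.
    eigvals, eigvecs = np.linalg.eigh(np.cov((pts - center).T))
    short_dir, long_dir = eigvecs[:, 0], eigvecs[:, 1]   # eigenvalues ascending

    def endpoints(direction):
        # Extreme projections of lesion pixels onto the axis give the endpoints.
        proj = (pts - center) @ direction
        return pts[np.argmin(proj)], pts[np.argmax(proj)]

    return endpoints(long_dir), endpoints(short_dir)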
In this embodiment, the main purpose of generating the auxiliary measurement lines automatically is to spare the physician from manually dragging four measurement lines, improving reading efficiency.
When an endpoint of a generated measurement line clearly does not lie on the lesion boundary, or the physician does not accept the measurement result, the line can be corrected manually by dragging its two ends with the mouse.
In some embodiments, detecting the three-dimensional scan image with AI and acquiring the lesion region of the three-dimensional scan image comprises:
detecting the three-dimensional scan image with AI and acquiring lesion feature values in the three-dimensional scan image to determine the lesion region.
In some embodiments, the method further comprises:
generating a corresponding AI-recommended BI-RADS value based on the BI-RADS grading information, for the user's reference.
Specifically, as shown in fig. 9, the auxiliary measurement function automatically calculates and fills in the lesion feature values, and an AI-recommended BI-RADS grading prompt for the lesion is given.
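A plausible form of such a recommendation is a simple mapping from the classifier's malignancy probability to a BI-RADS category. The cut-offs below loosely follow the conventional BI-RADS likelihood-of-malignancy bands, but they are illustrative assumptions, not values taken from the patent.

# Hypothetical mapping from malignancy probability to a recommended BI-RADS
# category; cut-offs (2%, 10%, 50%, 95%) are assumptions for illustration.
def recommend_bi_rads(malignancy: float) -> str:
    if malignancy <= 0.02:
        return "BI-RADS 3"   # probably benign
    if malignancy <= 0.10:
        return "BI-RADS 4A"  # low suspicion
    if malignancy <= 0.50:
        return "BI-RADS 4B"  # moderate suspicion
    if malignancy < 0.95:
        return "BI-RADS 4C"  # high suspicion
    return "BI-RADS 5"       # highly suggestive of malignancy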
In some embodiments, the method further comprises:
when a report generation instruction is received, generating an assisted detection report based on the annotation information, the report comprising basic information, an ultrasound image, diagnosis information and recommendation information.
The annotation information comprises the shape, orientation, margin, lesion boundary, echo, posterior echo, surrounding tissue and BI-RADS grading information of the marked position.
Specifically, as shown in figs. 11 and 12, report generation proceeds as follows: clicking "Generate report" in the control panel automatically generates a report from the annotation information. The diagnostic report interface includes four parts: basic information, ultrasound images, the diagnosis information template, and the physician's recommendations. Clicking the AI template and selecting in the annotation list the lesions that should appear in the report automatically generates a structured AI report template, which the physician can then modify. Clicking "Generate report" produces a report in the style shown in fig. 13.
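A minimal sketch of assembling such a structured report from the annotation entries might look like the following; all field names and template wording are assumptions for illustration.

# Illustrative sketch of building a plain-text report from annotation entries;
# the field names and wording are assumptions, not the patented template.
from typing import Dict, List

def build_report(basic_info: Dict, lesions: List[Dict], advice: str) -> str:
    """Assemble a plain-text report from annotation entries."""
    lines = [f"Patient: {basic_info['name']}   ID: {basic_info['id']}",
             "Ultrasound findings:"]
    for i, l in enumerate(lesions, 1):
        lines.append(
            f"  {i}. {l['position']}: {l['shape']} mass, {l['orientation']} "
            f"orientation, {l['margin']} margin, {l['echo']} echo pattern; "
            f"{l['long_mm']} x {l['short_mm']} mm; {l['bi_rads']}."
        )
    lines.append(f"Impression and recommendations: {advice}")
    return "\n".join(lines)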
In this embodiment, the report is generated automatically from the annotation information, saving the physician's reporting time.
in one embodiment, an AI-assisted detection system for breast three-dimensional volumetric ultrasound, comprising:
and the AI detection module is used for detecting the three-dimensional scanning image through AI to obtain a focus area of the three-dimensional scanning image.
And the display module is used for displaying an AI mark on the lesion area of the three-dimensional scanning image.
The display module is further used for displaying the image annotation page and acquiring the initial annotation information when the image annotation instruction is received.
The display module is further configured to, when the AI marker is activated, call feature information of a corresponding lesion area based on the category of the initial labeling information, and display the feature information in the lesion area; the characteristic information comprises a focus characteristic value of the focus area and BI-RADS grading information.
In one embodiment, the method further comprises a generating module for:
and generating a cross-sectional image of the focus region based on the size of the focus characteristic value and the corresponding focus region, so that the user can select the focus region to be marked.
In one embodiment, the system further comprises an auxiliary measurement module for:
when the auxiliary measurement of the cross-section image is triggered, acquiring a current detection frame of the focus area;
after the current detection frame is obtained, displaying an image annotation page and obtaining current annotation information;
and displaying the focus characteristic value and the BI-RADS grading information in the focus area based on the current marking information, the focus characteristic value of the focus area in the current detection frame and the BI-RADS grading information.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of program modules is illustrated, and in practical applications, the above-described distribution of functions may be performed by different program modules, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the above-described functions. Each program module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one processing unit, and the integrated unit may be implemented in a form of hardware, or may be implemented in a form of software program unit. In addition, the specific names of the program modules are only used for distinguishing the program modules from one another, and are not used for limiting the protection scope of the application.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely exemplary, and the division of the modules or units is merely an example of a logical division, and there may be other divisions when the actual implementation is performed, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
It should be noted that the above embodiments can be freely combined as needed. The foregoing is only a preferred embodiment of the invention; for those skilled in the art, various modifications and improvements can be made without departing from the principle of the invention, and such modifications and improvements should also be regarded as falling within the protection scope of the invention.

Claims (10)

1. An AI-assisted detection method for breast three-dimensional volume ultrasound, characterized by comprising the following steps:
detecting a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image;
displaying an AI marker on the lesion region of the three-dimensional scan image;
when an image annotation instruction is received, displaying an image annotation page and acquiring initial annotation information;
when the AI marker is activated, retrieving feature information of the corresponding lesion region based on the category of the initial annotation information, and displaying the feature information in the lesion region;
wherein the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information.
2. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 1, characterized by further comprising, after detecting the three-dimensional scan image with AI and acquiring the lesion region of the three-dimensional scan image:
generating a cross-sectional image of the lesion region based on the magnitude of the lesion feature value and the corresponding lesion region, so that the user can select a lesion region to annotate.
3. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 2, characterized by further comprising:
when auxiliary measurement on the cross-sectional image is triggered, acquiring a current detection frame of the lesion region;
after the current detection frame is acquired, displaying an image annotation page and acquiring current annotation information;
displaying the lesion feature value and the BI-RADS grading information in the lesion region, based on the current annotation information and on the lesion feature value and BI-RADS grading information of the lesion region within the current detection frame.
4. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 3, characterized by further comprising:
after the current annotation information is acquired, displaying auxiliary measurement lines of the current detection frame;
correcting the auxiliary measurement lines according to an adjustment instruction for the auxiliary measurement lines, and performing auxiliary measurement after the correction is confirmed complete.
5. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 1, characterized in that detecting the three-dimensional scan image with AI to obtain the lesion region of the three-dimensional scan image comprises:
detecting the three-dimensional scan image with AI and acquiring lesion feature values in the three-dimensional scan image to determine the lesion region.
6. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 1, characterized by further comprising:
generating a corresponding AI-recommended BI-RADS value based on the BI-RADS grading information, for the user's reference.
7. The AI-assisted detection method for breast three-dimensional volume ultrasound according to claim 1, characterized by further comprising:
when a report generation instruction is received, generating an assisted detection report based on the annotation information, the assisted detection report comprising basic information, an ultrasound image, diagnosis information and recommendation information;
wherein the annotation information comprises the shape, orientation, margin, lesion boundary, echo, posterior echo, surrounding tissue and BI-RADS grading information of the marked position.
8. An AI-assisted detection system for breast three-dimensional volume ultrasound, characterized by comprising:
an AI detection module, configured to detect a three-dimensional scan image with AI to obtain a lesion region of the three-dimensional scan image;
a display module, configured to display an AI marker on the lesion region of the three-dimensional scan image;
the display module being further configured to display an image annotation page and acquire initial annotation information when an image annotation instruction is received;
the display module being further configured to, when the AI marker is activated, retrieve feature information of the corresponding lesion region based on the category of the initial annotation information, and display the feature information in the lesion region;
wherein the feature information comprises a lesion feature value of the lesion region and BI-RADS grading information.
9. The AI-assisted detection system for breast three-dimensional volume ultrasound according to claim 8, characterized by further comprising a generation module configured to:
generate a cross-sectional image of the lesion region based on the magnitude of the lesion feature value and the corresponding lesion region, so that the user can select a lesion region to annotate.
10. The AI-assisted detection system for breast three-dimensional volume ultrasound according to claim 8, characterized by further comprising an auxiliary measurement module configured to:
acquire a current detection frame of the lesion region when auxiliary measurement on the cross-sectional image is triggered;
display an image annotation page and acquire current annotation information after the current detection frame is acquired;
display the lesion feature value and the BI-RADS grading information in the lesion region, based on the current annotation information and on the lesion feature value and BI-RADS grading information of the lesion region within the current detection frame.
CN202111623489.4A 2021-12-28 2021-12-28 AI-assisted detection method and system for breast three-dimensional volume ultrasound Active CN114305505B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111623489.4A CN114305505B (en) 2021-12-28 2021-12-28 AI-assisted detection method and system for breast three-dimensional volume ultrasound

Publications (2)

Publication Number Publication Date
CN114305505A (en) 2022-04-12
CN114305505B CN114305505B (en) 2024-04-19

Family

ID=81015930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111623489.4A Active CN114305505B (en) 2021-12-28 2021-12-28 AI auxiliary detection method and system for breast three-dimensional volume ultrasound

Country Status (1)

Country Link
CN (1) CN114305505B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109978894A (en) * 2019-03-26 2019-07-05 成都迭迦科技有限公司 A kind of lesion region mask method and system based on three-dimensional mammary gland color ultrasound
CN110931095A (en) * 2018-09-19 2020-03-27 北京赛迈特锐医疗科技有限公司 System and method based on DICOM image annotation and structured report association
WO2020077962A1 (en) * 2018-10-16 2020-04-23 杭州依图医疗技术有限公司 Method and device for breast image recognition
CN111248941A (en) * 2018-11-30 2020-06-09 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic image display method, system and equipment
US20200297284A1 (en) * 2019-03-20 2020-09-24 Siemens Healthcare Limited Cardiac scar detection
CN111724356A (en) * 2020-06-04 2020-09-29 杭州健培科技有限公司 Image processing method and system for CT image pneumonia identification
AU2021102880A4 (en) * 2021-05-27 2021-10-07 Meka, James Stephen PROF A novel system for covid-19 prediction in chest radiography images using hybrid quantum mask r-cnn model
WO2021232320A1 (en) * 2020-05-20 2021-11-25 深圳迈瑞生物医疗电子股份有限公司 Ultrasound image processing method and system, and computer readable storage medium

Also Published As

Publication number Publication date
CN114305505B (en) 2024-04-19

Similar Documents

Publication Publication Date Title
JP6799146B2 (en) Digital pathology system and related workflows to provide visualized slide-wide image analysis
CN109741346B (en) Region-of-interest extraction method, device, equipment and storage medium
US8442280B2 (en) Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading
JP6506769B2 (en) System and method for generating and displaying tomosynthesis image slabs
CN109241967B (en) Thyroid ultrasound image automatic identification system based on deep neural network, computer equipment and storage medium
JP6091137B2 (en) Image processing apparatus, image processing system, image processing method, and program
CN110059697B (en) Automatic lung nodule segmentation method based on deep learning
US7945083B2 (en) Method for supporting diagnostic workflow from a medical imaging apparatus
CN102138827B (en) Image display device
US8634611B2 (en) Report generation support apparatus, report generation support system, and medical image referring apparatus
US6697506B1 (en) Mark-free computer-assisted diagnosis method and system for assisting diagnosis of abnormalities in digital medical images using diagnosis based image enhancement
KR20140093359A (en) User interaction based image segmentation apparatus and method
CN111430014B (en) Glandular medical image display method, glandular medical image interaction method and storage medium
CN101203170A (en) System and method of computer-aided detection
CN1378677A (en) Method and computer-implemented procedure for creating electronic multimedia reports
US20180293794A1 (en) System providing companion images
CN110580948A (en) Medical image display method and display equipment
CN111583385A (en) Personalized deformation method and system for deformable digital human anatomy model
CN116779093B (en) Method and device for generating medical image structured report and computer equipment
CN114305505A (en) AI auxiliary detection method and system for breast three-dimensional volume ultrasound
CN114388105A (en) Pathological section processing method and device, computer readable medium and electronic equipment
US20230098785A1 (en) Real-time ai for physical biopsy marker detection
JP2023530070A (en) Systems and methods for processing electronic images to generate tissue map visualizations
CN113704650A (en) Information display method, device, system, equipment and storage medium
CN112819925A (en) Method and device for processing focus labeling, electronic equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant