CN111383328A - 3D visualization method and system for breast cancer focus - Google Patents


Info

Publication number
CN111383328A
CN111383328A
Authority
CN
China
Prior art keywords
breast
breast cancer
information
file
focus
Prior art date
Legal status
Granted
Application number
CN202010125871.1A
Other languages
Chinese (zh)
Other versions
CN111383328B (en)
Inventor
钱步月
李安
胡师尧
刘璇
魏煜华
韩昊辰
Current Assignee
Xian Jiaotong University
Original Assignee
Xian Jiaotong University
Priority date
Filing date
Publication date
Application filed by Xian Jiaotong University
Priority to CN202010125871.1A
Publication of CN111383328A
Application granted
Publication of CN111383328B
Active legal status
Anticipated expiration


Classifications

    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G16H15/00: ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • G16H50/20: ICT specially adapted for computer-aided medical diagnosis, e.g. based on medical expert systems
    • G06T2219/2016: Indexing scheme for editing of 3D models; rotation, translation, scaling


Abstract

The invention discloses a 3D visualization method and system for breast cancer lesions, comprising the following steps: uploading the acquired breast cancer medical image files under the guidance and prompts of a file uploader; analyzing the uploaded breast cancer medical image file with a convolutional neural network to obtain breast contour information points and breast lesion-region information points; drawing a 3D visual view of the breast contour and 3D medical imaging of the breast cancer lesion region with the three.js plug-in, applying a smoothing treatment to the data points and edge information points of the lesion region; and adding auxiliary analysis controls for further analysis of the 3D imaging. The invention can quickly and efficiently visualize a patient's breast medical images and accurately locate and analyze the lesion region.

Description

3D visualization method and system for breast cancer lesions
Technical Field
The invention belongs to the technical field of breast cancer lesion visualization, and particularly relates to a 3D visualization method and system for breast cancer lesions.
Background
Breast cancer is the most common cancer among Chinese women and the sixth leading cause of cancer death; new breast cancer cases and deaths in China each year account for 12.2% and 9.6% of the worldwide totals, respectively. Since the 1990s, the incidence of breast cancer in China has grown at more than twice the global rate, with urban areas particularly affected.
At present, conventional imaging equipment has the following main problems in breast cancer detection imaging: overlap between breast glandular tissue and diseased tissue makes localization of the breast cancer lesion region inaccurate; the morphology and edges of the lesion region, and its distinction from the surrounding tissue structure, appear blurred; and there is no mature system for online management of breast cancer lesion imaging in patients.
In summary, a new 3D visualization method and system for breast cancer lesions are needed.
Disclosure of Invention
The invention aims to provide a 3D visualization method and system for breast cancer lesions, so as to solve one or more of the technical problems of conventional breast cancer lesion imaging: unclear lesion position, blurred morphological edges, and difficult imaging management. The invention can quickly and efficiently visualize a patient's breast medical images and accurately locate and analyze the lesion region.
In order to achieve the purpose, the invention adopts the following technical scheme:
the invention discloses a 3D visualization method for breast cancer focuses, which comprises the following steps:
step 1, uploading collected breast cancer medical image files according to guidance and prompt of a file uploader;
step 2, training the convolutional neural network, including: (1) taking a breast cancer medical image file as input, and taking breast tissue information as output information; the output information is breast contour information and information of a focus area in the breast, and the information output form is a relative coordinate value and a weight value of a structural tissue dense point in the breast; (2) training a pre-constructed convolutional neural network by using a breast cancer medical image file with a preset amount of labels, and updating parameters in the convolutional neural network in an iterative training process; stopping training when the loss function is minimum and the accuracy is balanced, and quantifying the parameter value to obtain a trained convolutional neural network; (3) testing the trained convolutional neural network by using a breast cancer mapping file with a preset amount of marks, and preventing the convolutional neural network from being over-fitted and under-learned to obtain a trained convolutional neural network;
step 3, inputting the breast cancer medical image file in the step 1 into the convolutional neural network trained in the step 2 to obtain breast contour information and information of a focus area in the breast; wherein the breast contour information includes dense coordinate points about a breast contour, and the information of the lesion area in the breast includes a relative position coordinate point of the lesion under the breast contour and a coordinate point of lesion edge information;
and 4, constructing 3D visualization breast area imaging through a three.js plug-in according to the breast contour information obtained in the step 3 and information of the focus area in the breast.
In a further refinement of the invention, in step 1, designing the file uploader includes:
S101, dividing uploads into left-breast and right-breast image categories and defining the 3D visualization target;
S102, adding secondary upload conditions and rules, including: restricting the type and size of uploaded files; providing two file-selection modes, local directory browsing and local drag-and-drop; and providing two upload modes, single-file upload and batch upload of multiple selected files;
S103, adding a data import queue to make the upload state and result explicit; the file import queue records the category, name, size, and state of each imported file, with the state represented by an icon;
S104, adding a file queue manager to handle the file upload state, including operations on files, display of the file upload state, and display of upload progress.
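The upload rules in S101-S104 can be sketched as a small validator; the allowed file types, the size limit, and the queue-entry fields below are illustrative assumptions, not values taken from the patent.

```python
import os

# Hypothetical upload rules for S101-S104; the accepted extensions and
# size limit are assumptions, not specified in the patent.
ALLOWED_TYPES = {".dcm", ".png", ".jpg"}
MAX_SIZE_BYTES = 50 * 1024 * 1024  # 50 MB

def validate_upload(name, size, category):
    """Return the import-queue entry for one file, or raise on a rule
    violation (category must be 'left' or 'right' per S101)."""
    if category not in ("left", "right"):
        raise ValueError("category must be 'left' or 'right'")
    ext = os.path.splitext(name)[1].lower()
    if ext not in ALLOWED_TYPES:
        raise ValueError("file type %s not allowed" % ext)
    if size > MAX_SIZE_BYTES:
        raise ValueError("file exceeds upload size limit")
    # The queue entry records category, name, size, and state (S103).
    return {"category": category, "name": name, "size": size,
            "state": "pending"}
```

A file queue manager (S104) would then update each entry's `state` field as the upload progresses.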
In a further refinement of the invention, in step 2, the convolutional neural network comprises multiple layers, each consisting of several two-dimensional planes and each plane of several independent neurons; the overall network comprises an input layer, feature extraction layers, feature mapping layers, and a feedforward neural network layer. The input layer takes a breast cancer medical image file and, by extracting the image information in the file, converts it into a three-dimensional matrix of uniform size;
among the feature extraction layers, the first takes the uniformly sized three-dimensional matrix from the input layer, while the remaining feature extraction layers take the output of the preceding feature mapping layer; each feature extraction layer outputs a three-dimensional feature map whose dimensions are determined by that layer's convolution; each feature extraction layer consists of several neurons, each connected to a receptive field of the previous layer, the receptive field being the inner product of a configurable filter matrix and the input three-dimensional matrix; through repeated convolution operations each neuron extracts local features of the three-dimensional matrix;
the feature mapping layers take the feature maps output by the feature extraction layers; the last feature mapping layer outputs a one-dimensional feature vector, while the remaining feature mapping layers output new feature maps produced as follows: each group of four pixels in the feature map is summed, weighted, and biased, and passed through a Sigmoid function to produce the new feature map;
the feedforward neural network layer takes the one-dimensional feature vector output by the last feature mapping layer and outputs the breast contour information and the lesion-region information for the breast cancer medical image.
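The feature mapping operation described above (summing each group of four pixels, weighting, biasing, and applying a Sigmoid) can be sketched in NumPy; the function name and the scalar weight and bias here are illustrative, and the sketch assumes non-overlapping 2x2 groups on an even-sized map.

```python
import numpy as np

def feature_mapping_layer(fmap, weight, bias):
    """Sum each non-overlapping 2x2 group of pixels, apply a weight and
    bias, then a Sigmoid, as described for the feature mapping layer.
    Assumes fmap has even height and width."""
    h, w = fmap.shape
    # Sum the four pixels of each 2x2 group.
    pooled = (fmap[0:h:2, 0:w:2] + fmap[1:h:2, 0:w:2]
              + fmap[0:h:2, 1:w:2] + fmap[1:h:2, 1:w:2])
    # Weight, bias, and Sigmoid activation produce the new feature map.
    return 1.0 / (1.0 + np.exp(-(weight * pooled + bias)))
```

Each application halves the spatial resolution, so a 4x4 map becomes a 2x2 map.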
In a further refinement of the invention, step 3 specifically comprises:
S301, inputting the breast cancer medical image file into the convolutional neural network;
S302, the convolutional neural network analyzes the breast cancer medical image file, determining the outer contour information of the breast and the position, size, and edge information of the lesion, and filling in missing lesion information;
S303, the convolutional neural network outputs the breast contour information and the lesion-region information, specifically: the dense breast contour coordinate points, the relative-position coordinate points of the lesion under the breast contour, and the lesion edge information coordinate points.
In a further refinement of the invention, step 4 specifically comprises:
S401, computing the center point of the maximum outer contour of the breast from the dense breast contour coordinate points, and constructing a spatial coordinate system with this center point as the origin;
S402, drawing a layered 3D view of the breast outer contour from the dense contour coordinate points with the three.js plug-in;
S403, drawing a 3D view of the lesion under the breast outer contour from the relative-position coordinate points of the lesion with the three.js plug-in, and smoothing the lesion edge using the lesion edge information coordinate points.
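The centring step in S401 can be sketched as follows; the patent does not specify how the center of the maximum outer contour is computed, so the bounding-box midpoint used here is an assumption.

```python
import numpy as np

def breast_coordinate_frame(contour_points):
    """Place the coordinate origin at the center of the breast's maximum
    outer contour and express every contour point relative to it.
    A minimal sketch: the center is taken as the midpoint of the
    axis-aligned bounding box of the dense contour points."""
    pts = np.asarray(contour_points, dtype=float)  # shape (n, 3)
    center = (pts.min(axis=0) + pts.max(axis=0)) / 2.0
    # Translate so that the center becomes the origin of the frame.
    return center, pts - center
```

The translated points can then be handed to the three.js scene, whose world origin coincides with the computed center.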
In a further refinement, the method also comprises the following step:
Step 5: adding auxiliary analysis controls for assisting analysis of the 3D visualized breast-region imaging, including:
S501, adding iso and mip display modes for the 3D visualized breast-region imaging;
S502, adding a rotation function for multi-angle viewing and analysis of the imaging;
S503, adding a zoom function for viewing and analyzing the imaging at varying distances;
S504, adding a filtering control for filtering data points of the lesion region in the imaging;
S505, adding a button for displaying the outer contour of the breast.
In a further refinement, step 4 specifically comprises: parsing the breast contour information and lesion-region information output by the convolutional neural network for the breast cancer medical image file, where dense points with weight 0 are breast contour coordinate points and the coordinate weights of the other dense tissue points are greater than 0 and at most 1, normal tissue points weighing 0.01 to 0.8 and lesion tissue points 0.81 to 1;
extracting the dense breast contour coordinate points, computing the center point of the maximum outer contour of the breast, and constructing a spatial coordinate system with that center point as the origin;
extracting the dense breast contour coordinate points and drawing a 3D view of the breast contour with the three.js plug-in from the coordinates and weights of the points, rendering all breast contour coordinate points in grey at a reference point size of 2;
extracting the lesion-region information and drawing a 3D view of the breast cancer lesion with the three.js plug-in from the coordinates and weights of the points, rendering normal tissue points in a dark color band and lesion tissue points in a bright color band, with all lesion-region points at point size 2;
and taking lesion tissue points with weights of 0.81 to 0.85 as lesion boundary points and applying a smoothing effect to them.
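The weight ranges above map each dense point to a rendering class, which might be sketched as below; the function name and class labels are illustrative, only the numeric thresholds come from the patent.

```python
def classify_point(weight):
    """Classify a dense tissue point by the weight ranges given in the
    patent: 0 = breast contour, 0.01-0.8 = normal tissue,
    0.81-1 = lesion tissue, with 0.81-0.85 treated as the lesion
    boundary that receives the smoothing effect."""
    if weight == 0:
        return "contour"           # grey, reference point size 2
    if weight <= 0.8:
        return "normal"            # dark color band
    if weight <= 0.85:
        return "lesion_boundary"   # smoothed when rendered
    return "lesion"                # bright color band
```

A renderer would use the returned class to pick the point's color band and whether to apply edge smoothing.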
In a further refinement, the method also comprises the following step:
Step 6: generating a breast cancer diagnosis report, including:
S601, capturing a 2D view of the breast cancer lesion from the 3D visualized breast-region imaging, uploading the breast cancer medical image file, and uploading a user-defined breast cancer medical image picture;
S602, extracting the header data from the uploaded breast cancer medical image file and querying the database to match the basic information of the patient and the image, including: CTLM instrument model, patient examination time, patient name, patient age, patient sex, examination site, patient ID, and image picture ID number, with the current time serving as the report generation time;
S603, filling in the diagnostic opinion based on the 3D visualized breast-region imaging and the user-defined breast cancer medical image picture;
S604, filling the data into a default diagnosis report template as a standard data structure and generating the patient's breast cancer diagnosis report.
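Steps S602-S604 amount to filling a standard data structure from the image header; the field names in this sketch are illustrative assumptions, since the patent does not define the exact schema.

```python
from datetime import datetime

def build_report(header, diagnosis):
    """Assemble the standard report data structure described in
    S602-S604 from matched header data and the filled-in diagnostic
    opinion. Field names are hypothetical."""
    return {
        "instrument_model": header.get("instrument_model"),
        "exam_time": header.get("exam_time"),
        "patient_name": header.get("patient_name"),
        "patient_age": header.get("patient_age"),
        "patient_sex": header.get("patient_sex"),
        "exam_site": header.get("exam_site"),
        "patient_id": header.get("patient_id"),
        "image_id": header.get("image_id"),
        "diagnosis": diagnosis,
        # Per S602, the current time serves as the report generation time.
        "generated_at": datetime.now().isoformat(),
    }
```

The resulting dictionary would then be rendered into the hospital's default diagnosis report template.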
The invention also discloses a 3D visualization system for breast cancer lesions, comprising:
a file upload module for uploading the collected breast cancer medical image files under the guidance and prompts of the file uploader;
a breast cancer lesion 3D visualization module for obtaining breast contour information points and lesion-region information points from the uploaded breast cancer medical image files by convolutional neural network analysis; training the convolutional neural network comprises: (1) taking a breast cancer medical image file as input and breast tissue information as output, the output comprising breast contour information and lesion-region information expressed as relative coordinate values and weight values of densely sampled structural tissue points in the breast; (2) training a pre-constructed convolutional neural network with a preset number of labeled breast cancer medical image files, updating the network parameters during iterative training, stopping when the loss function reaches its minimum and the accuracy stabilizes, and quantizing the parameter values; (3) testing the trained network with a preset number of labeled breast cancer image files to guard against over-fitting and under-learning;
the module inputs the breast cancer medical image file into the trained convolutional neural network to obtain the breast contour information and the lesion-region information, where the breast contour information comprises dense coordinate points of the breast contour and the lesion-region information comprises relative-position coordinate points of the lesion under the breast contour and coordinate points of the lesion edge information;
and constructs 3D visualized breast-region imaging with the three.js plug-in from the obtained breast contour information and lesion-region information.
In a further refinement, the system also comprises:
a diagnosis report generation module for generating the patient's breast cancer diagnosis report from the patient's basic information and the breast cancer 3D imaging result, using the hospital's standard diagnosis report as the template;
a diagnosis report management module for managing breast cancer diagnosis reports, presenting all diagnosis reports in a list and providing interfaces to view, download, and delete them.
Compared with the prior art, the invention has the following beneficial effects:
The method first lets the user precisely target the breast medical image to be analyzed through the guidance of the file uploader, avoiding the heavy iterative selection that arises when the user's query target is unclear. The trained convolutional neural network then analyzes the breast cancer medical image and outputs the breast contour information and lesion-region information for the file. These coordinate points are visualized into a 3D view of the breast contour and a 3D view of the lesion region, resolving the problem of unclear breast cancer lesion position, while the lesion edge information is smoothed to address the difficulty of visually understanding a blurred lesion region. The invention can quickly and efficiently visualize a patient's breast medical images and accurately locate and analyze the lesion region.
In the convolutional neural network, the feature mapping structure adopts a Sigmoid function with a small influence-function kernel as the activation function, giving the feature maps displacement invariance. The feedforward neural network consists of several layers of neurons with full connections between adjacent layers; through repeated training it learns implicit parameters that stably recognize the input one-dimensional feature vectors, reconstructs the information points in the breast cancer medical image, and fills in missing points.
The invention provides rich interactive controls for analyzing the breast cancer 3D imaging result, letting the user examine the imaging from multiple directions and angles and resolving incomplete viewing and analysis of lesion-region information caused by distance and angle limitations.
The system tightly couples the breast cancer lesion 3D visualization module, the diagnosis report generation module, and the diagnosis report management module into a vertical pipeline from breast medical image to diagnosis report, so the user can independently select the breast medical image to analyze, visually inspect the 3D visualization of the selected image, analyze the position and edge information of the breast cancer lesion from multiple angles, generate the patient's breast cancer diagnosis report, and manage it in multiple modes, helping the user understand the patient's breast cancer condition.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below; it is obvious that the drawings in the following description are some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic block flow diagram of a 3D visualization method for breast cancer lesions according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a convolutional neural network structure in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the result of 3D breast cancer imaging in an embodiment of the present invention; wherein (a) in fig. 3 is a schematic diagram of left breast 3D breast imaging of a patient, and (b) in fig. 3 is a schematic diagram of right breast 3D breast imaging of a patient;
FIG. 4 is a schematic diagram of a 3D visualization system for breast cancer lesions, in accordance with an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a flowchart of a method for generating a breast cancer diagnosis report according to an embodiment of the present invention.
Detailed Description
In order to make the purpose, technical effect and technical solution of the embodiments of the present invention clearer, the following clearly and completely describes the technical solution of the embodiments of the present invention with reference to the drawings in the embodiments of the present invention; it is to be understood that the described embodiments are only some of the embodiments of the present invention. Other embodiments, which can be derived by one of ordinary skill in the art from the disclosed embodiments without inventive faculty, are intended to be within the scope of the invention.
The embodiment of the invention provides a 3D visualization method for breast cancer lesions, comprising the following steps:
S1, the user uploads the patient's breast cancer medical image file under the guidance and prompts of the file uploader;
S2, from the breast cancer medical image file obtained in step S1, the patient's breast contour information and the specific information of the lesion region in the patient's breast are obtained by convolutional neural network analysis;
S3, the patient's 3D visualized breast-region imaging is constructed from the breast contour information and lesion-region information obtained in S2;
S4, auxiliary analysis controls are added to further analyze the 3D imaging.
Preferably, the method for designing the file uploader in step S1 includes the steps of:
s101, dividing uploading categories into two categories to clearly define a 3D visualization target;
s102, adding secondary uploading conditions and rules;
s103, adding a data import queue to clarify an uploading state and an uploading result;
and S104, adding a file queue manager to process the file uploading state.
Preferably, in step S2, generating the data point information of the breast contour and the breast cancer lesion from the breast cancer medical image file obtained in step S1 specifically includes the following steps:
S201, using the obtained breast cancer medical image file as the input of the convolutional neural network;
S202, the convolutional neural network analyzes the breast cancer medical image file, determining the outer contour information of the breast and the position, size, and edge information of the lesion, and filling in missing lesion information;
S203, the convolutional neural network outputs the breast contour information and the lesion-region information, specifically the dense breast contour coordinate points, the relative-position coordinate points of the lesion under the breast contour, and the lesion edge information coordinate points.
In the embodiment of the invention, the convolutional neural network comprises multiple layers, each consisting of several two-dimensional planes and each plane of several independent neurons; the overall network comprises an input layer, feature extraction layers, feature mapping layers, and a feedforward neural network layer. The input layer takes a breast cancer medical image file and, by extracting the image information in the file, converts it into a three-dimensional matrix of uniform size. Among the feature extraction layers, the first takes this uniformly sized three-dimensional matrix, while the remaining ones take the output of the preceding feature mapping layer; each feature extraction layer outputs a three-dimensional feature map whose dimensions are determined by that layer's convolution. Each feature extraction layer consists of several neurons, each connected to a receptive field of the previous layer, the receptive field being the inner product of a configurable filter matrix and the input three-dimensional matrix. Each neuron extracts local features of the three-dimensional matrix through repeated convolution operations; because of the inner-product operation, once a local feature is extracted, its positional relation to the other features is also determined. The feature mapping layers take the feature maps output by the feature extraction layers; the last feature mapping layer outputs a one-dimensional feature vector, while the remaining ones output new feature maps generated by a series of operations.
The main operation is that the feature mapping layer sums each group of four pixels in the feature map, weights it, adds a bias, and passes the result through a Sigmoid function to obtain the new feature map. The feature mapping structure adopts a Sigmoid function with a small influence-function kernel as the activation function of the convolutional network, giving the feature maps displacement invariance. The feedforward neural network layer takes the one-dimensional feature vector output by the last feature mapping layer and outputs the breast contour information and the lesion-region information for the breast cancer medical image. The feedforward neural network consists of several layers of neurons with full connections between adjacent layers; through repeated training it learns implicit parameters that stably recognize the input one-dimensional feature vectors, reconstructs the information points in the breast cancer medical image, and fills in missing points.
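The feedforward portion (fully connected layers over the one-dimensional feature vector) can be sketched in NumPy; the layer count, the Sigmoid hidden activations, and the linear output layer are assumptions, since the patent only states that adjacent layers are fully connected.

```python
import numpy as np

def feedforward(features, weights, biases):
    """A minimal sketch of the fully connected feedforward layers that
    map the one-dimensional feature vector to the contour and
    lesion-region outputs. weights/biases are lists of per-layer
    parameters; hidden layers use a Sigmoid, the last layer is linear."""
    x = np.asarray(features, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        # Fully connected hidden layer with Sigmoid activation.
        x = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    # Final linear layer producing coordinate values and weights.
    return weights[-1] @ x + biases[-1]
```

In the patent's setting the output vector would encode the relative coordinate values and weight values of the dense tissue points.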
Preferably, in step S3, generating 3D visualized medical imaging according to the breast contour and the data point information of the breast cancer focus obtained in step S2 specifically includes the following steps:
s301, drawing a space coordinate system by taking the maximum outer contour central point of the breast as a coordinate origin;
s302, drawing a hierarchical breast outer contour 3D view according to the breast contour dense point coordinates through a three.js plug-in;
S303, drawing a 3D view of the focus under the breast outline through the three.js plug-in according to the relative position coordinate points of the focus, and smoothing the focus edge using the focus edge information coordinate points to further define the position of the focus.
Preferably, in step S4, adding an auxiliary analysis control to further analyze the breast cancer 3D imaging, specifically including the following steps:
s401, adding two 3D imaging display modes of iso and mip;
s402, adding a 360° rotation function for viewing and analyzing the 3D imaging from multiple angles;
s403, adding a zooming function for viewing and analyzing the 3D imaging at different distances;
s404, adding a filtering control to filter data points of a focus area;
s405, adding a button for displaying the outer contour of the breast.
According to the method, the file uploader first guides the user to accurately locate the breast medical image to be analyzed, which avoids the large number of repeated selections otherwise required when the user's query target is unclear. Second, the breast medical image is visualized through the breast cancer 3D imaging result, which addresses the difficulty of intuitively understanding the focus when its position is unclear and its edge information is fuzzy. Finally, rich interaction controls are provided for analyzing the breast cancer 3D imaging result, so that the user can inspect the 3D imaging from multiple directions and angles, avoiding the incomplete viewing and analysis of the focus area caused by distance and angle limitations. The invention can quickly and efficiently visualize the patient's breast medical image and accurately locate and analyze the focus area.
The embodiment of the invention provides a 3D visualization system for breast cancer focus, which comprises:
the file uploading module is used for uploading the acquired breast cancer medical image files according to the guidance and prompt of the file uploader;
the breast cancer focus 3D visualization module is used for obtaining breast outline information points and breast focus region information points through analysis by a convolutional neural network according to the breast cancer medical image file uploaded by the user, then drawing a 3D visualization view of the breast outline and 3D medical imaging of the breast cancer focus region through the three.js plug-in, mainly smoothing the data points and edge information points of the focus region, and adding auxiliary analysis controls for further analysis of the 3D imaging.
And the diagnosis report generation module is used for generating the patient's breast cancer diagnosis report according to the patient's basic information and the breast cancer 3D imaging result. The hospital's standard diagnosis report serves as the template, and the user can customize and regenerate the diagnosis report according to the actual situation. The module provides a visualization interface for the patient's breast cancer 3D imaging to assist the user in reviewing the breast cancer focus area, while allowing the diagnosis report to be modified autonomously.
And the diagnosis report management module is used for managing the breast cancer diagnosis report of each patient in a standard mode. The diagnostic reports for all patients are presented in a list format while providing an interface to view, download and delete the patient breast cancer diagnostic reports.
According to the system, the file uploading module, the breast cancer focus 3D visualization module, the breast cancer diagnosis report generation module and the breast cancer diagnosis report management module are tightly connected to form a vertical pipeline from breast medical image to diagnosis report. The user can independently select the breast medical image to be analyzed as required, visually inspect the 3D visualization view of the selected medical image, analyze the position and edge information of the breast cancer focus from multiple angles, generate the patient's breast cancer diagnosis report, and manage the diagnosis reports in multiple modes, helping the user understand the patient's breast cancer condition.
Referring to fig. 1, a 3D visualization method for breast cancer focus according to the present invention includes the following steps:
and S1, uploading the breast cancer medical image file.
The file uploading device guides and prompts a user to upload a breast cancer medical image file of a patient and accurately positions a breast medical image to be analyzed, and the method comprises the following specific steps:
s101, the uploading category is divided into a left breast image uploading category and a right breast image uploading category, and the breast cancer medical image category needing to be analyzed is determined.
And S102, adding secondary uploading conditions and rules, wherein the secondary uploading conditions and rules comprise setting the type and the size of an uploaded file, adding two file selection modes of local hierarchy selection and local dragging selection of the file, and adding two specific uploading modes of single uploading of the file and batch uploading of a plurality of selected files.
S103, adding a file import queue to clarify the file import state and import result, wherein the file import queue comprises the category of the imported file, such as "left breast image"; the imported file name, such as "Scanned Breast190102.1016.txt"; the size of the imported file, such as "0.291MB"; and the file import state, which comprises the three states "file importing", "file import succeeded" and "file import failed", each represented by an icon identifier.
S104, adding a file queue manager to process the file uploading state, wherein the file queue manager comprises file operations, such as "upload the imported file", "delete the imported file" and "cancel an upload in progress"; display of the file uploading state, wherein the three states "file uploading", "file upload succeeded" and "file upload failed" are represented by icon identifiers; and display of the upload progress, wherein a dynamic progress bar shows the percentage of the file uploaded, such as "80%".
S2, analyzing the breast cancer medical image file with the convolutional neural network. The breast cancer medical image file obtained in S1 is analyzed by the convolutional neural network to obtain the patient's breast contour information and the specific information of the focus area in the patient's breast. The specific steps are as follows:
s201, taking a breast cancer medical image file as input, taking breast tissue information as output, wherein the output information is breast contour information and information of a focus area in a breast, and the information output form is a relative coordinate value and a weight value of a structural tissue dense point in the breast;
referring to fig. 2, fig. 2 is a schematic structural diagram of a convolutional neural network according to an embodiment of the present invention, which is specifically explained as follows:
the structure of the convolutional neural network comprises a plurality of layers of neural networks, each layer of neural networks is composed of a plurality of two-dimensional spaces, each two-dimensional space is composed of a plurality of independent neurons, and the whole network structure mainly comprises an input layer, a feature extraction layer, a feature mapping layer and a feedforward neural network layer, and particularly comprises the following steps:
the input of the input layer is a breast cancer medical image file, and the breast cancer medical image file is converted into a three-dimensional matrix with uniform size by extracting image information in the file, wherein the size of the matrix is 32 x 3.
The input of the first feature extraction layer is the three-dimensional matrix of size 32 × 32 × 3; the input of each remaining feature extraction layer is the output of the preceding feature mapping layer. The output of each feature extraction layer is a three-dimensional feature map whose specific dimensions are determined by the convolution result of that layer. Each feature extraction layer consists of a plurality of neurons; the input of each neuron is connected to a receptive field of the previous layer, the receptive field being the inner product of a preset filter matrix and the input three-dimensional matrix, and the dimension of the filter matrix in this network is 3 × 3. Each neuron extracts local features of the three-dimensional matrix through repeated convolution operations; because of the inner-product operation, once a local feature is extracted, its positional relation to the other features is also fixed.
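The receptive-field computation described above — an inner product between a preset filter matrix and one patch of the input — can be sketched in plain JavaScript. This is a minimal single-channel 2D illustration, not the patent's implementation; the real network operates on the three-dimensional input matrix, and the filter values here are illustrative.

```javascript
// Valid 2D convolution: each output value is the inner product of the
// filter matrix with one receptive field (patch) of the input.
function convolve2d(input, filter) {
  const fh = filter.length, fw = filter[0].length;
  const oh = input.length - fh + 1, ow = input[0].length - fw + 1;
  const out = [];
  for (let i = 0; i < oh; i++) {
    const row = [];
    for (let j = 0; j < ow; j++) {
      let s = 0;
      // inner product of the filter with the receptive field at (i, j)
      for (let u = 0; u < fh; u++)
        for (let v = 0; v < fw; v++)
          s += filter[u][v] * input[i + u][j + v];
      row.push(s);
    }
    out.push(row);
  }
  return out;
}
```

Because the same filter slides over the whole input, the positions of extracted local features relative to each other are preserved, which is the property the text refers to.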
The input of a feature mapping layer is the feature map output by a feature extraction layer; the output of the last feature mapping layer is a one-dimensional feature vector, and the outputs of the other feature mapping layers are new feature maps generated after a series of operations. The main operation is that the feature mapping layer sums each group of four pixels in the feature map, weights the sum, adds a bias, and passes the result through a Sigmoid function to obtain a new feature map. In this convolutional neural network, a Sigmoid function with a small influence-function kernel is adopted as the activation function of the feature mapping structure, so that the feature maps are shift-invariant.
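As a minimal single-channel sketch of the feature-mapping operation just described (sum each 2 × 2 group of pixels, weight the sum, add a bias, apply a Sigmoid) — the weight and bias values below are illustrative placeholders, not the trained parameters:

```javascript
// Logistic sigmoid activation.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Sum each 2x2 pixel group, weight it, add a bias, and pass the
// result through a Sigmoid, halving each spatial dimension.
function featureMap(map, weight, bias) {
  const h = Math.floor(map.length / 2);
  const w = Math.floor(map[0].length / 2);
  const out = [];
  for (let i = 0; i < h; i++) {
    const row = [];
    for (let j = 0; j < w; j++) {
      // sum of the four pixels in the 2x2 group
      const s = map[2 * i][2 * j] + map[2 * i][2 * j + 1] +
                map[2 * i + 1][2 * j] + map[2 * i + 1][2 * j + 1];
      row.push(sigmoid(weight * s + bias));
    }
    out.push(row);
  }
  return out;
}
```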
The input of the feedforward neural network layer is the one-dimensional feature vector output by the last feature mapping layer, and its output is the breast contour information of the breast cancer medical image and the information of the focus area in the breast. The feedforward neural network consists of multiple layers of neurons, with full connections between adjacent layers. Through repeated training, implicit parameters that stably identify the input one-dimensional feature vector are obtained; the information points in the breast cancer medical image are reconstructed and its missing information points are filled in.
S202, pre-training the convolutional neural network with a large number of labeled breast cancer medical image files, updating the parameters of the convolutional neural network during iterative training, and stopping training and fixing the parameter values when the loss function is minimized and the accuracy stabilizes;
S203, testing the convolutional neural network with a small number of labeled breast cancer medical image files to guard against overfitting and underfitting.
S3, generating breast contour dense point information and breast cancer focus data point information. The breast-related tissue information output by the convolutional neural network in S2 on the basis of the breast cancer medical image file is analyzed: dense points with a weight of 0 are coordinate points of the breast contour, and the coordinate weights of the other tissue dense points lie between 0 and 1. Normal tissue points have weights in the range 0.01-0.8, where a larger weight indicates more glandular tissue at that position; focus tissue points have weights in the range 0.81-1, where a larger weight indicates a higher probability that the position belongs to the focus area.
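The weight convention above can be expressed as a small helper; the function name and the handling of weights outside the stated ranges are illustrative:

```javascript
// Classify a dense point by its weight, following the convention above:
// weight 0 -> breast contour point; 0.01-0.8 -> normal tissue (more
// glandular tissue as the weight grows); 0.81-1 -> focus tissue
// (higher weight means higher probability of being in the focus area).
function classifyPoint(weight) {
  if (weight === 0) return "contour";
  if (weight >= 0.01 && weight <= 0.8) return "normal";
  if (weight > 0.8 && weight <= 1) return "focus";
  return "unknown";
}
```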
And S4, drawing a 3D view of the breast contour and a 3D view of the breast cancer focus according to the breast contour coordinate point and the breast tissue coordinate point obtained in the S3. The method comprises the following specific steps:
S401, extracting the breast contour coordinate points from S3, calculating the central point of the maximum outer contour, and drawing a spatial coordinate system with this point as the coordinate origin.
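A minimal sketch of computing the coordinate origin from the contour points, here taken as the mean of the dense coordinate points; the {x, y, z} point format is an assumption for illustration:

```javascript
// Compute the central point of the maximum outer contour as the mean
// of the contour's dense coordinate points.
function contourCenter(points) {
  const n = points.length;
  const sum = points.reduce(
    (acc, p) => ({ x: acc.x + p.x, y: acc.y + p.y, z: acc.z + p.z }),
    { x: 0, y: 0, z: 0 }
  );
  return { x: sum.x / n, y: sum.y / n, z: sum.z / n };
}

// Translate all points so that the contour center becomes the origin
// of the spatial coordinate system.
function recenter(points, center) {
  return points.map(p => ({
    x: p.x - center.x, y: p.y - center.y, z: p.z - center.z
  }));
}
```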
S402, extracting the breast contour coordinate points from S3 and drawing a 3D view of the breast contour from the coordinates and weights of these points using the three.js plug-in; all breast contour coordinate points are rendered in gray-white with a point size of 2.
S403, extracting the breast tissue information points from S3 and drawing a 3D view of the breast cancer focus from the coordinates and weights of these points using the three.js plug-in; a dark color band is applied to normal tissue points (weight 0.01-0.8) and a bright color band to focus tissue points (weight 0.81-1), with all breast tissue information points rendered at size 2.
S404, applying an edge-smoothing effect to the boundary points of the focus tissue (weight 0.81-0.85).
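The color-band assignment in S403 might look as follows in plain JavaScript. The concrete RGB values are assumptions, since the text only requires a dark band for normal tissue and a bright band for focus tissue:

```javascript
// Map a tissue point's weight to an RGB color: normal tissue
// (weight <= 0.8) gets a dark band, focus tissue (weight > 0.8) a
// bright band. The concrete colors are illustrative only.
function weightToColor(weight) {
  if (weight <= 0.8) {
    // dark blue band, slightly brighter as the weight grows
    const v = Math.round(80 + 100 * (weight / 0.8));
    return { r: 0, g: 0, b: v };
  }
  // bright red-to-yellow band for focus tissue
  const t = (weight - 0.8) / 0.2;
  return { r: 255, g: Math.round(255 * t), b: 0 };
}
```

In a three.js scene, such per-point colors would typically be written into a color buffer attribute alongside the point positions.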
S5, adding an auxiliary analysis control for further analysis of breast cancer 3D imaging, specifically comprising the following steps:
S501, adding two 3D imaging modes, iso and mip. In the iso imaging mode, the 3D view is composed of solid coordinate points of size 2, and the main goal is to analyze the breast hierarchy; in the mip imaging mode, the 3D view is composed of virtual coordinate points of size 2, the whole image has a perspective effect, and the main goal is to analyze the relative position of the focus within the breast.
S502, adding a 360° rotation function for viewing and analyzing the 3D imaging from multiple angles. Clicking and dragging from any point in the 3D view rotates the breast 3D imaging around the origin toward the controlled direction.
S503, adding a zooming function for viewing and analyzing the 3D imaging at different distances. Sliding the mouse wheel inside the 3D view zooms the breast 3D imaging in and out.
S504, adding a filtering control to filter the data points of the focus area. A slider bar with range 0.01-1 controls which part of the breast focus area is displayed in the 3D imaging; for example, when the slider is pulled to 0.5, the 0.01-0.5 segment of the bar turns gray to indicate the masked state, and the system filters out data points with weights of 0.01-0.5, displaying only coordinate points with weights of 0.5-1. Initially, coordinate points of all weights are drawn by default.
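The slider-driven filtering in S504 reduces to a threshold filter over the weighted points (a minimal sketch; the point format is an assumption):

```javascript
// Keep only the data points whose weight is at or above the slider
// threshold; points below it are masked from the 3D imaging.
function filterPoints(points, threshold) {
  return points.filter(p => p.weight >= threshold);
}
```

On each slider change, the visible point set would be recomputed from the full set, which matches the default of drawing all weights when the threshold is at its minimum.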
S505, adding a button for displaying the breast outer contour. The button is selected by default and the breast outer contour is drawn; when it is clicked again to deselect it, the system masks the breast outer contour and shows only the breast focus area.
Referring to fig. 3, fig. 3 is a schematic diagram of the 3D imaging result of breast cancer according to the present invention, which is specifically explained as follows:
Fig. 3 shows 3D breast images of the left and right breasts of the same patient. Overall, the patient's breast cancer 3D visualization view consists of three parts: the background, the spatial coordinate system and the breast cancer imaging. The background is filled with a black color scheme to highlight the breast imaging effect. The spatial coordinate system takes the central point of the maximum outer contour of the breast as the coordinate origin. The character labels of the three direction lines of the coordinate system change with the category of the uploaded breast image file: under the left breast category the three direction lines are labeled A, M, S, and under the right breast category they are labeled A, L, S; in both categories the initial directions of the three lines are outward, rightward and upward in turn. The color designation of the three direction lines is unchanged, with a first (optionally blue) line initially pointing outward, a second (optionally green) line initially pointing upward, and a third (optionally red) line initially pointing rightward. The breast cancer imaging consists of two parts, the breast outer contour view and the breast cancer focus view. The breast outer contour view displays the shape of the patient's breast as a hierarchical structure, and the gray-white coordinate points clarify the difference between the breast outer contour and the breast cancer focus. The breast cancer focus view emphasizes the relative position of the focus in the breast; the focus edge is smoothed to determine the size and shape of the focus, and coordinate points of different weights are marked with different color bands to distinguish the focus from healthy tissue.
Referring to fig. 4, fig. 4 is an interaction diagram of modules in a 3D visualization system for breast cancer focus according to the present invention, specifically:
the file uploading module is used for uploading the acquired breast cancer medical image files according to the guidance and prompt of the file uploader;
the breast cancer 3D visualization module is used for obtaining breast outline information points and breast focus region information points through analysis by a convolutional neural network according to the breast cancer medical image file uploaded by the user, drawing a 3D visualization view of the breast outline and 3D medical imaging of the breast cancer focus region through the three.js plug-in, mainly smoothing the data points and edge information points of the focus region, and adding auxiliary analysis controls for further analysis of the 3D imaging. The module is unidirectionally bound to the breast cancer diagnosis report generation module; specifically, the breast cancer 3D imaging data in this module flows one way to the breast cancer diagnosis report generation module, where 2D views of the breast cancer focus at each position are obtained by capturing the breast cancer 3D imaging view from multiple angles.
And the diagnosis report generation module is used for generating a customizable breast cancer diagnosis report of the patient according to the patient's basic information and the breast cancer 3D imaging result. The diagnosis report includes the patient name, sex, age, examination part, patient ID number, breast cancer medical image ID number, 3D imaging screenshots of the left and right breasts acquired by CTLM, custom-uploaded left and right breast image pictures, the doctor's diagnosis opinions and the generation time of the diagnosis report. The module provides a visualization interface for the patient's breast cancer 3D imaging to assist the user in reviewing the breast cancer focus area, while allowing the diagnosis report to be modified autonomously. The module is bidirectionally bound to the breast cancer diagnosis report management module; specifically, the main information of the diagnosis reports generated by this module is extracted and managed uniformly by the breast cancer diagnosis report management module, and the breast cancer diagnosis report management module provides an interface for viewing the complete breast cancer diagnosis report.
And the diagnosis report management module is used for managing each patient's breast cancer diagnosis report in a standard manner. The diagnosis reports of all patients in the system are displayed in list form, and the header of each list entry records the basic information of the breast cancer diagnosis report, including the patient number, the examination time and the number of the examined breast cancer medical image. An interface is provided for viewing, downloading and deleting a patient's breast cancer diagnosis report. Controls for searching and filtering breast cancer diagnosis reports are provided, specifically searching by patient number, by breast cancer medical image number and by examination time period. The module is bidirectionally bound to the breast cancer diagnosis report generation module, as described above.
Referring to fig. 5, fig. 5 is a schematic flow chart of a method for generating a breast cancer diagnosis report according to an embodiment of the present invention, which is mainly designed as follows:
S1, capturing multi-angle 2D views of the breast cancer focus from the breast cancer 3D view, and uploading original 2D image pictures of the breast cancer focus. The user analyzes and inspects the patient's breast cancer 3D view through the auxiliary controls and captures 2D views of the breast cancer at chosen angles, while locally selecting the patient's breast cancer medical image pictures for transmission to the system.
S2, the system obtains the basic information of the patient. The system extracts the file header data from the uploaded breast cancer medical image file of the patient and searches and matches the database to obtain the basic information of the patient and the image, including the CTLM instrument model, the examination time, the patient name, age, sex, examination part, patient ID and image picture ID number, with the current time taken as the report generation time.
S3, the doctor fills in the diagnosis opinions according to the patient's breast cancer views and the custom-uploaded breast cancer medical image pictures.
S4, the system generates the diagnosis report according to the structure of each data object. The system fills the data, organized in standard data structures, into the default diagnosis report template and generates the patient's breast cancer diagnosis report.
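The template-filling step can be sketched as follows; all field names and the report shape are assumptions for illustration, since the real system uses the hospital's standard diagnosis report template:

```javascript
// Fill patient and imaging information into a default diagnosis-report
// structure. Field names here are hypothetical placeholders.
function generateReport(patient, imaging, opinion) {
  return {
    patientName: patient.name,
    sex: patient.sex,
    age: patient.age,
    examinationPart: patient.examinationPart,
    patientId: patient.id,
    imageId: imaging.imageId,
    screenshots: imaging.screenshots, // multi-angle 2D views of the 3D imaging
    doctorOpinion: opinion,
    generatedAt: new Date().toISOString() // report generation time
  };
}
```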
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Although the present invention has been described in detail with reference to the above embodiments, those skilled in the art can make modifications and equivalents to the embodiments of the present invention without departing from the spirit and scope of the present invention, which is set forth in the claims of the present application.

Claims (10)

1. A 3D visualization method for breast cancer focus, characterized by comprising the following steps:
step 1, uploading collected breast cancer medical image files according to guidance and prompt of a file uploader;
step 2, training the convolutional neural network, comprising: (1) taking a breast cancer medical image file as input and breast tissue information as output information, the output information being breast contour information and information of a focus area in the breast, in the form of relative coordinate values and weight values of structural tissue dense points in the breast; (2) training a pre-constructed convolutional neural network with a preset number of labeled breast cancer medical image files, updating the parameters of the convolutional neural network during iterative training, stopping training when the loss function is minimized and the accuracy stabilizes, and fixing the parameter values to obtain a trained convolutional neural network; (3) testing the trained convolutional neural network with a preset number of labeled breast cancer medical image files to guard against overfitting and underfitting;
step 3, inputting the breast cancer medical image file in the step 1 into the convolutional neural network trained in the step 2 to obtain breast contour information and information of a focus area in the breast; wherein the breast contour information includes dense coordinate points about a breast contour, and the information of the lesion area in the breast includes a relative position coordinate point of the lesion under the breast contour and a coordinate point of lesion edge information;
and 4, constructing 3D visualization breast area imaging through a three.js plug-in according to the breast contour information obtained in the step 3 and information of the focus area in the breast.
2. The method for 3D visualization of breast cancer focus according to claim 1, wherein in step 1, the file uploader designing step comprises:
s101, dividing uploading categories into a left breast image uploading category and a right breast image uploading category, and defining a 3D visual target;
s102, adding secondary uploading conditions and rules, including: setting the type of an uploaded file and the size of the uploaded file; adding two file selection modes of file local hierarchy selection and local dragging selection; adding two uploading modes of single file uploading and batch uploading of a plurality of selected files;
s103, adding a file import queue to clarify the import state and import result, wherein the file import queue comprises the category, name, size and state of the imported file, the state of the imported file being represented by an icon identifier;
s104, adding a file queue manager to process a file uploading state, wherein the file uploading state comprises the following steps: the method comprises the steps of operating the file, displaying the file uploading state and displaying the uploading progress.
3. The method for 3D visualization facing breast cancer focus according to claim 1, wherein in step 2, the structure of the convolutional neural network comprises a multilayer neural network, each layer is composed of a plurality of two-dimensional spaces, each two-dimensional space is composed of a plurality of independent neurons, and the whole network structure comprises an input layer, a feature extraction layer, a feature mapping layer and a feedforward neural network layer; the input of the input layer is a breast cancer medical image file, and the breast cancer medical image file is converted into a three-dimensional matrix with uniform size by extracting image information in the file;
in the feature extraction layers, the input of the first feature extraction layer is a three-dimensional matrix with the same size as the input layer, and the input of the other feature extraction layers is the output of the feature mapping layer; the output of each feature extraction layer is a three-dimensional feature mapping chart, and the specific dimension is determined according to the convolution calculation result of each layer; each feature extraction layer is composed of a plurality of neurons, the input of each neuron is connected with the receptive field of the previous layer, and the receptive field is the result of inner product of a self-set filter matrix and an input three-dimensional matrix; extracting local features of the three-dimensional matrix by each neuron through multiple convolution operations;
the input of the feature mapping layer is a feature mapping graph output by the feature extraction layer, the output of the last feature mapping layer is a one-dimensional feature vector, and the outputs of the rest feature mapping layers are new feature mapping graphs generated after a series of operations; wherein the step of generating a new feature map comprises: the feature mapping layer sums the four pixels in each group in the feature mapping graph, weights the four pixels, biases the four pixels, and obtains a new feature mapping graph through a Sigmoid function;
the input of the feedforward neural network layer is the one-dimensional feature vector output by the last feature mapping layer, and the output is the breast contour information about the breast cancer medical image and the information of the focus area in the breast.
4. The method for 3D visualization of breast cancer lesions according to claim 3, wherein the step 3 specifically comprises:
s301, inputting the breast cancer medical image file into a convolutional neural network;
s302, analyzing the medical image file of the breast cancer by the convolutional neural network, determining the outer contour information of the breast, the position, the size and the edge information of a focus, and filling the missing information of the focus;
s303, outputting the breast contour information and the information of the focus area in the breast by the convolutional neural network; the output information specifically includes: the breast generates a dense breast contour coordinate point, a relative position coordinate point of the lesion under the breast contour and a lesion edge information coordinate point.
5. The method for 3D visualization of breast cancer lesions according to claim 4, wherein the step 4 specifically comprises:
S401, calculating the center point of the maximum outer contour of the breast from the dense breast-contour coordinate points, and constructing a spatial coordinate system with that center point as the coordinate origin;
S402, drawing a layered 3D view of the breast outer contour from the dense breast-contour coordinate points via the three.js plug-in;
S403, drawing a 3D view of the lesion beneath the breast outer contour from the relative position coordinate points of the lesion via the three.js plug-in, and smoothing the lesion edge using the lesion edge coordinate points.
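A minimal sketch of the S401 geometry step, under the assumption that the "maximum outer contour center point" is the centroid of the dense contour points (the patent does not spell out the formula, so this is one plausible reading; the function names are ours):

```javascript
// S401 sketch: take the centroid of the dense breast-contour points as
// the origin of the drawing coordinate system, then express every point
// relative to that origin before handing it to the renderer.
function centroid(points) {
  // points: array of [x, y, z] triples
  const n = points.length;
  const sum = points.reduce(
    (acc, p) => [acc[0] + p[0], acc[1] + p[1], acc[2] + p[2]],
    [0, 0, 0]
  );
  return sum.map((v) => v / n);
}

function toLocalFrame(points, origin) {
  // Translate each point so that `origin` becomes [0, 0, 0].
  return points.map((p) => [p[0] - origin[0], p[1] - origin[1], p[2] - origin[2]]);
}
```

The translated coordinates can then be loaded into a three.js BufferGeometry for the contour and lesion views described in S402/S403.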
6. The method for 3D visualization of breast cancer lesions according to claim 1, further comprising:
step 5, adding auxiliary analysis controls for assisting analysis of the 3D visualized breast-region imaging, comprising:
S501, adding ISO (isosurface) and MIP (maximum intensity projection) rendering modes for displaying the 3D visualized breast-region imaging;
S502, adding a rotation function for viewing and analyzing the 3D visualized breast-region imaging from multiple angles;
S503, adding a zoom function for viewing and analyzing the 3D visualized breast-region imaging at varying distances;
S504, adding a filtering control for filtering data points of the lesion area in the 3D visualized breast-region imaging;
and S505, adding a button for displaying the outer contour of the breast.
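The filtering control of S504 can be reduced to a weight-range predicate over the point cloud. The point shape `{x, y, z, w}` and the function name are assumptions for illustration; only the idea of filtering lesion-area data points comes from the claim:

```javascript
// S504 sketch: keep only points whose weight falls inside a user-chosen
// range, so the viewer can isolate (or hide) the lesion area before the
// remaining points are re-rendered.
function filterByWeight(points, minW, maxW) {
  return points.filter((p) => p.w >= minW && p.w <= maxW);
}
```

With the weight convention of claim 7, passing the range 0.81 to 1 would retain only lesion-tissue points.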
7. The method for 3D visualization of breast cancer lesions according to claim 1, wherein the step 4 specifically comprises: parsing the breast contour information and the information of the lesion area in the breast output by the convolutional neural network from the breast cancer medical image file; dense points with a weight of 0 are breast-contour coordinate points; the weights of all other tissue dense points are greater than 0 and less than or equal to 1, wherein normal tissue points have weights in the range 0.01-0.8 and lesion tissue points have weights in the range 0.81-1;
extracting the dense breast-contour coordinate points from the breast contour information, calculating the center point of the maximum outer contour of the breast, and constructing a spatial coordinate system with that center point as the coordinate origin;
extracting the dense breast-contour coordinate points and drawing a 3D view of the breast contour with the three.js plug-in based on the coordinates and weights of the points; wherein a gray point color and a reference point size of 2 are applied to all breast-contour coordinate points;
extracting the information of the lesion area in the breast and drawing a 3D view of the breast cancer lesion with the three.js plug-in based on the coordinates and weights of the points; wherein normal tissue points use a dark color band, lesion tissue points use a bright color band, and the reference point size of 2 is applied to all points of the lesion area in the breast;
and treating lesion tissue points with weights of 0.81-0.85 as lesion boundary points, and applying an edge-smoothing effect to those boundary points.
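The weight convention above maps directly onto a small classification function. The thresholds are taken from the claim; the function and label names are ours:

```javascript
// Claim 7 weight convention:
//   0          -> breast-contour point
//   0.01-0.8   -> normal tissue point (dark color band)
//   0.81-0.85  -> lesion boundary point (edge-smoothing applied)
//   0.86-1     -> interior lesion tissue point (bright color band)
function classifyPoint(weight) {
  if (weight === 0) return 'contour';
  if (weight <= 0.8) return 'normal';
  if (weight <= 0.85) return 'lesion-boundary';
  return 'lesion';
}
```

A renderer can switch color band and point treatment on the returned label instead of re-testing the raw weight at every draw site.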
8. The method for 3D visualization of breast cancer lesions according to claim 1, further comprising:
step 6, generating a breast cancer diagnosis report, comprising:
S601, capturing a 2D view of the breast cancer lesion from the 3D visualized breast-region imaging, uploading the breast cancer medical image file, and uploading a user-defined breast cancer medical image picture;
S602, extracting header data from the uploaded breast cancer medical image file and searching and matching against a database to obtain basic information of the patient and the image, including: CTLM instrument model, examination time, patient name, patient age, patient sex, examined part, patient ID, and image picture ID number, with the current time used as the report generation time;
S603, filling in diagnosis opinions based on the 3D visualized breast-region imaging and the user-defined breast cancer medical image picture;
and S604, populating a default diagnosis report template with the data in a standard data structure and generating the breast cancer diagnosis report of the patient.
9. A system for 3D visualization of breast cancer lesions, comprising:
a file uploading module for uploading the acquired breast cancer medical image file according to the guidance and prompts of the file uploader;
a breast cancer lesion 3D visualization module for obtaining breast contour information points and breast lesion area information points through convolutional neural network analysis of the uploaded breast cancer medical image file; wherein training of the convolutional neural network comprises: (1) taking breast cancer medical image files as input and breast tissue information as output information; the output information is breast contour information and information of the lesion area in the breast, output in the form of relative coordinate values and weight values of structural tissue dense points in the breast; (2) training a pre-constructed convolutional neural network with a preset number of labeled breast cancer medical image files, updating the parameters of the convolutional neural network during the iterative training process; stopping training when the loss function reaches its minimum and the accuracy stabilizes, and quantizing the parameter values to obtain a trained convolutional neural network; (3) testing the trained convolutional neural network with a preset number of labeled breast cancer medical image files to guard against over-fitting and under-learning, thereby obtaining the trained convolutional neural network;
the breast cancer lesion 3D visualization module being further used for inputting the breast cancer medical image file into the trained convolutional neural network to obtain the breast contour information and the information of the lesion area in the breast, wherein the breast contour information includes dense breast-contour coordinate points, and the information of the lesion area in the breast includes relative position coordinate points of the lesion beneath the breast contour and lesion edge coordinate points;
and for constructing the 3D visualized breast-region imaging via the three.js plug-in based on the obtained breast contour information and the information of the lesion area in the breast.
10. The system for 3D visualization of breast cancer lesions as claimed in claim 9, further comprising:
a breast cancer diagnosis report generation module for generating the patient's breast cancer diagnosis report from the patient's basic information and the breast cancer 3D imaging result, wherein the diagnosis report uses a hospital-standard diagnosis report as its template;
and a breast cancer diagnosis report management module for managing breast cancer diagnosis reports, comprising: presenting all diagnosis reports in a list format and providing an interface to view, download, and delete breast cancer diagnosis reports.
CN202010125871.1A 2020-02-27 2020-02-27 3D visualization method and system for breast cancer focus Active CN111383328B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010125871.1A CN111383328B (en) 2020-02-27 2020-02-27 3D visualization method and system for breast cancer focus

Publications (2)

Publication Number Publication Date
CN111383328A true CN111383328A (en) 2020-07-07
CN111383328B CN111383328B (en) 2022-05-20

Family

ID=71219764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010125871.1A Active CN111383328B (en) 2020-02-27 2020-02-27 3D visualization method and system for breast cancer focus

Country Status (1)

Country Link
CN (1) CN111383328B (en)

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143035A (en) * 2013-05-10 2014-11-12 上海联影医疗科技有限公司 Method for partitioning breast lesion
CN104376199A (en) * 2014-11-05 2015-02-25 宁波市科技园区明天医网科技有限公司 Method for intelligently generating breast report lesion schematic diagram
CN105193452A (en) * 2015-10-10 2015-12-30 北京长江源科技发展有限公司 Method and HIFU treatment device for monitoring body position movement based on ultrasonic images
CN106203432A (en) * 2016-07-14 2016-12-07 杭州健培科技有限公司 A kind of localization method of area-of-interest based on convolutional Neural net significance collection of illustrative plates
CN106339591A (en) * 2016-08-25 2017-01-18 汤平 Breast cancer prevention self-service health cloud service system based on deep convolutional neural network
CN106874700A (en) * 2017-04-01 2017-06-20 上海术理智能科技有限公司 Surgical simulation method, surgical simulation device and electronic equipment based on Web
CN108615237A (en) * 2018-05-08 2018-10-02 上海商汤智能科技有限公司 A kind of method for processing lung images and image processing equipment
CN108665456A (en) * 2018-05-15 2018-10-16 广州尚医网信息技术有限公司 The method and system that breast ultrasound focal area based on artificial intelligence marks in real time
CN109001211A (en) * 2018-06-08 2018-12-14 苏州赛克安信息技术有限公司 Welds seam for long distance pipeline detection system and method based on convolutional neural networks
CN109493328A (en) * 2018-08-31 2019-03-19 上海联影智能医疗科技有限公司 Medical image display method checks equipment and computer equipment
US20190087726A1 (en) * 2017-08-30 2019-03-21 The Board Of Regents Of The University Of Texas System Hypercomplex deep learning methods, architectures, and apparatus for multimodal small, medium, and large-scale data representation, analysis, and applications
CN109727243A (en) * 2018-12-29 2019-05-07 无锡祥生医疗科技股份有限公司 Breast ultrasound image recognition analysis method and system
CN109886307A (en) * 2019-01-24 2019-06-14 西安交通大学 A kind of image detecting method and system based on convolutional neural networks
US20190206056A1 (en) * 2017-12-29 2019-07-04 Leica Biosystems Imaging, Inc. Processing of histology images with a convolutional neural network to identify tumors
CN110232383A (en) * 2019-06-18 2019-09-13 湖南省华芯医疗器械有限公司 A kind of lesion image recognition methods and lesion image identifying system based on deep learning model
CN110599476A (en) * 2019-09-12 2019-12-20 腾讯科技(深圳)有限公司 Disease grading method, device, equipment and medium based on machine learning
CN110827294A (en) * 2019-10-31 2020-02-21 北京推想科技有限公司 Network model training method and device and focus area determination method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZHANG CHENGJIE et al., "Research on 3D lesion segmentation of breast MRI sequences based on spatial FCM and MRF methods", Chinese Journal of Biomedical Engineering *
LI YANG, "Research and implementation of a breast image retrieval method based on lesion location and content", China Excellent Master's Theses Full-text Database, Medicine and Health Sciences *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112164462A (en) * 2020-09-27 2021-01-01 华南理工大学 Breast cancer risk assessment method, system, medium and equipment
CN112164462B (en) * 2020-09-27 2022-05-24 华南理工大学 Breast cancer risk assessment method, system, medium and equipment
CN115005851A (en) * 2022-06-09 2022-09-06 上海市胸科医院 Nodule positioning method and device based on triangulation positioning and electronic equipment
CN117368476A (en) * 2023-09-25 2024-01-09 西安交通大学医学院第一附属医院 Application of detection reagent for seven metabolic markers in preparation of breast cancer diagnosis and prognosis products
CN117368476B (en) * 2023-09-25 2024-03-08 西安交通大学医学院第一附属医院 Application of detection reagent for seven metabolic markers in preparation of breast cancer diagnosis and prognosis products


Similar Documents

Publication Publication Date Title
US10818048B2 (en) Advanced medical image processing wizard
US10713856B2 (en) Medical imaging system based on HMDS
US7130457B2 (en) Systems and graphical user interface for analyzing body images
CN111383328B (en) 3D visualization method and system for breast cancer focus
US6901277B2 (en) Methods for generating a lung report
JP6616306B2 (en) Radiation therapy system with advanced graphical user interface
CN110517238B (en) AI three-dimensional reconstruction and human-computer interaction visualization network system for CT medical image
US20190051215A1 (en) Training and testing system for advanced image processing
US20030028401A1 (en) Customizable lung report generator
CN110050281A (en) Learn the annotation of the object in image
US20140341449A1 (en) Computer system and method for atlas-based consensual and consistent contouring of medical images
CN105167793A (en) Image display apparatus, display control apparatus and display control method
US10521908B2 (en) User interface for displaying simulated anatomical photographs
US20170262584A1 (en) Method for automatically generating representations of imaging data and interactive visual imaging reports (ivir)
US11037659B2 (en) Data-enriched electronic healthcare guidelines for analytics, visualization or clinical decision support
US20080132781A1 (en) Workflow of a service provider based CFD business model for the risk assessment of aneurysm and respective clinical interface
CN111223556A (en) Integrated medical image visualization and exploration
US20190188849A1 (en) Generating simulated photographic anatomical slices
US20190188850A1 (en) Medical image exam navigation using simulated anatomical photographs
US20230334663A1 (en) Development of medical imaging ai analysis algorithms leveraging image segmentation
CN106530386A (en) Volume rendering method and system for medical images
RU2814790C1 (en) Method for detecting oncological diseases in pelvic organs and system for implementing method
Zimeras et al. Interactive Tele‐Radiological Segmentation Systems for Treatment and Diagnosis
US10878025B1 (en) Field of view navigation tracking
FI126036B (en) Computer-aided medical imaging report

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant