CN116563246A - Training sample generation method and device for medical image aided diagnosis - Google Patents

Training sample generation method and device for medical image aided diagnosis

Info

Publication number
CN116563246A
CN116563246A
Authority
CN
China
Prior art keywords
data
digital
human body
digital human
model
Prior art date
Legal status
Granted
Application number
CN202310528602.3A
Other languages
Chinese (zh)
Other versions
CN116563246B (en)
Inventor
余茜茜
乔波
杨坤
王忠新
栾俊达
任银垠
袁毅
Current Assignee
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202310528602.3A
Publication of CN116563246A
Application granted
Publication of CN116563246B
Legal status: Active (current)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 Computer-aided design [CAD]
    • G06F 30/20 Design optimisation, verification or simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/20 Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V 10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30096 Tumor; Lesion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10 Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Geometry (AREA)
  • Quality & Reliability (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Multimedia (AREA)
  • Radiology & Medical Imaging (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)
  • Architecture (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Databases & Information Systems (AREA)

Abstract

The specification discloses a training sample generation method and device for medical image-aided diagnosis. The method includes: constructing digital human body models of at least one physiological attribute based on different human body structures, and setting lesion tissue in at least some of the digital human body models; generating a configuration file for each digital human body model according to the composition information of its tissues, and importing each digital human body model into a simulation environment based on the configuration file; for each imported model, collecting data around the digital human body model with a virtual image device preset in the simulation environment to determine projection data of the model at each angle; and processing the projection data to obtain target image data in the imaging format corresponding to the virtual image device, from which training samples for training the medical image-aided diagnosis model are generated.

Description

Training sample generation method and device for medical image aided diagnosis
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for generating a training sample for medical image assisted diagnosis.
Background
With the rise of deep convolutional neural networks, the accumulation of data, and the great improvement of computing power, deep learning techniques have begun to be applied to many medical fields, such as human body structure analysis, lesion region segmentation, early disease diagnosis, and lesion detection, providing disease prompting and aided-diagnosis functions.
A medical image-aided diagnosis model can assist in diagnosing medical images acquired by imaging equipment such as computed tomography (CT) devices. However, training such a model relies on a large amount of medical image data as training samples; only then can the diagnostic effect and accuracy of the medical image-aided diagnosis model be ensured.
However, structured medical image data are currently scarce. Training samples are usually segmented or annotated by physicians, so the subjective judgment of different physicians easily introduces individual differences, making model accuracy hard to guarantee. In addition, medical image data usually involve patient privacy and are difficult to use directly. As a result, few effective training samples are available, and it is difficult to train a medical image-aided diagnosis model accurately.
Therefore, how to obtain effective training samples for a medical image-aided diagnosis model, so that the model can be trained and its diagnostic accuracy improved, is an urgent problem to be solved.
Disclosure of Invention
The present disclosure provides a training sample generating method and apparatus for medical image assisted diagnosis, so as to partially solve the above-mentioned problems in the prior art.
The technical scheme adopted in the specification is as follows:
the specification provides a training sample generation method for medical image aided diagnosis, which comprises the following steps:
constructing digital human body models of at least one physiological attribute based on the body structures of different human bodies, and setting lesion tissue in at least some of the digital human body models, wherein the physiological attribute characterizes at least one of the age, gender, and health condition of the digital human body model;
determining composition information of different tissues in each digital human body model;
generating configuration files corresponding to the digital human models according to the component information, and importing the digital human models into a simulation environment based on the configuration files;
aiming at each digital human model imported into the simulation environment, carrying out data acquisition around the digital human model through virtual image equipment preset in the simulation environment, and determining projection data of the digital human model at all angles;
And processing the projection data to obtain target image data in an imaging format corresponding to the virtual image equipment, and generating a training sample for training the medical image auxiliary diagnosis model according to the target image data.
Optionally, generating a configuration file corresponding to each digital human model according to the component information specifically includes:
and generating configuration files corresponding to the digital human models according to the component information and preset colors corresponding to the tissues.
Optionally, importing each digital human model into a simulation environment based on the configuration file specifically includes:
converting the format of the digital human body model into a target format matched with the simulation environment for each digital human body model;
and importing the digital human body model in the target format into the simulation environment, and configuring the digital human body model through the configuration file.
Optionally, the target format includes: a tetrahedral mesh format;
configuring the digital human body model through the configuration file specifically includes:
assigning material properties to the mesh corresponding to each tissue in the tetrahedral-mesh digital human body model according to the component information contained in the configuration file.
Optionally, before importing each digital human body model into the simulation environment, the method further comprises:
constructing virtual image equipment in the simulation environment, and setting at least one parameter among detector size, detector resolution, image pixel size, radioactive source particle type, radioactive source energy distribution, radioactive source energy range, and the positions of the detector and the radioactive source, as well as a physical process model of the virtual image equipment.
Optionally, for each digital human body model imported into the simulation environment, collecting data around the digital human body model through virtual image equipment preset in the simulation environment and determining projection data of the digital human body model at each angle specifically includes:
and according to the set acquisition period and the set acquisition frequency, carrying out image acquisition around the digital human body model through the virtual image equipment, and determining projection data of the digital human body model at all angles.
Optionally, processing the projection data to obtain target image data in an imaging format corresponding to the virtual image device, which specifically includes:
reconstructing three-dimensional image data corresponding to the digital human body model according to the projection data at each angle;
And converting the data format of the three-dimensional image data into an imaging format corresponding to the virtual image equipment to obtain the target image data.
Optionally, reconstructing three-dimensional image data corresponding to the digital human body model according to the projection data at each angle, which specifically includes:
synthesizing the projection data at each angle into a three-dimensional data matrix of the projection data;
weighting the three-dimensional data matrix, and filtering the weighted data to obtain filtered data;
and carrying out interpolation and back projection processing on the filtered data to obtain the reconstructed three-dimensional image data.
Optionally, converting the data format of the three-dimensional image data into an imaging format corresponding to the virtual image device to obtain the target image data, which specifically includes:
and slicing the three-dimensional image data according to preset section display parameters to obtain image data corresponding to each section, wherein the image data is used as the target image data.
Optionally, the display parameters include: at least one of a section size, a section resolution, a sampling accuracy, a section center, and a section rotation angle.
Optionally, slicing the three-dimensional image data to obtain image data corresponding to each section, which specifically includes:
and slicing the three-dimensional image data to obtain a corresponding section image of the digital human body model on at least one section of an axial plane, a sagittal plane and a coronal plane.
Optionally, the method further comprises:
training the medical image auxiliary diagnosis model through the training sample;
and deploying the trained medical image auxiliary diagnosis model, and inputting the target image data into the medical image auxiliary diagnosis model after the target image data of the user are acquired, so as to determine a diagnosis result aiming at the user through the medical image auxiliary diagnosis model.
The present specification provides a training sample generation apparatus for medical image-assisted diagnosis, comprising:
the system comprises a construction module, a control module and a control module, wherein the construction module is used for constructing a digital human body model of at least one physiological attribute based on human body structures of different human bodies and setting pathological tissues in at least part of the digital human body model, and the physiological attribute is used for representing at least one of age, sex and health condition of the digital human body model;
The determining module is used for determining component information of different tissues in each digital human body model;
the importing module is used for generating configuration files corresponding to the digital human body models according to the component information, and importing the digital human body models into a simulation environment based on the configuration files;
the acquisition module is used for, for each digital human body model imported into the simulation environment, collecting data around the digital human body model through virtual image equipment preset in the simulation environment and determining projection data of the digital human body model at each angle;
the generation module is used for processing the projection data to obtain target image data in an imaging format corresponding to the virtual image equipment, and generating a training sample for training the medical image auxiliary diagnosis model according to the target image data.
The present specification provides a computer readable storage medium storing a computer program which when executed by a processor implements the training sample generation method for medical image assisted diagnosis described above.
The present specification provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the training sample generation method for medical image assisted diagnosis described above when executing the program.
At least one of the above technical solutions adopted in this specification can achieve the following beneficial effects:
In the training sample generation method for medical image-aided diagnosis provided in this specification, digital human body models of at least one physiological attribute can be constructed based on different human body structures, and lesion tissue can be set in at least some of the digital human body models. A configuration file is generated for each digital human body model according to the composition information of its tissues, and each model is imported into a simulation environment based on the configuration file. Data are then collected around each digital human body model by virtual image equipment preset in the simulation environment to determine projection data at each angle, and the projection data are processed to obtain target image data in the imaging format corresponding to the virtual image equipment, which serve as training samples for training the medical image-aided diagnosis model.
It can be seen from the above that, because the lesion tissue is set in advance when the digital human body model is constructed, the label of the training sample is known, and the training sample is then generated by collecting data from the digital human body model with the virtual image equipment in the simulation environment. Therefore, no real patient image data are needed as training samples, which protects user privacy, avoids the individual differences introduced by the subjective diagnoses of different physicians, and improves the accuracy of training the medical image-aided diagnosis model.
Drawings
The accompanying drawings, which are included to provide a further understanding of the specification, illustrate and explain the exemplary embodiments of the present specification and their description, are not intended to limit the specification unduly. In the drawings:
fig. 1 is a schematic flow chart of a method for generating a training sample for medical image-assisted diagnosis provided in the present specification;
fig. 2 is a schematic diagram of an imaging principle of a virtual imaging device provided in the present disclosure;
FIG. 3 is a schematic diagram of a process for generating training samples for medical image-aided diagnosis models provided in the present specification;
FIG. 4 is a schematic diagram of a training sample generating device for medical image assisted diagnosis provided in the present specification;
fig. 5 is a schematic diagram of an electronic device corresponding to fig. 1 provided in the present specification.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present specification more apparent, the technical solutions of the present specification will be clearly and completely described below with reference to specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
Fig. 1 is a flowchart of a training sample generation method for medical image assisted diagnosis provided in the present specification, which includes the following steps:
s101: a digital mannequin of at least one physiological attribute is constructed based on the body structure of the different human bodies, and lesion tissue is provided in at least part of the digital mannequin, the physiological attribute being used to characterize at least one of age, gender and health of the digital mannequin.
S102: component information for different tissues in each digital manikin is determined.
For an intelligent medical image-aided diagnosis algorithm based on deep learning, training the network model relies on a large amount of medical image data; the data are as important as the model. Mastering the algorithm model alone, without data of sufficient quantity and quality, cannot yield a good training effect, and an incomplete or unbalanced data set will bias the finally learned rules and lead to algorithmic discrimination. Therefore, a unified, standardized, large-scale, high-quality data set is necessary to provide a basic guarantee for the development of related research.
Based on the above, the present disclosure provides a method for generating a training sample of a medical image-assisted diagnosis model, which collects projection data of a digital human model at various angles through a virtual image device in a simulation environment, and further processes the projection data to obtain target image data in an imaging format corresponding to the virtual image device as the training sample.
In the present specification, an execution subject for executing the method for generating the medical image-assisted diagnosis model training sample may be a designated device such as a server, and for convenience of description, the present specification uses only the server as an execution subject, and describes a method for generating the medical image-assisted diagnosis model training sample provided in the present specification.
Specifically, the server needs to construct digital human body models of at least one physiological attribute based on the anatomical structures of different human bodies, and set lesion tissue in at least some of the digital human body models, where the physiological attribute characterizes at least one of the age, gender, and health condition of the digital human body model.
The human body structure may be the anatomical structure of different human bodies, and may also include information such as physiological parameters (for example, blood oxygen saturation) and the elemental composition and content of the human body. Based on this information, the server can construct digital human body models of various physiological attributes, such as "single organ / multiple organs / whole body", "without lesion / with lesion", "male / female", and "infant / adult / elderly". The constructed digital human body models may also include static models and dynamic models; a dynamic model can, for example, dynamically display blood flow and organ motion.
In the process of constructing the digital human body models, voxel models are built with computer-aided design (CAD) techniques according to construction standards for digital human body models and reference data on human anatomical structure, physique, and mass from various medical images and other materials, and multiple types of digital human body models are generated based on NURBS models, polygonal mesh models, and the like.
Further, for the generated digital human body models of various physiological attributes, the server may set lesion tissue in some of them.
Specifically, the server may set originally healthy tissue in a digital human body model to a pathological state, or add new pathological tissue to the model; for example, lesions such as nodules and tumors can be added to a digital organ model according to the morphological characteristics of the lesion, so as to obtain a digital human body model carrying the lesion.
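The following is a minimal, illustrative Python sketch of this lesion-setting step; the voxel grid, nodule position, radius, and density values are assumptions made for illustration, not parameters taken from this specification. A spherical nodule is written into a voxel phantom by overriding the density of the voxels inside a sphere, and the resulting mask doubles as the known label of the sample.

```python
import numpy as np

def insert_spherical_nodule(density_volume, center_vox, radius_vox, nodule_density):
    """Overwrite the voxels inside a sphere with a higher (lesion) density.

    density_volume : 3-D numpy array of tissue densities (g/cm^3)
    center_vox     : (z, y, x) voxel index of the nodule centre
    radius_vox     : nodule radius in voxels
    nodule_density : density assigned to the lesion voxels
    """
    z, y, x = np.indices(density_volume.shape)
    cz, cy, cx = center_vox
    mask = (z - cz) ** 2 + (y - cy) ** 2 + (x - cx) ** 2 <= radius_vox ** 2
    volume = density_volume.copy()
    volume[mask] = nodule_density          # lesion denser than normal lung tissue
    return volume, mask                    # mask doubles as the ground-truth label

# Hypothetical usage: a lung sub-volume with a 5-voxel-radius nodule at its centre.
lung = np.full((128, 128, 128), 0.413, dtype=np.float32)   # lung-lobe density from the text
lung_with_nodule, label = insert_spherical_nodule(lung, (64, 64, 64), 5, 1.0)
```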
The server can then determine the composition information of different tissues according to the elemental composition of the human body, the elemental content of the major tissue organs, the chemical composition of the human body and the density reference data, wherein the composition information is used for representing the composition components (such as mucous membrane, fat, muscle, bone and the like) and the density of each tissue.
The server can determine the composition information of each organ and tissue module in the constructed digital human body model according to the chemical composition and density reference data of the human body. Taking the lung as an example, the lung consists of modules such as the left and right lung lobes, the main bronchi, the bronchioles, and the multilayer membranes of the trachea; the density of the lung lobes is 0.413 g/cm3 and the density of the trachea is 1.031 g/cm3. In addition, the server can assign composition information to the constructed lesion tissue according to the pathological composition of the lesion. For a lung nodule, for example, the morphology may include round masses, the lobulation sign, spiculation (spike-like protrusions), nodules, cavitation, and cavities, and the density is higher than that of normal lung tissue.
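A minimal sketch of how such per-tissue composition information might be organized is shown below. Only the lung-lobe and trachea densities come from the example above; the elemental mass fractions and the nodule entry are rough illustrative assumptions rather than values given in this specification.

```python
# Illustrative per-tissue composition table: density plus approximate
# elemental mass fractions (hydrogen, carbon, nitrogen, oxygen).
TISSUE_COMPOSITION = {
    "lung_lobe": {"density_g_cm3": 0.413,
                  "elements": {"H": 0.103, "C": 0.105, "N": 0.031, "O": 0.749}},
    "trachea":   {"density_g_cm3": 1.031,
                  "elements": {"H": 0.101, "C": 0.139, "N": 0.033, "O": 0.713}},
    "lung_nodule": {"density_g_cm3": 1.000,   # assumed: denser than normal lung
                    "elements": {"H": 0.102, "C": 0.143, "N": 0.034, "O": 0.708}},
}
```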
S103: and generating configuration files corresponding to the digital human models according to the component information, and importing the digital human models into a simulation environment based on the configuration files.
Before importing the digital human body models into the simulation environment, the server may construct virtual image equipment in the simulation environment. In this specification, the virtual image equipment may include a virtual computed tomography (CT) device, a virtual single-photon emission computed tomography (SPECT) device, a virtual positron emission tomography (PET) device, and the like; other virtual imaging devices may of course also be included, and this specification does not limit them.
Based on the Monte Carlo method and on the structure and working principle of medical image equipment such as CT, SPECT, and PET devices, the server can accurately simulate the complex geometric model and the actual physical processes of the equipment, including the geometric structure, electronic response model, and motion of the virtual image equipment's detector; the particle type, energy distribution, and energy range of the radioactive source; and physical processes such as electromagnetic processes, hadronic processes, and optical physics processes, together with the selection of physical models. Meanwhile, the preset parameters of the simulated system (such as the detector size, the energy range of the source, and the positions of the detector and the radioactive source) need to be matched to the size of the constructed digital human body model.
Taking a CT device as an example, the server may build the CT system with a Monte Carlo software package (i.e., by writing the related macro files), including: setting the geometric structure, size, material, position, pixels (e.g., 128×128, 512×512, 1024×1024), and resolution (energy resolution, time resolution, spatial resolution, and the like) of the CT detector scanner; setting the shape, size, position, particle type (electrons, gammas, and the like), emission direction, energy distribution (monoenergetic "Mono", linear "Lin", power law "Pow", exponential "Exp", Gaussian "Gauss", bremsstrahlung, black body "Bbody", cosmic diffuse gamma rays "Cdg", user-defined histogram "UserSpectrum", arbitrary point-wise spectrum "Arb", and the like), and activity of the radioactive source; setting the simulated electromagnetic processes such as the photoelectric effect, Compton scattering, Rayleigh scattering, and pair production; and setting the digitizer to simulate the behavior of the scanner detector and the signal processing chain, including deposited energy, momentum before and after the interaction, and so on, where the output of the digitizer corresponds to the signal actually processed by the front-end electronics (FEE).
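As a hedged illustration of the kind of parameter set described above (independent of any particular Monte Carlo package), the virtual CT configuration could be collected in a simple container; all field names and example values below are assumptions, not the actual macro-file contents.

```python
from dataclasses import dataclass

@dataclass
class VirtualCTConfig:
    """Illustrative parameter container for a simulated CT acquisition."""
    detector_pixels: tuple = (512, 512)          # one of 128x128 / 512x512 / 1024x1024
    detector_size_mm: tuple = (400.0, 400.0)     # assumed detector dimensions
    source_particle: str = "gamma"               # particle type of the radioactive source
    energy_distribution: str = "Mono"            # e.g. Mono / Lin / Pow / Exp / Gauss ...
    energy_keV: float = 80.0                     # assumed source energy
    source_to_detector_mm: float = 1000.0        # must be matched to the phantom size
    physics_processes: tuple = ("PhotoElectric", "Compton", "Rayleigh", "PairProduction")

config = VirtualCTConfig()
```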
After the virtual image device is constructed, the server can adapt the digital body model to the simulation environment.
Specifically, the server may convert the digital human body model into a target format matched to the simulation environment. In this specification the target format may be a tetrahedral mesh format, consisting of the four node indices of each tetrahedron (the .ele file) and the coordinates of each node (the .node file). A digital human body model in this format is thus composed of a large number of tetrahedral meshes: each organ or tissue of the meshed model is represented by a mesh structure and carries a number or other identification tag.
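Below is a rough sketch, assuming a TetGen-style plain-text layout (one header line followed by one node or one tetrahedron per line, with an optional trailing region tag), of loading such a .node/.ele pair; the file names and exact column layout are assumptions.

```python
import numpy as np

def load_tet_mesh(node_path, ele_path):
    """Load node coordinates, tetrahedron node indices, and per-tetrahedron tissue tags."""
    with open(node_path) as f:
        n_nodes = int(f.readline().split()[0])          # header: node count, dim, ...
        nodes = np.array([[float(v) for v in f.readline().split()[1:4]]
                          for _ in range(n_nodes)])
    with open(ele_path) as f:
        n_tets = int(f.readline().split()[0])            # header: tetrahedron count, ...
        rows = [f.readline().split() for _ in range(n_tets)]
    tets = np.array([[int(v) for v in r[1:5]] for r in rows])              # 4 node indices
    tissue_tag = np.array([int(r[5]) if len(r) > 5 else 0 for r in rows])  # organ/tissue label
    return nodes, tets, tissue_tag

# nodes, tets, tags = load_tet_mesh("phantom.node", "phantom.ele")  # requires the mesh files
```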
Meanwhile, the server can generate the configuration file corresponding to each digital human body model according to the component information and the preset color corresponding to each tissue.
The server can then assign material properties to the voxel modules (meshes) of each organ and tissue in the phantom model according to the material information in the configuration file, supplementing the material (by defining the density and composition of the human tissue in the provided material database) and the color (by setting the values of the R, G, and B channels) of each region of the tetrahedral mesh. This completes the construction of the attribute mapping file (material and color .dat file), and the digital human body model and the attribute mapping file are then called when writing the GATE macro file.
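A minimal sketch of writing such a tissue-to-material/colour attribute map is given below; the tag values, material names, RGB colours, file name, and line format are illustrative assumptions and do not reproduce the mapping-file syntax expected by any particular simulation toolkit.

```python
# Hypothetical map: mesh tag -> (material name in the material database, RGB colour).
MATERIAL_COLOR_MAP = {
    1: ("LungLobe", (255, 182, 193)),
    2: ("Trachea",  (176, 224, 230)),
    3: ("Nodule",   (255,   0,   0)),
}

def write_attribute_map(path="material_color.dat"):
    """Write one 'tag material R G B' line per tissue."""
    with open(path, "w") as f:
        for tag, (material, (r, g, b)) in MATERIAL_COLOR_MAP.items():
            f.write(f"{tag} {material} {r} {g} {b}\n")

write_attribute_map()
```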
S104: and aiming at each digital human body model imported into the simulation environment, carrying out data acquisition around the digital human body model through virtual image equipment preset in the simulation environment, and determining projection data of the digital human body model at all angles.
After the digital human body model is imported into the simulation environment, the server can further set the other parameters of the simulation system, for example defining the data output format (ASCII, ROOT, projection set, sinogram, and the like) and defining the time slice parameters (the start and end of the acquisition) and the slice duration (each slice corresponds to one simulated acquisition). The acquisition period is determined from the time slice parameters and the acquisition frequency from the slice duration; images are then collected around the digital human body model in the simulation environment by the virtual image equipment according to the acquisition period and the acquisition frequency, and the projection data of the digital human body model at each angle are determined.
Taking a CT device as an example, the server can set the data output to CT images, each of which is a binary matrix of floating-point numbers (.dat) in which every pixel stores the number of physical interactions at the corresponding position of the phantom. With the start time and stop time set to 0 s and 360 s (an acquisition period of 360 s) and the slice duration set to 1 s (one acquisition per second), the virtual imaging device outputs 360 projections corresponding to 0-360° under these parameters. For ease of understanding, the imaging principle of the virtual image equipment is shown in fig. 2.
Fig. 2 is a schematic diagram of an imaging principle of a virtual image device provided in the present disclosure.
The virtual image equipment comprises a detector and a radioactive source. Rays emitted by the radioactive source pass through the digital human body model and are projected onto the detector, so that the detector obtains the projection data in that projection direction.
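Following the CT example above (360 projections of 256×256 floating-point values, one per 1 s time slice), a minimal sketch of gathering the per-angle projection files into a single data matrix might look as follows; the file-name pattern and dtype are assumptions.

```python
import numpy as np

def load_projections(pattern="projection_{:03d}.dat", n_angles=360, shape=(256, 256)):
    """Stack the per-angle binary projection files into one (nu, nv, n_angles) array."""
    stack = np.empty(shape + (n_angles,), dtype=np.float32)
    for i in range(n_angles):                       # one projection per 1 s time slice
        raw = np.fromfile(pattern.format(i), dtype=np.float32)
        stack[:, :, i] = raw.reshape(shape)
    return stack

# projections = load_projections()  # requires the generated .dat files
```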
S105: and processing the projection data to obtain target image data in an imaging format corresponding to the virtual image equipment, and generating a training sample for training the medical image auxiliary diagnosis model according to the target image data.
The virtual image equipment can reconstruct three-dimensional image data corresponding to the digital human body model according to projection data at all angles, and further convert the data format of the three-dimensional image data into an imaging format corresponding to the virtual image equipment, so as to obtain target image data.
Specifically, the virtual image device can perform three-dimensional reconstruction of the 0-360° projection file set acquired by scanning the digital human body model using the FDK reconstruction algorithm. First, the 360 projection files (.dat) are read in (for example with Matlab) and combined into a 256×256×360 three-dimensional data matrix. The matrix is then weighted, i.e., each projection is multiplied by the cosine of the incidence angle of the cone-beam ray, which equals DSD / sqrt(x^2 + y^2 + DSD^2), where DSD is the distance between the radioactive source and the detector and (x, y) are the coordinates of each pixel on the two-dimensional projection plane.
The weighted data are then filtered row by row (filter types include the ramp filter, cosine, Hamming, Hanning, and the like), and the filtered data are interpolated and weighted-back-projected (interpolation algorithms include linear interpolation, cubic spline interpolation, weighted interpolation, and the like; the weighting function depends on, among other things, the distance from the reconstruction point to the source focal spot). This realizes the three-dimensional reconstruction of the digital model and yields three-dimensional image data, which are converted into data formats such as DICOM or jpg to obtain the target image data used as training samples for the medical image-aided diagnosis model.
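A hedged sketch of the FDK pre-processing just described (cosine weighting of each cone-beam projection followed by ramp filtering of the detector rows) is given below; the detector geometry values are assumptions, and the final cone-beam back-projection is only indicated, not implemented.

```python
import numpy as np

def fdk_weight_and_filter(projections, dsd=1000.0, du=1.0, dv=1.0):
    """Cosine-weight and ramp-filter a stack of cone-beam projections.

    projections : array of shape (nu, nv, n_angles), e.g. 256 x 256 x 360
    dsd         : assumed source-to-detector distance (same units as du, dv)
    du, dv      : assumed detector pixel spacing along u and v
    """
    nu, nv, n_angles = projections.shape
    u = (np.arange(nu) - nu / 2 + 0.5) * du
    v = (np.arange(nv) - nv / 2 + 0.5) * dv
    uu, vv = np.meshgrid(u, v, indexing="ij")
    cos_weight = dsd / np.sqrt(uu ** 2 + vv ** 2 + dsd ** 2)   # cosine of the incidence angle

    ramp = np.abs(np.fft.fftfreq(nu, d=du))                    # ramp ("slope") filter kernel
    filtered = np.empty_like(projections)
    for a in range(n_angles):
        weighted = projections[:, :, a] * cos_weight
        spectrum = np.fft.fft(weighted, axis=0) * ramp[:, None]
        filtered[:, :, a] = np.real(np.fft.ifft(spectrum, axis=0))
    return filtered

# Back-projection (interpolating each filtered projection into the volume along the
# cone-beam rays) would follow here; it is omitted for brevity.
filtered = fdk_weight_and_filter(np.random.rand(256, 256, 360).astype(np.float32))
```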
Further, in the process of converting the three-dimensional image data into image data in the imaging format corresponding to the virtual image device, the server may also process the reconstructed three-dimensional image data with a multi-planar reconstruction (MPR) algorithm.
First, the parameters of the cross-section (slice) display are set, including the section size (based on the maximum extent of the volume in the x, y, and z dimensions), the section resolution (image pixel size), the sampling accuracy (the ratio of the section resolution to the section size), the section center, and the section rotation angle (to obtain sections at different angles). A sampling grid is then generated and the pixel coordinates of the displayed section are set according to the section rotation angle. The data are then sampled: according to the pixel coordinates determined by the section rotation angle, the data at the corresponding coordinates in the three-dimensional volume are read out, and axial, sagittal, or coronal images are obtained and displayed. The image data corresponding to each section are thus used as the target image data.
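A simple sketch of the MPR slicing step, restricted to the unrotated axial, sagittal, and coronal sections through a chosen centre voxel (the oblique sections obtained from the section rotation angle are not shown), could look like this:

```python
import numpy as np

def mpr_slices(volume, center=None):
    """Return the axial, coronal, and sagittal sections through a centre voxel."""
    if center is None:
        center = tuple(s // 2 for s in volume.shape)   # default: volume centre
    cz, cy, cx = center
    return {
        "axial":    volume[cz, :, :],    # transverse plane
        "coronal":  volume[:, cy, :],
        "sagittal": volume[:, :, cx],
    }

# Hypothetical usage on a reconstructed 256^3 volume.
volume = np.zeros((256, 256, 256), dtype=np.float32)
sections = mpr_slices(volume)
```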
After the target image data are acquired, the server can use the lesion tissue previously set in the digital human body model as the label, thereby generating training samples for training the medical image-aided diagnosis model.
In this specification, the reconstructed three-dimensional image data in DICOM or jpg format may be used as training samples, and the slice image data of the axial, sagittal, or coronal planes may of course also be used as training samples. For ease of understanding, this specification provides a schematic diagram of the process of generating training samples for the medical image-aided diagnosis model, as shown in fig. 3.
Fig. 3 is a schematic diagram of a process for generating a training sample of a medical image-aided diagnosis model provided in the present specification.
The server can construct digital human body models of various physiological attributes, determine the component information of each tissue, construct the virtual image equipment in the simulation environment, convert the format of the digital human body models into tetrahedral meshes, and adapt them to the simulation environment.
The component information is then used to write the configuration file and import the digital human body model into the simulation environment, where the physical process between the image equipment and the digital human body model is simulated so as to acquire projection data at all angles.
After the projection data at each angle are obtained, the three-dimensional image data can be reconstructed from them, and the format of the reconstructed three-dimensional image data is converted into an image format corresponding to the virtual image equipment, such as DICOM or jpg, for use as training samples.
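As an illustration of this format-conversion step, the sketch below normalises one reconstructed slice to 8 bits and saves it as a jpg file (the window/level normalisation and file name are assumptions); writing DICOM output instead would typically rely on a dedicated DICOM library such as pydicom.

```python
import numpy as np
from PIL import Image

def save_slice_as_jpg(slice_2d, path="sample_axial.jpg"):
    """Min-max normalise a 2-D slice and save it as an 8-bit jpg image."""
    lo, hi = float(slice_2d.min()), float(slice_2d.max())
    scaled = np.zeros_like(slice_2d) if hi == lo else (slice_2d - lo) / (hi - lo)
    Image.fromarray((scaled * 255).astype(np.uint8)).save(path)

slice_2d = np.random.rand(256, 256).astype(np.float32)   # stand-in for a reconstructed slice
save_slice_as_jpg(slice_2d)
```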
In addition, the server can perform multi-planar reconstruction on the three-dimensional image data, thereby obtaining image data of the three standard sections as training samples.
After the training sample is obtained, the server can train the medical image auxiliary diagnosis model based on the training sample to obtain a trained medical image auxiliary diagnosis model.
For example, the server may use the lesion tissue preset in the digital human body model corresponding to a training sample as its label, input the training sample into the medical image auxiliary diagnosis model to be trained, determine a diagnosis result through the model, and train the model with the optimization objective of minimizing the deviation between the determined diagnosis result and the actual label of the training sample.
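A hedged sketch of this training objective is shown below; the tiny network is only a stand-in for whatever medical image auxiliary diagnosis model is actually used, and the random batch stands in for generated training samples with their known lesion labels.

```python
import torch
import torch.nn as nn

# Placeholder classifier: 2 classes (lesion / no lesion).
model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 1, H, W) generated samples; labels: (B,) known lesion labels."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)   # deviation between diagnosis and label
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical mini-batch of generated training samples.
loss = train_step(torch.randn(4, 1, 64, 64), torch.tensor([0, 1, 1, 0]))
```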
The server can then deploy the trained medical image auxiliary diagnosis model; after acquiring target image data of a user, the target image data are input into the model, which identifies and diagnoses the user's lesions and thus determines the diagnosis result for the user.
As can be seen from the above, both the digital human body model and the virtual image equipment adopted in this scheme are digital models and simulation systems, which are easy to obtain, flexible to design, have short design cycles, are reusable, and have controllable construction conditions. This improves the efficiency of data set acquisition and makes it easier to standardize the data set.
The constructed digital models and the various parameter settings of the system are known, so the generated data set carries prior knowledge. Compared with data sets manually segmented or annotated by expert physicians, it is a gold standard in the true sense and reduces subjective human influence.
The phantoms adopted in this scheme can be constructed from literature such as ICRP publications and national standards, and the data set can be designed and adjusted in a targeted manner, so that the algorithm can generalize across different categories and the balance of the evaluation data is ensured. Meanwhile, no issues of patient privacy, data ownership, or data ethics are involved, so the data can be used openly on a large scale.
The foregoing describes one or more training sample generating methods for medical image assisted diagnosis according to the present disclosure, and based on the same ideas, the present disclosure further provides a corresponding training sample generating apparatus for medical image assisted diagnosis, as shown in fig. 4.
Fig. 4 is a schematic diagram of a training sample generating device for medical image-assisted diagnosis provided in the present specification, including:
a construction module 401 for constructing a digital phantom of at least one physiological attribute based on the anatomy of different human bodies, and setting lesion tissue in at least part of the digital phantom, the physiological attribute being used to characterize at least one of age, gender and health of the digital phantom;
a determining module 402, configured to determine component information of different tissues in each digital human body model;
an importing module 403, configured to generate a configuration file corresponding to each digital human model according to the component information, and import each digital human model into a simulation environment based on the configuration file;
the acquisition module 404 is configured to, for each digital human body model imported into the simulation environment, collect data around the digital human body model through the virtual image device preset in the simulation environment, so as to determine the projection data of the digital human body model at each angle;
the generating module 405 processes the projection data to obtain target image data in an imaging format corresponding to the virtual image device, and generates a training sample for training the medical image auxiliary diagnostic model according to the target image data.
Optionally, the importing module 403 is specifically configured to generate the configuration file corresponding to each digital human body model according to the component information and the preset color corresponding to each tissue.
Optionally, the importing module 403 is specifically configured to, for each digital human body model, convert the format of the digital human body model into a target format matched with the simulation environment, import the digital human body model in the target format into the simulation environment, and configure the digital human body model through the configuration file.
Optionally, the target format includes: a tetrahedral mesh format;
the importing module 403 is specifically configured to assign material properties to the mesh corresponding to each tissue in the tetrahedral-mesh digital human body model according to the component information contained in the configuration file.
Optionally, before importing each digital human body model into the simulation environment, the construction module 401 is further configured to:
construct virtual image equipment in the simulation environment, and set at least one parameter among detector size, detector resolution, image pixel size, radioactive source particle type, radioactive source energy distribution, radioactive source energy range, and the positions of the detector and the radioactive source, as well as a physical process model of the virtual image equipment.
Optionally, the acquisition module 404 is specifically configured to perform image acquisition around the digital human body model by using the virtual image device according to the set acquisition period and the set acquisition frequency, so as to determine projection data of the digital human body model at various angles.
Optionally, the generating module 405 is specifically configured to reconstruct three-dimensional image data corresponding to the digital human body model according to the projection data on the respective angles; and converting the data format of the three-dimensional image data into an imaging format corresponding to the virtual image equipment to obtain the target image data.
Optionally, the generating module 405 is specifically configured to synthesize the projection data at the respective angles into a three-dimensional data matrix of the projection data; weighting the three-dimensional data matrix, and filtering the weighted data to obtain filtered data; and carrying out interpolation and back projection processing on the filtered data to obtain the reconstructed three-dimensional image data.
Optionally, the generating module 405 is specifically configured to slice the three-dimensional image data according to a preset slice display parameter, so as to obtain image data corresponding to each slice, which is used as the target image data.
Optionally, the display parameters include: at least one of a section size, a section resolution, a sampling accuracy, a section center, and a section rotation angle.
Optionally, the generating module 405 is specifically configured to slice the three-dimensional image data to obtain the corresponding section images of the digital human body model on at least one of the axial, sagittal, and coronal planes.
Optionally, the apparatus further comprises:
a training module 406, configured to train the medical image auxiliary diagnostic model through the training sample; and deploying the trained medical image auxiliary diagnosis model, and inputting the target image data into the medical image auxiliary diagnosis model after the target image data of the user are acquired, so as to determine a diagnosis result aiming at the user through the medical image auxiliary diagnosis model.
The present disclosure also provides a computer readable storage medium storing a computer program operable to perform a method of generating a training sample of a medical image-aided diagnosis model provided in fig. 1.
The present specification also provides a schematic structural diagram of an electronic device corresponding to fig. 1 shown in fig. 5. At the hardware level, the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, as illustrated in fig. 5, although other hardware required by other services may be included. The processor reads the corresponding computer program from the nonvolatile memory into the memory and then runs the computer program to realize the method for generating the training sample of the medical image auxiliary diagnosis model described in the above figure 1. Of course, other implementations, such as logic devices or combinations of hardware and software, are not excluded from the present description, that is, the execution subject of the following processing flows is not limited to each logic unit, but may be hardware or logic devices.
Improvements to a technology can be clearly distinguished as hardware improvements (for example, improvements to circuit structures such as diodes, transistors, and switches) or software improvements (improvements to a method flow). However, with the development of technology, many improvements of method flows today can be regarded as direct improvements of hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer programs to "integrate" a digital system onto a PLD, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled must also be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing the logic of a method flow can easily be obtained by merely slightly logically programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner, for example, the controller may take the form of, for example, a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro) processor, logic gates, switches, application specific integrated circuits (Application Specific Integrated Circuit, ASIC), programmable logic controllers, and embedded microcontrollers, examples of which include, but are not limited to, the following microcontrollers: ARC 625D, atmel AT91SAM, microchip PIC18F26K20, and Silicone Labs C8051F320, the memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in a pure computer readable program code, it is well possible to implement the same functionality by logically programming the method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may thus be regarded as a kind of hardware component, and means for performing various functions included therein may also be regarded as structures within the hardware component. Or even means for achieving the various functions may be regarded as either software modules implementing the methods or structures within hardware components.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present description is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the specification. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of storage media for a computer include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, the present specification may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present description can take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In this specification, the embodiments are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the other embodiments. In particular, the system embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, refer to the corresponding description of the method embodiments.
The foregoing is merely exemplary of the present specification and is not intended to limit it. Various modifications and alterations will be apparent to those skilled in the art. Any modification, equivalent substitution, improvement, or the like made within the spirit and principles of the present specification shall be included within the scope of the claims of the present specification.

Claims (15)

1. A training sample generation method for medical image-aided diagnosis, comprising:
constructing digital human body models having at least one physiological attribute based on the body structures of different human bodies, and setting lesion tissue in at least some of the digital human body models, wherein the physiological attribute is used to represent at least one of the age, gender, and health condition of a digital human body model;
determining composition information of different tissues in each digital human body model;
generating configuration files corresponding to the digital human body models according to the composition information, and importing the digital human body models into a simulation environment based on the configuration files;
for each digital human body model imported into the simulation environment, performing data acquisition around the digital human body model through virtual image equipment preset in the simulation environment, and determining projection data of the digital human body model at various angles; and
processing the projection data to obtain target image data in an imaging format corresponding to the virtual image equipment, and generating a training sample for training a medical image auxiliary diagnosis model according to the target image data.
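For orientation only, the following minimal Python sketch strings the claim-1 steps together; every class, function, and value here (DigitalHumanModel, generate_config, acquire_projections, and so on) is a hypothetical placeholder introduced for illustration, not the patented implementation.

```python
# Illustrative sketch only: all names below are stand-ins for the claim-1 steps.
from dataclasses import dataclass, field

@dataclass
class DigitalHumanModel:
    age: int
    gender: str
    health: str                                   # e.g. "healthy" or "lesion"
    tissues: dict = field(default_factory=dict)   # tissue name -> composition info

def generate_config(model):
    # Steps 2-3: derive a configuration file from the tissue composition info.
    return {"attributes": (model.age, model.gender, model.health), "tissues": model.tissues}

def acquire_projections(model, config, n_views=360):
    # Step 4: placeholder for virtual-imaging-device acquisition around the model.
    return [f"projection@{k}deg" for k in range(n_views)]

def to_training_samples(projections, has_lesion):
    # Step 5: placeholder for reconstruction, format conversion, and labelling.
    return [{"image": p, "label": int(has_lesion)} for p in projections]

models = [DigitalHumanModel(35, "F", "healthy"), DigitalHumanModel(62, "M", "lesion")]
samples = []
for m in models:
    cfg = generate_config(m)
    samples += to_training_samples(acquire_projections(m, cfg), m.health == "lesion")
print(len(samples), "training samples generated")
```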
2. The method of claim 1, wherein generating the configuration files corresponding to the digital human body models according to the composition information specifically comprises:
generating the configuration files corresponding to the digital human body models according to the composition information and preset colors corresponding to the respective tissues.
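As an illustration of claim 2, a per-model configuration file might pair each tissue's composition information with a preset display color. The JSON layout, field names, densities, and elemental fractions below are assumptions made for the example, not values taken from this specification.

```python
import json

# Hypothetical layout of a per-model configuration file; all values are illustrative.
model_config = {
    "model_id": "phantom_0001",
    "tissues": {
        "lung":   {"density_g_cm3": 0.26, "elements": {"H": 0.10, "C": 0.11, "N": 0.03, "O": 0.76}, "color_rgb": [120, 180, 255]},
        "bone":   {"density_g_cm3": 1.92, "elements": {"H": 0.03, "C": 0.16, "N": 0.04, "O": 0.44, "Ca": 0.23, "P": 0.10}, "color_rgb": [255, 255, 240]},
        "lesion": {"density_g_cm3": 1.05, "elements": {"H": 0.10, "C": 0.14, "N": 0.03, "O": 0.73}, "color_rgb": [255, 60, 60]},
    },
}

# Write the configuration file that would later be imported with the model.
with open("phantom_0001.json", "w") as f:
    json.dump(model_config, f, indent=2)
```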
3. The method of claim 1, wherein importing the digital human body models into the simulation environment based on the configuration files specifically comprises:
for each digital human body model, converting the format of the digital human body model into a target format matched with the simulation environment; and
importing the digital human body model in the target format into the simulation environment, and configuring the digital human body model through the configuration file.
4. The method of claim 3, wherein the target format comprises a tetrahedral mesh format; and
configuring the digital human body model through the configuration file specifically comprises:
assigning material properties to the mesh cells corresponding to each tissue in the digital human body model in the tetrahedral mesh format, according to the composition information contained in the configuration file.
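A minimal sketch of the material-assignment step of claim 4, under the assumption that the tetrahedral mesh is stored as vertex coordinates plus per-tetrahedron vertex indices and tissue labels; the toy mesh and the density table are illustrative only.

```python
import numpy as np

# Toy tetrahedral mesh: vertex coordinates, tetrahedron vertex indices, and a
# per-tetrahedron tissue label used to look up a material property.
rng = np.random.default_rng(0)
vertices = rng.random((50, 3))                      # 50 mesh vertices in 3D
tetrahedra = rng.integers(0, 50, size=(120, 4))     # 120 tetrahedra (vertex indices)
tissue_of_tet = rng.choice(["lung", "bone", "lesion"], size=120)

# Hypothetical material table that would come from the configuration file.
density_g_cm3 = {"lung": 0.26, "bone": 1.92, "lesion": 1.05}

# Assign a per-tetrahedron material property according to its tissue label.
tet_density = np.array([density_g_cm3[t] for t in tissue_of_tet])
print(tet_density[:5])
```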
5. The method of claim 1, wherein, before importing each digital human body model into the simulation environment, the method further comprises:
constructing the virtual image equipment in the simulation environment, and setting at least one of the following parameters of the virtual image equipment: detector size, detector resolution, image pixel size, radiation source particle type, radiation source energy distribution, radiation source energy range, positions of the detector and the radiation source, and physical process model.
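For illustration, the virtual-image-equipment parameters enumerated in claim 5 could be grouped into a single configuration object; the field names and default values below are assumptions, not parameters prescribed by this specification.

```python
from dataclasses import dataclass

# Illustrative parameter container for the virtual image equipment.
@dataclass
class VirtualImagingDevice:
    detector_size_mm: tuple = (400.0, 300.0)         # physical detector size
    detector_resolution: tuple = (1024, 768)         # detector pixel counts
    pixel_size_mm: float = 0.39                      # image pixel size
    source_particle: str = "gamma"                   # radiation source particle type
    source_energy_spectrum: str = "bremsstrahlung"   # energy distribution
    source_energy_keV: tuple = (20.0, 120.0)         # energy range
    source_to_detector_mm: float = 1000.0            # detector / source positions
    physics_model: str = "standard_em"               # physical process model

device = VirtualImagingDevice()
print(device)
```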
6. The method of claim 1, wherein, for each digital human body model imported into the simulation environment, performing data acquisition around the digital human body model through the virtual image equipment preset in the simulation environment and determining the projection data of the digital human body model at various angles specifically comprises:
performing image acquisition around the digital human body model through the virtual image equipment according to a set acquisition period and a set acquisition frequency, and determining the projection data of the digital human body model at various angles.
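Assuming one full rotation of the virtual image equipment per acquisition period and a fixed acquisition frequency, the projection angles of claim 6 could be derived as in the following sketch; the numeric values are placeholders.

```python
import numpy as np

period_s = 2.0        # assumed time for one full rotation around the model
frequency_hz = 180.0  # assumed acquisitions per second
n_views = int(period_s * frequency_hz)                 # 360 projections per rotation
angles_deg = np.linspace(0.0, 360.0, n_views, endpoint=False)

for theta in angles_deg[:3]:
    # At each angle the virtual device would be repositioned and a projection recorded.
    print(f"acquire projection at {theta:.1f} degrees")
```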
7. The method of claim 1, wherein processing the projection data to obtain the target image data in the imaging format corresponding to the virtual image equipment specifically comprises:
reconstructing three-dimensional image data corresponding to the digital human body model according to the projection data at each angle;
and converting the data format of the three-dimensional image data into an imaging format corresponding to the virtual image equipment to obtain the target image data.
8. The method of claim 7, wherein reconstructing the three-dimensional image data corresponding to the digital human body model according to the projection data at the respective angles specifically comprises:
synthesizing the projection data at the respective angles into a three-dimensional data matrix of the projection data;
weighting the three-dimensional data matrix, and filtering the weighted data to obtain filtered data;
and carrying out interpolation and back projection processing on the filtered data to obtain the reconstructed three-dimensional image data.
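Claim 8 recites a weighting, filtering, interpolation, and back-projection pipeline. The sketch below shows those same stages for a single two-dimensional parallel-beam slice with a ramp filter; it is a simplified, assumed geometry for illustration, not the three-dimensional reconstruction of this specification.

```python
import numpy as np

def fbp_slice(sinogram, angles_deg):
    """Filtered back-projection of one 2D slice from a (n_views, n_det) sinogram."""
    n_views, n_det = sinogram.shape
    # 1. Weight the data (unity weights here; cone-beam geometries use cosine weights).
    weighted = sinogram * 1.0
    # 2. Ramp-filter each projection in the frequency domain.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(weighted, axis=1) * ramp, axis=1))
    # 3. Interpolate each filtered projection onto the image grid and back-project.
    xs = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(xs, xs)
    det_axis = np.arange(n_det) - n_det / 2
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta)    # detector coordinate of each pixel
        recon += np.interp(t, det_axis, proj)        # linear interpolation + accumulation
    return recon * np.pi / (2 * len(angles_deg))

# Toy usage: reconstruct a small random sinogram (90 views, 64 detector bins).
angles = np.linspace(0, 180, 90, endpoint=False)
recon = fbp_slice(np.random.rand(90, 64), angles)
print(recon.shape)
```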
9. The method of claim 7, wherein converting the data format of the three-dimensional image data into the imaging format corresponding to the virtual image equipment to obtain the target image data specifically comprises:
slicing the three-dimensional image data according to preset section display parameters to obtain image data corresponding to each section, and taking the image data as the target image data.
10. The method of claim 9, wherein the display parameters include: at least one of a section size, a section resolution, a sampling accuracy, a section center, and a section rotation angle.
11. The method of claim 9, wherein slicing the three-dimensional image data to obtain the image data corresponding to each section specifically comprises:
slicing the three-dimensional image data to obtain section images of the digital human body model on at least one of an axial plane, a sagittal plane, and a coronal plane.
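As an illustration of the slicing recited in claims 9 to 11, the snippet below extracts one axial, one coronal, and one sagittal section from a stand-in reconstructed volume, assuming the array axes are ordered (axial, coronal, sagittal).

```python
import numpy as np

# Stand-in for reconstructed three-dimensional image data.
volume = np.random.rand(128, 256, 256)

axial    = volume[volume.shape[0] // 2, :, :]   # section perpendicular to the body axis
coronal  = volume[:, volume.shape[1] // 2, :]   # front-to-back section
sagittal = volume[:, :, volume.shape[2] // 2]   # left-to-right section

print(axial.shape, coronal.shape, sagittal.shape)
```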
12. The method of claim 1, wherein the method further comprises:
training the medical image auxiliary diagnosis model through the training sample;
deploying the trained medical image auxiliary diagnosis model, and after target image data of a user is acquired, inputting the target image data into the medical image auxiliary diagnosis model to determine a diagnosis result for the user through the medical image auxiliary diagnosis model.
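A minimal sketch of the training and deployment flow of claim 12, using a toy PyTorch classifier and random tensors as placeholders for the generated training samples and for a user's image data; the architecture and hyperparameters are assumptions, not the model of this specification.

```python
import torch
from torch import nn

# Toy auxiliary diagnosis model: a tiny CNN classifying healthy (0) vs lesion (1).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(16, 1, 64, 64)      # stand-in for generated target image data
labels = torch.randint(0, 2, (16,))     # stand-in labels

# Training on the generated samples.
for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

# "Deployment": run inference on newly acquired user image data.
model.eval()
with torch.no_grad():
    prediction = model(torch.rand(1, 1, 64, 64)).argmax(dim=1)
print("diagnosis class:", int(prediction))
```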
13. A training sample generation apparatus for medical image-aided diagnosis, comprising:
a construction module, configured to construct digital human body models having at least one physiological attribute based on the body structures of different human bodies, and to set lesion tissue in at least some of the digital human body models, wherein the physiological attribute is used to represent at least one of the age, gender, and health condition of a digital human body model;
a determining module, configured to determine composition information of different tissues in each digital human body model;
an importing module, configured to generate configuration files corresponding to the digital human body models according to the composition information, and to import the digital human body models into a simulation environment based on the configuration files;
an acquisition module, configured to, for each digital human body model imported into the simulation environment, perform data acquisition around the digital human body model through virtual image equipment preset in the simulation environment, and determine projection data of the digital human body model at various angles; and
a generation module, configured to process the projection data to obtain target image data in an imaging format corresponding to the virtual image equipment, and to generate a training sample for training a medical image auxiliary diagnosis model according to the target image data.
14. A computer-readable storage medium, characterized in that the storage medium stores a computer program which, when executed by a processor, implements the method of any of the preceding claims 1-12.
15. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of the preceding claims 1-12 when executing the program.
CN202310528602.3A 2023-05-10 2023-05-10 Training sample generation method and device for medical image aided diagnosis Active CN116563246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310528602.3A CN116563246B (en) 2023-05-10 2023-05-10 Training sample generation method and device for medical image aided diagnosis

Publications (2)

Publication Number Publication Date
CN116563246A true CN116563246A (en) 2023-08-08
CN116563246B CN116563246B (en) 2024-01-30

Family

ID=87487446

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310528602.3A Active CN116563246B (en) 2023-05-10 2023-05-10 Training sample generation method and device for medical image aided diagnosis

Country Status (1)

Country Link
CN (1) CN116563246B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20090078487A (en) * 2008-01-15 2009-07-20 (주)온디맨드소프트 3/4-dimensional ultrasound scanning simulator and its simulation method for training purpose
US20100179428A1 (en) * 2008-03-17 2010-07-15 Worcester Polytechnic Institute Virtual interactive system for ultrasound training
US20150347682A1 (en) * 2011-10-04 2015-12-03 Quantant Technology Inc. Remote cloud based medical image sharing and rendering semi-automated or fully automated, network and/or web-based, 3d and/or 4d imaging of anatomy for training, rehearsing and/or conducting medical procedures, using multiple standard x-ray and/or other imaging projections, without a need for special hardware and/or systems and/or pre-processing/analysis of a captured image data
CN109658400A (en) * 2018-12-14 2019-04-19 首都医科大学附属北京天坛医院 A kind of methods of marking and system based on head CT images
CN111276032A (en) * 2020-02-29 2020-06-12 中山大学中山眼科中心 Virtual operation training system
WO2022165809A1 (en) * 2021-02-07 2022-08-11 华为技术有限公司 Method and apparatus for training deep learning model
WO2022176813A1 (en) * 2021-02-17 2022-08-25 富士フイルム株式会社 Learning device, learning method, learning device operation program, training data generation device, machine learning model and medical imaging device
CN113239972A (en) * 2021-04-19 2021-08-10 温州医科大学 Artificial intelligence auxiliary diagnosis model construction system for medical images
CN113112400A (en) * 2021-05-07 2021-07-13 深圳追一科技有限公司 Model training method and model training device
CN115472051A (en) * 2022-08-25 2022-12-13 南通大学 Medical student operation simulation dummy and use method
CN115760858A (en) * 2023-01-10 2023-03-07 东南大学附属中大医院 Kidney pathological section cell identification method and system based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
FANG Qiong: "Progress in the application of the digitized virtual human body in medicine (review)", 安徽卫生职业技术学院学报 (Journal of Anhui Health Vocational and Technical College), no. 01

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117038064A (en) * 2023-10-07 2023-11-10 之江实验室 Evaluation method, device, storage medium and equipment for auxiliary analysis algorithm
CN117038064B (en) * 2023-10-07 2024-01-09 之江实验室 Evaluation method, device, storage medium and equipment for auxiliary analysis algorithm

Also Published As

Publication number Publication date
CN116563246B (en) 2024-01-30

Similar Documents

Publication Publication Date Title
Segars et al. Realistic CT simulation using the 4D XCAT phantom
WO2018119766A1 (en) Multi-modal image processing system and method
Cheng et al. A morphing-Based 3D point cloud reconstruction framework for medical image processing
US9192301B2 (en) Radiological simulation
CN116563246B (en) Training sample generation method and device for medical image aided diagnosis
Segars et al. Extension of the 4D NCAT phantom to dynamic x-ray CT simulation
Benyó Identification of dental root canals and their medial line from micro-CT and cone-beam CT records
Advincula et al. Development and future trends in the application of visualization toolkit (VTK): the case for medical image 3D reconstruction
Denisova et al. Development of anthropomorphic mathematical phantoms for simulations of clinical cases in diagnostic nuclear medicine
Badano et al. The stochastic digital human is now enrolling for in silico imaging trials—methods and tools for generating digital cohorts
CN107004269A (en) The segmentation based on model to anatomical structure
Nassef et al. Extraction of human mandible bones from multi-slice computed tomographic data
Zhang Virtual reality technology
Khodajou-Chokami et al. Data fusion approach for constructing unsupervised augmented voxel-based statistical anthropomorphic phantoms
CN114340496A (en) Analysis method and related device of heart coronary artery based on VRDS AI medical image
JP2021168788A (en) Medical image processing device and medical image processing method
Bert et al. Monte Carlo simulations for medical and biomedical applications
Zhang et al. Optimal modeling and simulation of the relationship between athletes' high-intensity training and sports injuries
CN117038064B (en) Evaluation method, device, storage medium and equipment for auxiliary analysis algorithm
Baum et al. Design of a multiple component geometric breast phantom
FI129810B (en) Apparatus, method and computer program for processing computed tomography (ct) scan data
Chernoglazov Tools for visualising mars spectral ct datasets
CN111028328B (en) Simulation imaging method based on Unity3D acquisition data
König Usability issues in 3D medical visualization
Danping Development of Three Dimensional Volumetric Rendering in Medical Image Application

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant