CN105078514A - Construction method and device of a three-dimensional model, and image monitoring method and device


Info

Publication number: CN105078514A
Application number: CN201410162954.2A
Authority: CN (China)
Original language: Chinese (zh)
Inventor: 文银刚
Original and current assignee: Chongqing Haifu Medical Technology Co., Ltd.
Priority application: CN201410162954.2A
Related PCT application: PCT/CN2015/075087 (published as WO2015161728A1)
Legal status: Pending

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B: DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00: Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08: Detecting organic movements or changes, e.g. tumours, cysts, swellings


Abstract

The invention discloses a method and a device for constructing a three-dimensional model, and an image monitoring method and device. The construction method comprises the following steps: 1) acquiring in real time a monitoring image (a real-time two-dimensional ultrasound image) of the tissue in a monitored region during treatment of a patient, and acquiring, from an anatomical model of the patient, a diagnostic image (a two-dimensional ultrasound/MR/CT image acquired before treatment) whose tissue feature information is similar to that of the monitoring image; 2) registering the monitoring image with the diagnostic image and establishing a registration relation to obtain a two-dimensional registered image; and 3) constructing the registered three-dimensional model from the two-dimensional registered image. The constructed three-dimensional model achieves a high degree of registration with the monitoring image acquired during treatment of the patient, and simultaneously displays clear tissue boundaries and tissue texture information, which helps the physician perform ultrasound positioning and monitoring.

Description

Construction method and device of a three-dimensional model, and image monitoring method and device
Technical field
The invention belongs to the field of medical technology and relates to a method for constructing a three-dimensional model, a three-dimensional model construction device, an image monitoring method and an image monitor.
Background art
In HIFU non-invasive therapy and needle-based minimally invasive therapy under MR/CT-guided ultrasound monitoring, the physician uses ultrasound to locate the target and to monitor the treatment. A two-dimensional ultrasound image, however, shows clear tissue boundaries while its tissue texture is very blurry. MR/CT-guided ultrasound monitoring therefore reconstructs a three-dimensional model from the patient's ultrasound/MR/CT images, registers this model in three dimensions with the patient's real-time position by means of various registration techniques, and then cuts out the MR/CT slice corresponding to the current ultrasound scanning plane, so as to provide the physician with clear tissue texture information. However, the ultrasound/MR/CT images used for the three-dimensional reconstruction are acquired during the MR/CT examination, and the positions of the patient's internal organs during that examination can differ considerably from their positions during HIFU non-invasive therapy or needle-based minimally invasive therapy. If the examination MR/CT images are used directly as monitoring images during treatment, they cannot be registered with the individual tissues in the real-time ultrasound image, and ultrasound monitoring cannot be guided.
In addition, current medical-image three-dimensional reconstruction techniques generally assign colors and transparencies to different ranges of scalar values in the MR/CT image and thereby achieve a virtual visualization effect. For tissues whose scalar values in the MR/CT image differ greatly, this method can quickly display the different tissues separately and distinguish their boundaries. For tissues whose scalar values differ only slightly (for example the abdominal uterus, the endometrium, the extent of a myoma, and the prostate), the boundaries between the tissues cannot be clearly distinguished in a three-dimensional model reconstructed in this way, so the tissues cannot be segmented automatically.
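As a rough illustration of the scalar transfer function described above (the value ranges and colors below are invented for the example and are not taken from the patent):

```python
import numpy as np

# Hypothetical transfer function: each scalar range of a CT/MR volume is
# mapped to an RGBA value (color + opacity). Ranges and colors are made up
# for illustration only.
TRANSFER_TABLE = [
    # (low,  high,  R,   G,   B,   alpha)
    (-1000, -300, 0.0, 0.0, 0.0, 0.0),   # air: fully transparent
    (-300,   100, 0.9, 0.7, 0.6, 0.1),   # soft tissue: faint, nearly transparent
    (100,    400, 0.8, 0.2, 0.2, 0.4),   # denser tissue: semi-opaque red
    (400,   3000, 1.0, 1.0, 0.9, 0.9),   # bone: nearly opaque white
]

def scalar_to_rgba(volume: np.ndarray) -> np.ndarray:
    """Map every voxel scalar to an RGBA value using the table above."""
    rgba = np.zeros(volume.shape + (4,), dtype=np.float32)
    for low, high, r, g, b, a in TRANSFER_TABLE:
        mask = (volume >= low) & (volume < high)
        rgba[mask] = (r, g, b, a)
    return rgba
```

Tissues whose scalar values fall into the same range receive the same color and opacity, which is exactly why such a rendering cannot separate, for example, the uterus from a myoma.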
Medical imaging also offers "digital human body" anatomical models built from medical image data; in a digital human body the display of the different tissues can be adjusted, and tissues can be hidden, moved and rotated. Some researchers have proposed using a digital human body to guide ultrasound monitoring. However, the data source of a digital human body is generally obtained by photographing slices of a cadaver with a digital camera, after which the whole human model is built by three-dimensional reconstruction in a computer. To build an anatomical tissue model of a specific part, the different tissues must additionally be delineated manually or automatically and assigned colors before the anatomical tissue model can be obtained by three-dimensional reconstruction; the positional relationships between the different tissues can then be adjusted individually, but such a model generally carries no tissue texture information. Moreover, the digital human body is only a generic model and cannot provide patient-specific feature information. Using a digital human body to guide ultrasound-monitored HIFU non-invasive therapy and needle-based minimally invasive therapy therefore has limitations that make it unsuitable for clinical use.
In addition, in the clinical practice of MR/CT three-dimensional guidance of ultrasound positioning and monitoring, the tissues shown in the MR/CT images may have undergone significant positional changes by the time ultrasound monitoring is performed during treatment. To compensate for these positional changes of the internal organs, each organ or tissue must be moved or rotated manually.
Summary of the invention
The technical problem to be solved by the invention is, in view of the deficiencies of the prior art, to provide a method for constructing a three-dimensional model, a three-dimensional model construction device, an image monitoring method and an image monitor, such that the constructed three-dimensional model achieves a high degree of registration with the monitoring image acquired during treatment of the patient and simultaneously displays clear tissue boundaries and tissue texture information, thereby assisting the physician in ultrasound positioning and monitoring.
The technical solution adopted to solve the technical problem of the invention is a method for constructing a three-dimensional model, comprising the following steps:
1) acquiring in real time a monitoring image of the tissue in a monitored region during treatment of a patient, the monitoring image being a two-dimensional ultrasound image, and acquiring from an anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image;
2) registering the monitoring image with the diagnostic image and establishing a registration relation to obtain a two-dimensional registered image;
3) constructing the three-dimensional model from the two-dimensional registered image.
Preferably, in step 1), the step of acquiring from the anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image comprises:
11) separating a tissue three-dimensional model of the tissue in the monitored region from the anatomical model;
12) obtaining from the tissue three-dimensional model a diagnostic image whose tissue feature information is similar to that of the monitoring image.
Further preferably, in step 11), the step of separating the tissue three-dimensional model of the tissue in the monitored region from the anatomical model is specifically:
111) delineating, in the images used to build the anatomical model, the boundary of the tissue in the monitored region;
112) extracting the image information within the delineated range to obtain a three-dimensional surface model of the tissue, and extracting the tissue texture information within the delineated range to obtain a three-dimensional texture model of the tissue;
113) combining the three-dimensional surface model and the three-dimensional texture model to form the tissue three-dimensional model of the tissue.
Preferably, in step 111), the step of delineating, in the images used to build the anatomical model, the boundary of the tissue in the monitored region is specifically: performing automatic segmentation of the tissue in the image with an image processing device to obtain a preliminary tissue boundary, then manually adjusting the positions in the preliminary boundary that were delineated inaccurately; the boundary obtained after the adjustment is taken as the tissue boundary.
The images used to build the anatomical model may be MR images, CT images, ultrasound images, contrast-enhanced ultrasound images or ultrasound Doppler images.
Preferably, in step 1), acquiring the monitoring image of the tissue in the monitored region during treatment specifically means acquiring a sagittal monitoring image and a transverse monitoring image of that tissue, and, correspondingly, acquiring from the anatomical model of the patient a sagittal diagnostic image whose tissue feature information is similar to that of the sagittal monitoring image and a transverse diagnostic image whose tissue feature information is similar to that of the transverse monitoring image.
In step 2), the step of registering the monitoring image with the diagnostic image and establishing a registration relation to obtain the two-dimensional registered image is: adjusting the monitoring image so that its position is consistent with the coordinate system of the diagnostic images used to reconstruct the three-dimensional model; then adjusting the coordinate position of the tissue in the corresponding sagittal diagnostic image according to the coordinate position of the tissue in the sagittal monitoring image until the two coincide, thereby registering them and finally obtaining the sagittal two-dimensional registered image; and adjusting the coordinate position of the tissue in the corresponding transverse diagnostic image according to the coordinate position of the tissue in the transverse monitoring image until the two coincide, thereby registering them and finally obtaining the transverse two-dimensional registered image.
In step 3), the step of constructing the three-dimensional model from the two-dimensional registered image is: constructing the three-dimensional model from the sagittal and transverse two-dimensional registered images thus obtained.
Further preferably, the step of adjusting the coordinate position of the tissue in the sagittal/transverse diagnostic image according to the coordinate position of the tissue in the corresponding sagittal/transverse monitoring image so that the two coincide is specifically: selecting certain specific location points of the tissue in the sagittal/transverse monitoring image as first marker points, finding the corresponding points in the sagittal/transverse diagnostic image as second marker points, and adjusting the coordinate positions of the second marker points until they coincide with the coordinate positions of the first marker points, thereby registering the two images.
The invention also provides an image monitoring method, comprising the following steps:
1) obtaining or constructing the three-dimensional model described above;
2) according to the monitoring image of the tissue in the monitored region acquired in real time during treatment, obtaining from the three-dimensional model a diagnostic image whose tissue feature information is consistent with that of the monitoring image; this diagnostic image is a two-dimensional cut image and is used as the navigation image for guiding the treatment.
Preferably, step 2) is followed by:
3) fusing the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, which is used as the navigation image for guiding the treatment.
The invention also provides a three-dimensional model construction device, comprising a first acquisition unit, a second acquisition unit, a registration unit and a three-dimensional construction unit, wherein:
the first acquisition unit is configured to acquire in real time the monitoring image of the tissue in the monitored region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and to output the monitoring image to the registration unit;
the second acquisition unit is configured to acquire from the anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image, and to output the diagnostic image to the registration unit;
the registration unit is configured to register the monitoring image with the diagnostic image to obtain a two-dimensional registered image, and to output the obtained two-dimensional registered image to the three-dimensional construction unit;
the three-dimensional construction unit is configured to construct the three-dimensional model from the two-dimensional registered image.
Preferably, the second acquisition unit comprises an input unit, a separation unit and a processing unit, wherein:
the input unit is configured to receive a tissue region range, chosen by the user in the images used to build the anatomical model, that corresponds to the tissue in the monitored region during treatment, and to output it to the separation unit;
the separation unit is configured to separate, from the anatomical model and according to the tissue region range, the tissue three-dimensional model corresponding to the tissue within that range, and to output the tissue three-dimensional model to the processing unit;
the processing unit is configured to cut the tissue three-dimensional model to obtain a diagnostic image whose tissue feature information is similar to that of the monitoring image.
Further preferably, the separation unit comprises a tissue-boundary delineation module, a three-dimensional surface model extraction module, a three-dimensional texture model extraction module and a tissue three-dimensional model reconstruction module, wherein:
the tissue-boundary delineation module is configured to receive the tissue region range output by the input unit and, according to that range, to delineate the boundary of the tissue within the range in the images used to build the anatomical model;
the three-dimensional surface model extraction module is configured to extract the image information within the delineated range to obtain the three-dimensional surface model of the tissue, and to output the three-dimensional surface model to the tissue three-dimensional model reconstruction module;
the three-dimensional texture model extraction module is configured to extract the tissue texture information within the delineated range to obtain the three-dimensional texture model of the tissue, and to output the three-dimensional texture model to the tissue three-dimensional model reconstruction module;
the tissue three-dimensional model reconstruction module is configured to combine the three-dimensional surface model and the three-dimensional texture model into the tissue three-dimensional model, and to output the tissue three-dimensional model to the processing unit.
The invention also provides an image monitor, comprising a third acquisition unit, a fourth acquisition unit and a display unit, wherein:
the third acquisition unit is configured to acquire in real time the monitoring image of the tissue in the monitored region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and to output the monitoring image to the fourth acquisition unit;
the fourth acquisition unit is configured to obtain from the three-dimensional model construction device a diagnostic image whose tissue feature information is consistent with that of the monitoring image, and to output the diagnostic image to the display unit as a two-dimensional cut image;
the display unit is configured to display the received two-dimensional cut image.
Preferably, the image monitor further comprises a fusion unit, wherein:
the third acquisition unit is further configured to output the acquired monitoring image to the fusion unit, and the fourth acquisition unit is further configured to output the obtained two-dimensional cut image to the fusion unit;
the fusion unit is configured to fuse the monitoring image with the two-dimensional cut image into a two-dimensional fused image and to output the two-dimensional fused image to the display unit;
the display unit is further configured to display the received two-dimensional fused image.
By building an accurate three-dimensional model for each patient undergoing HIFU therapy or needle-based minimally invasive therapy, the invention makes the tissue positional relationships in the three-dimensional model register accurately with the patient during treatment while displaying clear tissue boundaries and tissue texture information, and can therefore assist the physician in ultrasound positioning and monitoring.
The invention adds a number of automatic and semi-automatic tissue extraction algorithms. The tissues of interest are delineated on every layer of the MR/CT images by automatic and semi-automatic algorithms, and three-dimensional reconstruction is performed for each of them to obtain the tissue three-dimensional models. Each tissue three-dimensional model can be moved and rotated independently, so that, according to the tissue boundary information in the patient's real-time ultrasound monitoring images, the different tissues can be placed accurately in three-dimensional space and their positional relationships with adjacent tissues can be controlled precisely.
The image monitoring method of the invention improves the three-dimensional visualization of medical images and the accuracy of the three-dimensional navigation model. The method is particularly suitable for MR/CT-guided, ultrasound-monitored HIFU non-invasive therapy and needle-based minimally invasive therapy. It solves both the inconsistency between the tissue positions in existing MR/CT three-dimensional anatomical models and the positions of the tissues in the patient's body during treatment, and the problem of unclear tissue boundaries in MR/CT three-dimensional anatomical models at sites where the tissue characteristics are similar (for example the abdominal uterus and the prostate).
The beneficial effects of the invention are as follows:
1. The three-dimensional model of the invention is built from medical images of the patient with clear tissue texture (including ultrasound, MR and CT images), so every tissue boundary in the model is distinct and carries clear tissue texture information. The model can accurately navigate ultrasound-monitored HIFU therapy and needle-based minimally invasive therapy, solving the problem of insufficient model accuracy in existing ultrasound/MR/CT-navigated, ultrasound-monitored HIFU therapy and needle-based minimally invasive therapy.
2. The positions of the individual tissues in the constructed three-dimensional model are adjustable. During treatment, the positional relationships between the tissues can be adjusted until they are consistent with the actual relationships at treatment time, finally achieving accurate ultrasound monitoring navigated by the three-dimensional model.
3. The three-dimensional model constructed by the invention achieves a high degree of registration with the monitoring image acquired during treatment of the patient, improving the accuracy and safety of three-dimensional model navigation.
4. The invention solves the problem that tissues cannot be segmented automatically in an MR/CT three-dimensional model when the tissue boundaries are indistinct (for example at sites such as uterine myomas where the tissue characteristics differ little), while providing complete, patient-specific tissue information.
5. The invention can delineate a single tissue of interest, reconstruct a tissue model with texture information, and move and rotate the whole tissue model until it is consistent with the position of the tissue in the patient's body, thereby guiding ultrasound positioning and monitoring with a single-tissue model. The invention can also delineate several tissues of interest simultaneously, reconstruct a textured model for each, and then adjust the spatial relationships between the tissue models according to the patient's ultrasound monitoring images during treatment until they match the positions of the tissues in the body, thereby guiding ultrasound positioning and monitoring with a multi-tissue model and avoiding complications during treatment.
It can be seen that, in the clinical practice of MR/CT three-dimensional guidance of ultrasound positioning and monitoring, when the tissues in the MR/CT images have undergone significant positional changes by the time of ultrasound monitoring during treatment, the invention can still achieve registration with the individual tissues in the real-time ultrasound image and thus accurately guide ultrasound monitoring and treatment.
Brief description of the drawings
Fig. 1 is a flow chart of the image monitoring method of the invention;
Fig. 2 is a flow chart of semi-automatic tissue segmentation of an image in Embodiment 2 of the invention;
Fig. 3 shows an example of the segmentation result of semi-automatic tissue segmentation of an image in Embodiment 2 of the invention;
Fig. 4 is a flow chart of registration in Embodiment 2 of the invention;
Fig. 5 shows an example of transverse and sagittal registration of a tissue in Embodiment 2 of the invention;
Fig. 6 shows the three-dimensional model of the uterus after registration in Embodiment 2 of the invention;
Fig. 7 is a flow chart of forming a two-dimensional fused image in Embodiment 3 of the invention.
Detailed description of the invention
To enable those skilled in the art to better understand the technical solution of the invention, the invention is described in further detail below with reference to the drawings and specific embodiments.
The principle of the invention is as follows: in order to improve the precision of an existing navigation three-dimensional model (i.e. an anatomical model) and register each tissue of interest (for example the uterus, endometrium and myoma), the tissue three-dimensional model of the required tissue is first segmented out of the existing navigation model; the position and angle of each tissue three-dimensional model are then fine-tuned by moving and rotating it until it is registered with the real-time positional relationships of the tissues in the patient during treatment. Because each tissue three-dimensional model retains the texture information of the tissue itself, an accurate, personalized three-dimensional model is finally obtained and used as the navigation model during treatment.
Embodiment 1:
The present embodiment provides a method for constructing a three-dimensional model, comprising the following steps:
Step 1: acquire in real time the monitoring image of the tissue in the monitored region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and acquire from the three-dimensional anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image (the two-dimensional ultrasound image), the diagnostic image being a two-dimensional ultrasound/MR/CT image;
Step 2: register the monitoring image (the two-dimensional ultrasound image) with the diagnostic image (the two-dimensional ultrasound/MR/CT image) used to build the three-dimensional model, to obtain a two-dimensional registered image;
Step 3: construct the three-dimensional model from the two-dimensional registered image.
Here, similar tissue feature information mainly means that information such as the tissue size, the shape of the boundary and the distribution of blood vessels within the tissue is alike. Whether the feature information is similar can be judged from the tissue features of the various images by a physician with professional imaging knowledge.
The monitoring image in the invention is a two-dimensional ultrasound image; this holds throughout the following description and is not repeated. The diagnostic image in the invention may be a two-dimensional ultrasound, MR or CT image; this likewise holds throughout and is not repeated.
Because the anatomical model contains clear boundaries between the tissues and clear tissue texture information, every tissue boundary in the three-dimensional model obtained by the above method (including tissues such as the abdominal uterus, the endometrium, the extent of a myoma, and the prostate) is distinct and carries clear texture information; and because the model corresponds to the patient's real-time position during treatment, it can accurately navigate ultrasound-monitored HIFU therapy and needle-based minimally invasive therapy.
Embodiment 2:
As shown in Fig. 1, the present embodiment provides a method for constructing a three-dimensional model, comprising the following steps:
Step 1: acquire in real time the monitoring image of the tissue in the monitored region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and acquire from the anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image.
In this embodiment, the monitoring image of the tissue in the monitored region is acquired in real time by an ultrasound probe during treatment of the patient.
The anatomical model used in this embodiment is built from diagnostic images of the patient acquired recently (for example a few days before treatment); its lesion tissue information is therefore essentially consistent with that at treatment time, but because the patient's posture has changed, the positional relationships between the tissues (for example the relative positions of the uterus and the bladder) may have changed considerably.
Specifically, in this step, the step of acquiring from the anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image (the two-dimensional ultrasound image) comprises:
Step 11: separate the tissue three-dimensional model of the tissue in the monitored region from the anatomical model.
Specifically, in this embodiment, step 11 comprises:
Step 111: delineate, in the images used to build the anatomical model, the boundary of the tissue in the monitored region.
The images used to build the anatomical model may be MR images, CT images, ultrasound images, contrast-enhanced ultrasound images or ultrasound Doppler images.
Building the anatomical model from such images belongs to the prior art and is not described here.
Preferably, the delineation proceeds as follows: an image processing device first segments the tissue in the images used to build the anatomical model automatically, giving a preliminary tissue boundary; the positions in the preliminary boundary that were delineated inaccurately are then adjusted manually, and the boundary obtained after the adjustment is taken as the tissue boundary.
In diagnostic images of certain tissue sites (for example the abdominal uterus), the ultrasound/MR/CT values of the individual tissues are so close that conventional three-dimensional reconstruction cannot distinguish the boundaries of the different tissues intuitively (for example the boundaries of the uterus, the endometrium and a myoma). The boundary ranges of the different tissues (for example the uterus and a myoma) must then be delineated with the help of manual segmentation, so that an accurate tissue three-dimensional model of each delineated tissue can be reconstructed afterwards.
In other words, the segmentation described above can be semi-automatic, where semi-automatic segmentation means a combination of automatic and manual segmentation.
As shown in Figs. 2 and 3, for each layer of the diagnostic images used to build the anatomical model, a rectangular region containing the tissue of interest (the VOI, volume of interest; for example the uterus, a myoma or the endometrium) is first selected with the mouse (see view (a) of Fig. 3); the tissue of interest is the tissue in the monitored region during treatment. The image processing device (tissue segmentation device) then automatically identifies (segments) the boundary of this tissue with a boundary segmentation algorithm, for example the prior-art level-set algorithm. To shorten the segmentation time, several control points are generated automatically on the boundary so that the tissue boundary can be adjusted quickly and conveniently (see view (b) of Fig. 3). To guarantee segmentation accuracy, positions of the automatically generated boundary that are judged inaccurate can be corrected manually by moving the corresponding control points; usually only a few control points need to be moved to adjust the boundary (see view (c) of Fig. 3), and the whole boundary is then computed from these control points by B-spline interpolation.
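A minimal sketch of this final interpolation step, computing a closed boundary from a few control points with a periodic cubic B-spline; the SciPy routines and the control-point coordinates are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
from scipy.interpolate import splprep, splev

def boundary_from_control_points(control_points, n_samples=200):
    """Interpolate a closed tissue boundary through a few control points
    with a periodic cubic B-spline (illustrative sketch only)."""
    pts = np.asarray(control_points, dtype=float)
    pts = np.vstack([pts, pts[:1]])          # close the polygon for the periodic fit
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=True)
    u = np.linspace(0.0, 1.0, n_samples)
    x, y = splev(u, tck)
    return np.column_stack([x, y])

# Hypothetical control points placed (or moved) by the operator around a uterus outline
ctrl = [(120, 80), (160, 95), (175, 140), (150, 180), (110, 175), (95, 130)]
boundary = boundary_from_control_points(ctrl)
print(boundary.shape)  # (200, 2) sampled boundary points
```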
The segmentation work flow and the segmentation result are shown in Fig. 3: the white dashed rectangle in view (a) marks the tissue region of interest chosen by the user, the red closed curve in view (b) is the tissue boundary segmented automatically by the boundary segmentation algorithm of the image processing device, and the red closed curve in view (c) is the tissue boundary after manual adjustment of view (b).
Step 112: extract the image information within the delineated range to obtain the three-dimensional surface model of the tissue, and extract the tissue texture information within the delineated range to obtain the three-dimensional texture model of the tissue.
After the boundary of the tissue of interest (i.e. the boundary of the tissue in the monitored region) has been delineated accurately in every slice, the MR/CT image information within the delineated range is extracted, and the boundaries delineated on all MR/CT slices are used for surface reconstruction in three-dimensional space by the prior-art marching cubes algorithm, giving the three-dimensional tissue surface. The MR/CT volume within the delineated range can thus be reconstructed, and the resulting three-dimensional tissue surface can be given different colors and transparencies for easy identification. The three-dimensional surface model is formed in this way.
Because delineating only the tissue boundary does not satisfy the need to build the navigation three-dimensional model of the invention, the tissue texture information of every slice within the delineated range must also be extracted. The tissue texture information within the delineated boundaries of all MR/CT slices is therefore additionally rendered by prior-art ray-casting voxel reconstruction to obtain the three-dimensional texture model.
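A minimal sketch of the surface-reconstruction step, assuming the per-slice delineations have already been rasterized into binary masks; the scikit-image marching-cubes call stands in for the prior-art algorithm mentioned above, and all sizes and spacings are illustrative:

```python
import numpy as np
from skimage.measure import marching_cubes

def surface_from_delineations(masks, spacing=(1.0, 1.0, 1.0)):
    """Build a triangulated tissue surface from the per-slice delineations.

    `masks` is a (num_slices, H, W) boolean array in which each slice marks
    the pixels inside the delineated boundary. Only a sketch of the
    marching-cubes step; the voxel spacing is an assumed value.
    """
    volume = np.asarray(masks, dtype=np.float32)
    verts, faces, normals, _ = marching_cubes(volume, level=0.5, spacing=spacing)
    return verts, faces, normals

# Toy volume: a small ellipsoid standing in for a delineated uterus
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
toy_masks = (x / 14.0) ** 2 + (y / 10.0) ** 2 + (z / 8.0) ** 2 <= 1.0
verts, faces, _ = surface_from_delineations(toy_masks, spacing=(2.0, 1.0, 1.0))
print(verts.shape, faces.shape)
```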
Step 113: combine the three-dimensional surface model and the three-dimensional texture model to form the tissue three-dimensional model of the tissue.
The monitored region can contain one or more tissues. If it contains several, the tissue three-dimensional models of the individual tissues can be separated first, the positional relationships of the individual models can then be registered separately, and after registration these tissue three-dimensional models can be combined into one large tissue three-dimensional model. Alternatively, the several tissues can be treated as one large tissue without separating and recombining them, the three-dimensional model of this large tissue can be built, and in the subsequent registration step the monitoring image can be registered with the diagnostic image of each small tissue three-dimensional model inside the large one.
Step 12: obtain from the tissue three-dimensional model a diagnostic image whose tissue feature information is similar to that of the monitoring image.
Preferably, step 12 comprises: cutting the tissue three-dimensional model with a simulated ultrasound scanning plane generated by the ultrasound monitoring device, and manually adjusting the tissue three-dimensional model according to the current position and angle of the simulated scanning plane, so as to obtain from the tissue three-dimensional model a diagnostic image whose tissue feature information is similar to that of the monitoring image.
In step 12, the tissue three-dimensional model is cut with the simulated ultrasound scanning plane (generated from the scanning position and scanning range of the ultrasound monitoring device; the red cutting plane in Fig. 6 is the simulated ultrasound scanning plane, abbreviated below as the scanning plane) to obtain the diagnostic image whose tissue feature information is similar to that of the monitoring image.
Specifically, the scanning position of the scanning plane within the tissue three-dimensional model is estimated from the monitoring image, and the scanning plane is then moved to the corresponding plane of the tissue three-dimensional model; that is, the tissue three-dimensional model is cut with the scanning plane to obtain the corresponding diagnostic image. In practice, the position and angle of the scanning plane are adjusted manually according to the monitoring image, so that a diagnostic image whose tissue feature information is similar to that of the monitoring image is obtained.
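Cutting the tissue three-dimensional model with a simulated scanning plane can be pictured as resampling the volume along that plane. A minimal sketch under assumed geometry (plane origin, direction vectors and sampling step are hypothetical):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cut_with_scan_plane(volume, origin, u_dir, v_dir, size=(256, 256), step=1.0):
    """Sample a 2D image from a 3D volume along a simulated scanning plane.

    The plane is defined by an origin (in voxel coordinates) and two
    orthogonal in-plane direction vectors u_dir and v_dir. Illustrative
    sketch only; the patent does not prescribe this implementation.
    """
    u_dir = np.asarray(u_dir, float); u_dir /= np.linalg.norm(u_dir)
    v_dir = np.asarray(v_dir, float); v_dir /= np.linalg.norm(v_dir)
    rows, cols = size
    r = (np.arange(rows) - rows / 2.0) * step
    c = (np.arange(cols) - cols / 2.0) * step
    rr, cc = np.meshgrid(r, c, indexing="ij")
    # voxel coordinates of every sample point on the plane, shape (3, rows*cols)
    coords = (np.asarray(origin, float)[:, None]
              + u_dir[:, None] * rr.ravel() + v_dir[:, None] * cc.ravel())
    slice_img = map_coordinates(volume, coords, order=1, mode="nearest")
    return slice_img.reshape(rows, cols)

# Example: cut a plane out of a toy 64x64x64 volume (assumed geometry)
vol = np.random.rand(64, 64, 64).astype(np.float32)
img = cut_with_scan_plane(vol, origin=(32, 32, 32),
                          u_dir=(0, 1, 0), v_dir=(0, 0, 1), size=(64, 64))
print(img.shape)  # (64, 64)
```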
Preferably, in step 1 of this embodiment, acquiring the monitoring image of the tissue in the monitored region during treatment specifically means acquiring a sagittal monitoring image and a transverse monitoring image of that tissue, and, correspondingly, acquiring from the anatomical model of the patient a sagittal diagnostic image whose tissue feature information is similar to that of the sagittal monitoring image and a transverse diagnostic image whose tissue feature information is similar to that of the transverse monitoring image.
Step 2: register the monitoring image with the diagnostic image and establish the registration relation to obtain the two-dimensional registered image.
As shown in Fig. 4, in this embodiment the monitoring image and the diagnostic image are preferably registered as follows to obtain the two-dimensional registered image: the body-space positions corresponding to the monitoring image and the diagnostic image are first made consistent; the spatial position of the tissue in the corresponding sagittal diagnostic image is then adjusted, by translation and rotation, according to the spatial position of the tissue in the sagittal monitoring image until the two essentially coincide, which registers them and yields the sagittal two-dimensional registered image; likewise, the spatial position of the tissue in the corresponding transverse diagnostic image is adjusted, by translation and rotation, according to the spatial position of the tissue in the transverse monitoring image until the two essentially coincide, which registers them, establishes the registration relation, and yields the transverse two-dimensional registered image. The sagittal and transverse two-dimensional registered images together form the two-dimensional registered image.
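A minimal sketch of applying such a translation-and-rotation adjustment to a two-dimensional diagnostic image; the SciPy calls and the parameter values are illustrative assumptions supplied by the operator, not the patent's implementation:

```python
import numpy as np
from scipy.ndimage import rotate, shift

def rigid_adjust(diag_img, angle_deg, dy, dx):
    """Rotate and translate a 2D diagnostic image so that its tissue
    overlaps the tissue shown in the monitoring image (sketch only)."""
    out = rotate(np.asarray(diag_img, np.float32), angle_deg,
                 reshape=False, order=1, mode="nearest")
    return shift(out, (dy, dx), order=1, mode="nearest")

# e.g. rotate the diagnostic slice by 5 degrees and move it 12 px down, 4 px right
# adjusted = rigid_adjust(diag_img, angle_deg=5.0, dy=12, dx=4)
```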
The scanning position of the scanning plane can be switched repeatedly (for example alternating between the patient's sagittal and transverse planes) to register the monitoring images and diagnostic images of different scanning directions and thereby obtain the two-dimensional registered image.
Specifically, the sagittal plane is scanned with the ultrasound probe to obtain the patient's real-time sagittal monitoring image during treatment, the corresponding sagittal diagnostic image is obtained from the tissue three-dimensional model, and the two are registered; in the same way, the transverse plane is scanned with the ultrasound probe to obtain the patient's real-time transverse monitoring image, the corresponding transverse diagnostic image is obtained from the tissue three-dimensional model, and the two are registered. The registered three-dimensional model is finally obtained.
That is, in this embodiment the two-dimensional registered image of the separated tissue is obtained from the tissue three-dimensional model separated from the anatomical model in step 11 and the registration relation established in step 2.
More specifically, as shown in Fig. 5, the tissue in the monitored region to be registered is the uterus. First, the ultrasound probe is positioned on the patient's sagittal plane during treatment to acquire the real-time sagittal monitoring image of the uterus (view (c) of Fig. 5) together with its spatial position. A simulated sagittal scanning plane for cutting the tissue three-dimensional model is generated from the monitoring image and its spatial position, and this simulated sagittal scanning plane is used to cut the uterus tissue three-dimensional model extracted from the anatomical model. After the cut sagittal diagnostic image has been obtained (view (d) of Fig. 5), the sagittal diagnostic image of the uterus in view (d) is adjusted manually (or automatically by suitable equipment) according to the tissue feature information of the uterus in the real-time sagittal monitoring image in view (c), for example the size of the uterus; that is, the in-plane position and the out-of-plane deflection angle of the tissue three-dimensional model are adjusted. Once the sagittal image has been adjusted, the ultrasound probe is positioned on the patient's transverse plane to acquire the real-time transverse monitoring image of the uterus (view (a) of Fig. 5) and its spatial position, a simulated transverse scanning plane is generated accordingly and likewise used to cut the uterus tissue three-dimensional model extracted from the anatomical model. After the cut transverse diagnostic image has been obtained (view (b) of Fig. 5), the transverse diagnostic image of the uterus in view (b) is adjusted manually (or automatically) according to the tissue feature information of the uterus in the real-time transverse monitoring image in view (a), again adjusting the in-plane position and the deflection angle. These adjustments are repeated; by registering the images in the patient's sagittal and transverse planes over and over, the spatial relationships of the tissue three-dimensional model extracted from the anatomical model are finally made consistent with the real-time positions of the tissues in the patient's body during treatment. After registration the whole registration relation is locked, giving the registered two-dimensional registered image (no further adjustment is needed as long as the patient's posture remains unchanged), and three-dimensional registration is thereby achieved.
Preferably, during registration, certain specific location points of the tissue are selected in the sagittal/transverse monitoring image as first marker points P1(x, y, z), and the corresponding points are found in the sagittal/transverse diagnostic image as second marker points P2(x, y, z). P1 and P2 are generally not the same coordinate point, so the offset between them is computed: dx = P2.x - P1.x, dy = P2.y - P1.y, dz = P2.z - P1.z. According to the offsets dx, dy, dz along the three axes, the spatial position of the diagnostic image is adjusted, i.e. translated by -dx, -dy, -dz, so that P2 coincides with P1 in space; the tissues represented by the two images then coincide, registering the two images.
For example, in the monitoring image of the abdominal uterus, two points such as a point on the cervix and the far end of the endometrium can be chosen as first marker points; taking one of them, P1 = (20, 30, 10), the corresponding point is found in the diagnostic image as the second marker point P2 = (30, 10, 40). The offsets are dx = 30 - 20 = 10, dy = 10 - 30 = -20, dz = 40 - 10 = 30. Translating the spatial position of the diagnostic image by -dx, -dy, -dz, i.e. by -10, +20, -30 along the x, y and z axes, makes the cervix and endometrium points in the diagnostic image coincide spatially with those in the monitoring image of the uterus, yielding a two-dimensional diagnostic image registered with the monitoring image of the uterus. In this way the uterus tissue in the patient's anatomical model is registered with the uterus tissue during real-time treatment.
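A minimal sketch of the marker-point arithmetic above (coordinates taken from the worked example; the single-marker translation shown here is only the simplest case):

```python
import numpy as np

def translation_from_markers(p1, p2):
    """Offset that moves the diagnostic-image marker P2 onto the
    monitoring-image marker P1 (single-marker translation sketch)."""
    return np.asarray(p1, float) - np.asarray(p2, float)

# Values from the worked example in the text (marker coordinates assumed)
p1 = (20, 30, 10)   # first marker point, picked in the monitoring image
p2 = (30, 10, 40)   # corresponding second marker point in the diagnostic image
move = translation_from_markers(p1, p2)
print(move)                      # [-10.  20. -30.]
print(np.asarray(p2) + move)     # [20. 30. 10.] -> coincides with p1
```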
Step 3: construct the three-dimensional model from the two-dimensional registered image.
In this embodiment, the registered three-dimensional model is reconstructed from the sagittal and transverse two-dimensional registered images obtained above.
First, the two-dimensional registered images and tissue boundaries of the separated tissue in the many slices are reconstructed three-dimensionally to obtain the three-dimensional surface model and the three-dimensional texture model of the separated tissue; the registered three-dimensional model is finally obtained from the surface model and the texture model. On this basis, the simulated scanning plane is derived from the spatial position of the real-time monitoring image of the tissue in the monitored region, and this scanning plane is used to cut the three-dimensional surface model and the three-dimensional volume model, giving a real-time diagnostic image of the patient's uterus region.
View (a) of Fig. 6 shows the three-dimensional model of the intact uterus after registration, in which only the three-dimensional surface model is visible; view (b) of Fig. 6 shows the three-dimensional model of the uterus cut open along the simulated scanning plane after registration, in which the tissue image inside the cut model is visible.
By changing the position of the scanning plane, an accurate two-dimensional cut image can be cut out at any position of the three-dimensional model of the uterus and used as the navigation monitoring image during treatment.
Because the three-dimensional model obtained in this embodiment has an accurate registration relation with the patient's tissue, it can effectively assist ultrasound monitoring and improve the safety of the treatment.
Embodiment 3:
The present embodiment provides an image monitoring method, comprising the following steps:
Step 1: obtain, or construct as in Embodiment 2, the three-dimensional model described above;
Step 2: according to the monitoring image of the tissue in the monitored region acquired in real time during treatment, obtain from the three-dimensional model a diagnostic image whose tissue feature information is consistent with that of the monitoring image; this diagnostic image is a two-dimensional cut image and serves as the navigation image for guiding the treatment.
The two-dimensional cut image is obtained by cutting the tissue texture information out of the three-dimensional model along the virtual ultrasound scanning plane according to the monitoring image, and is displayed in a dedicated window.
Preferably, the image monitoring method further comprises the following step 3.
Step 3: fuse the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, and use this two-dimensional fused image as the navigation image for guiding the treatment.
Fusing the clear two-dimensional cut image with the patient's real-time monitoring image enhances the texture information of the ultrasound image.
As shown in Fig. 7, in the step of fusing the monitoring image with the two-dimensional cut image, the MR/CT cut image obtained from the three-dimensional model can be fused in two ways, namely pseudo-color fusion and grayscale fusion. With pseudo-color fusion, the images of the different modalities can be displayed in different colors; with grayscale fusion, the images of the different modalities can be blended with different grayscale weights. Both help the physician to judge the tissue features of the B-mode scanning plane accurately and to locate the treatment region of the diseased tissue.
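A minimal sketch of the two fusion modes, assuming both images have already been resampled onto the same grid; the blending weight and the color assignment are illustrative choices, not prescribed by the patent:

```python
import numpy as np

def grayscale_fusion(us_img, cut_img, weight=0.5):
    """Blend the real-time ultrasound monitoring image with the 2D cut image
    using an adjustable grayscale weight (weight value is assumed)."""
    us = np.asarray(us_img, np.float32)
    cut = np.asarray(cut_img, np.float32)
    return (1.0 - weight) * us + weight * cut

def pseudo_color_fusion(us_img, cut_img):
    """Put each modality into its own color: ultrasound shown in gray,
    the cut image overlaid in green (an arbitrary illustrative choice)."""
    us = np.asarray(us_img, np.float32)
    cut = np.asarray(cut_img, np.float32)
    rgb = np.stack([us, np.maximum(us, cut), us], axis=-1)
    return np.clip(rgb, 0, 255).astype(np.uint8)

# Toy 4x4 images standing in for the monitoring image and the MR/CT cut image
us = np.full((4, 4), 80.0)
cut = np.full((4, 4), 200.0)
print(grayscale_fusion(us, cut, weight=0.3))   # 0.7*80 + 0.3*200 = 116
```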
Embodiment 4:
The present embodiment provides a three-dimensional model construction device, comprising a first acquisition unit, a second acquisition unit, a registration unit and a three-dimensional construction unit, wherein:
the first acquisition unit is configured to acquire in real time the monitoring image of the tissue in the monitored region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and to output the monitoring image to the registration unit;
the second acquisition unit is configured to acquire from the anatomical model of the patient a diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image, and to output the diagnostic image to the registration unit;
the registration unit is configured to register the monitoring image with the diagnostic image to obtain a two-dimensional registered image, and to output the obtained two-dimensional registered image to the three-dimensional construction unit;
the three-dimensional construction unit is configured to construct the three-dimensional model from the two-dimensional registered image.
Here, similar tissue feature information mainly means that information such as the tissue size, the shape of the boundary and the distribution of blood vessels within the tissue is alike. Whether the feature information is similar can be judged from the tissue features of the various images by a physician with professional imaging knowledge.
Preferably, the second acquisition unit comprises an input unit, a separation unit and a processing unit, wherein:
the input unit is configured to receive a tissue region range, chosen by the user in the images used to build the anatomical model, that corresponds to the tissue in the monitored region during treatment, and to output it to the separation unit;
the separation unit is configured to separate, from the anatomical model and according to the tissue region range, the tissue three-dimensional model corresponding to the tissue within that range, and to output the tissue three-dimensional model to the processing unit;
the processing unit is configured to cut the tissue three-dimensional model to obtain a diagnostic image whose tissue feature information is similar to that of the monitoring image.
Preferably, the separation unit comprises a tissue-boundary delineation module, a three-dimensional surface model extraction module, a three-dimensional texture model extraction module and a tissue three-dimensional model reconstruction module, wherein:
the tissue-boundary delineation module is configured to receive the tissue region range output by the input unit and, according to that range, to delineate the boundary of the tissue within the range in the images used to build the anatomical model;
the three-dimensional surface model extraction module is configured to extract the image information within the delineated range to obtain the three-dimensional surface model of the tissue, and to output the three-dimensional surface model to the tissue three-dimensional model reconstruction module;
the three-dimensional texture model extraction module is configured to extract the tissue texture information within the delineated range to obtain the three-dimensional texture model of the tissue, and to output the three-dimensional texture model to the tissue three-dimensional model reconstruction module;
the tissue three-dimensional model reconstruction module is configured to combine the three-dimensional surface model and the three-dimensional texture model into the tissue three-dimensional model, and to output the tissue three-dimensional model to the processing unit.
Further, the tissue-boundary delineation module is also configured to receive adjustments, input by the user, to the boundary it has delineated, and to use the adjusted boundary as the boundary of the tissue within the tissue region range.
The processing unit comprises a cutting module, which is configured to generate the scanning plane and to cut, with the scanning plane, the tissue three-dimensional model output by the tissue three-dimensional model reconstruction module, so as to obtain the diagnostic image.
Embodiment 5:
This embodiment provides an image monitoring device comprising a third acquiring unit, a fourth acquiring unit and a display unit, wherein:
Third acquiring unit, for acquiring in real time the monitoring image of the tissue in the monitoring region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the fourth acquiring unit;
Fourth acquiring unit, for obtaining, from the three-dimensional model construction device of Embodiment 4, the diagnostic image whose tissue feature information is consistent with that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image, and outputting it to the display unit as a two-dimensional cut image;
Display unit, for displaying the received two-dimensional cut image.
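Tying the three units together, the control flow of this embodiment amounts to an acquire-query-display loop. The sketch below is only a structural outline; the three callables stand in for the probe interface, the model query and the display hardware, none of which are specified here.

def monitor_loop(grab_monitoring_frame, query_matching_cut, show):
    # grab_monitoring_frame : returns the current 2D ultrasound frame (third acquiring unit)
    # query_matching_cut    : maps that frame to the matching 2D cut image (fourth acquiring unit)
    # show                  : displays an image (display unit)
    while True:
        frame = grab_monitoring_frame()     # real-time monitoring image
        if frame is None:                   # probe stopped / treatment finished
            break
        cut = query_matching_cut(frame)     # 2D cut image used as the navigation view
        show(cut)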
Preferably, the image monitoring device also comprises the three-dimensional model construction device of Embodiment 4. The three-dimensional model construction device is connected to the fourth acquiring unit and sends the constructed three-dimensional model to it.
Preferably, the image monitoring device also includes a fusion unit, wherein:
the third acquiring unit is also configured to output the acquired monitoring image to the fusion unit, and the fourth acquiring unit is also configured to output the obtained two-dimensional cut image to the fusion unit;
Fusion unit, for fusing the monitoring image and the two-dimensional cut image to form a two-dimensional fused image, and outputting the two-dimensional fused image to the display unit;
the display unit is also configured to display the received two-dimensional fused image.
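The fusion unit's job can be approximated by a weighted blend of the two registered 2D images. The sketch below min-max normalizes both images and alpha-blends them; the fixed weight and the normalization are illustrative assumptions rather than the fusion rule actually used by the device.

import numpy as np

def fuse_images(monitor_img, cut_img, alpha=0.5):
    # Assumes both images are already registered to the same pixel grid.
    a = monitor_img.astype(np.float32)
    b = cut_img.astype(np.float32)
    a = (a - a.min()) / (np.ptp(a) + 1e-8)   # normalize so neither modality dominates
    b = (b - b.min()) / (np.ptp(b) + 1e-8)
    return alpha * a + (1.0 - alpha) * b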
It should be understood that the above embodiments are merely illustrative embodiments adopted to explain the principle of the present invention, and the present invention is not limited thereto. Those skilled in the art may make various modifications and improvements without departing from the spirit and substance of the present invention, and such modifications and improvements are also regarded as falling within the protection scope of the present invention.

Claims (12)

1. A method for constructing a three-dimensional model, comprising the steps of:
1) acquiring in real time a monitoring image of the tissue in the monitoring region during treatment of a patient, the monitoring image being a two-dimensional ultrasound image, and obtaining, from an anatomical model of the patient, a diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image;
2) registering the monitoring image with the diagnostic image and establishing a registration relationship, to obtain a two-dimensional registered image;
3) constructing the three-dimensional model from the two-dimensional registered image.
2. The method according to claim 1, characterized in that, in step 1), the step of obtaining from the anatomical model of the patient the diagnostic image whose tissue feature information is similar to that of the monitoring image comprises:
11) separating, from the anatomical model, the tissue three-dimensional model of the tissue in the monitoring region;
12) obtaining, from the tissue three-dimensional model, the diagnostic image whose tissue feature information is similar to that of the monitoring image.
3. The method according to claim 2, characterized in that, in step 11), the step of separating the tissue three-dimensional model of the tissue in the monitoring region from the anatomical model specifically comprises:
111) delineating, in the images used to build the anatomical model, the boundary of the tissue in the monitoring region;
112) extracting the image information within the delineated range to obtain a three-dimensional surface model of the tissue, and extracting the tissue texture information within the delineated range to obtain a three-dimensional texture model of the tissue;
113) combining the three-dimensional surface model and the three-dimensional texture model to form the tissue three-dimensional model of the tissue.
4. The method according to claim 3, characterized in that the images used to build the anatomical model are MR images, CT images, ultrasound images, contrast-enhanced ultrasound images or ultrasound Doppler images.
5. The method according to any one of claims 1-4, characterized in that:
In step 1), acquiring the monitoring image of the tissue in the monitoring region during treatment of the patient specifically comprises acquiring a sagittal-plane monitoring image and a transverse-plane monitoring image of the tissue in the monitoring region; correspondingly, obtaining the diagnostic image whose tissue feature information is similar to that of the monitoring image specifically comprises obtaining, from the anatomical model of the patient, a sagittal-plane diagnostic image whose tissue feature information is similar to that of the sagittal-plane monitoring image and a transverse-plane diagnostic image whose tissue feature information is similar to that of the transverse-plane monitoring image;
In step 2), the step of registering the monitoring image with the diagnostic image and establishing a registration relationship to obtain the two-dimensional registered image comprises: adjusting the monitoring image to be consistent with the coordinate system position of the diagnostic images used to reconstruct the three-dimensional model; then adjusting the coordinate position of the tissue in the corresponding sagittal-plane diagnostic image according to the coordinate position of the tissue in the sagittal-plane monitoring image so that the two coordinate positions coincide, thereby registering the two and obtaining the sagittal-plane two-dimensional registered image; and likewise adjusting the coordinate position of the tissue in the corresponding transverse-plane diagnostic image according to the coordinate position of the tissue in the transverse-plane monitoring image so that the two coordinate positions coincide, thereby registering the two and obtaining the transverse-plane two-dimensional registered image (a minimal sketch of this coordinate alignment follows the claims);
In step 3), the step of constructing the three-dimensional model from the two-dimensional registered image comprises constructing the three-dimensional model from the obtained sagittal-plane and transverse-plane two-dimensional registered images.
6. An image monitoring method, comprising the steps of:
1) obtaining or constructing a three-dimensional model according to any one of claims 1-5;
2) obtaining, from the three-dimensional model and according to the monitoring image of the tissue in the monitoring region acquired in real time during treatment of the patient, a diagnostic image whose tissue feature information is consistent with that of the monitoring image, the diagnostic image being a two-dimensional cut image, and using the two-dimensional cut image as a navigation image for guiding the treatment.
7. The image monitoring method according to claim 6, characterized in that, after step 2), the method further comprises:
3) fusing the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, and using the two-dimensional fused image as the navigation image for guiding the treatment.
8. A three-dimensional model construction device, characterized by comprising a first acquiring unit, a second acquiring unit, a registration unit and a three-dimensional construction unit, wherein:
First acquiring unit, for acquiring in real time the monitoring image of the tissue in the monitoring region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the registration unit;
Second acquiring unit, for obtaining, from the anatomical model of the patient, the diagnostic image whose tissue feature information is similar to that of the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image, and outputting the diagnostic image to the registration unit;
Registration unit, for registering the monitoring image with the diagnostic image to obtain the two-dimensional registered image, and outputting the obtained two-dimensional registered image to the three-dimensional construction unit;
Three-dimensional construction unit, for constructing the three-dimensional model from the two-dimensional registered image.
9. The device according to claim 8, characterized in that:
the second acquiring unit comprises an input unit, a separation unit and a processing unit, wherein:
Input unit, for receiving the tissue region range, selected by the user in the images used to build the anatomical model, corresponding to the tissue in the monitoring region during treatment of the patient, and outputting it to the separation unit;
Separation unit, for separating, according to the tissue region range, the tissue three-dimensional model corresponding to the tissue within that range from the anatomical model, and then outputting the tissue three-dimensional model to the processing unit;
Processing unit, for cutting the tissue three-dimensional model to obtain the diagnostic image whose tissue feature information is similar to that of the monitoring image.
10. The device according to claim 9, characterized in that the separation unit comprises a tissue-boundary delineation module, a three-dimensional surface model extraction module, a three-dimensional texture model extraction module and a tissue three-dimensional model reconstruction module, wherein:
Tissue-boundary delineation module, for receiving the tissue region range output by the input unit and, according to that range, delineating the boundary of the tissue within the tissue region range in the images used to build the anatomical model;
Three-dimensional surface model extraction module, for extracting the image information within the delineated range to obtain the three-dimensional surface model of the tissue, and then outputting the three-dimensional surface model to the tissue three-dimensional model reconstruction module;
Three-dimensional texture model extraction module, for extracting the tissue texture information within the delineated range to obtain the three-dimensional texture model of the tissue, and then outputting the three-dimensional texture model to the tissue three-dimensional model reconstruction module;
Tissue three-dimensional model reconstruction module, for combining the three-dimensional surface model and the three-dimensional texture model to form the tissue three-dimensional model, and outputting the tissue three-dimensional model to the processing unit.
11. An image monitoring device, characterized by comprising a third acquiring unit, a fourth acquiring unit and a display unit, wherein:
Third acquiring unit, for acquiring in real time the monitoring image of the tissue in the monitoring region during treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the fourth acquiring unit;
Fourth acquiring unit, for obtaining, from the three-dimensional model construction device according to any one of claims 8-10, the diagnostic image whose tissue feature information is consistent with that of the monitoring image, and outputting the diagnostic image to the display unit as a two-dimensional cut image;
Display unit, for displaying the received two-dimensional cut image.
12. The device according to claim 11, characterized in that the device further includes a fusion unit, wherein:
the third acquiring unit is also configured to output the acquired monitoring image to the fusion unit, and the fourth acquiring unit is also configured to output the obtained two-dimensional cut image to the fusion unit;
Fusion unit, for fusing the monitoring image and the two-dimensional cut image to form a two-dimensional fused image, and outputting the two-dimensional fused image to the display unit;
the display unit is also configured to display the received two-dimensional fused image.
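A minimal sketch of the coordinate-position alignment referenced in claim 5, under the assumption that the "coordinate position of the tissue" can be approximated by an intensity centroid: the diagnostic image is translated so that its tissue centroid coincides with the one in the monitoring image. The threshold, the translation-only motion model and the helper names are assumptions of this sketch; a practical registration would normally also estimate rotation and refine the result with an intensity metric.

import numpy as np
from scipy.ndimage import shift as nd_shift

def align_by_tissue_centroid(monitor_img, diag_img, tissue_threshold=0.5):
    # Assumes the tissue of interest is the bright structure in both images.
    def centroid(img):
        mask = img > tissue_threshold * img.max()
        ys, xs = np.nonzero(mask)
        return np.array([ys.mean(), xs.mean()])
    offset = centroid(monitor_img) - centroid(diag_img)
    registered = nd_shift(diag_img, shift=offset, order=1, mode="nearest")
    return registered, offset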
CN201410162954.2A 2014-04-22 2014-04-22 Construction method and device of three-dimensional model, image monitoring method and device Pending CN105078514A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410162954.2A CN105078514A (en) 2014-04-22 2014-04-22 Construction method and device of three-dimensional model, image monitoring method and device
PCT/CN2015/075087 WO2015161728A1 (en) 2014-04-22 2015-03-26 Three-dimensional model construction method and device, and image monitoring method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410162954.2A CN105078514A (en) 2014-04-22 2014-04-22 Construction method and device of three-dimensional model, image monitoring method and device

Publications (1)

Publication Number Publication Date
CN105078514A true CN105078514A (en) 2015-11-25

Family

ID=54331723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410162954.2A Pending CN105078514A (en) 2014-04-22 2014-04-22 Construction method and device of three-dimensional model, image monitoring method and device

Country Status (2)

Country Link
CN (1) CN105078514A (en)
WO (1) WO2015161728A1 (en)


Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109011221A (en) * 2018-09-04 2018-12-18 东莞东阳光高能医疗设备有限公司 A kind of the neutron capture therapy system and its operating method of dosage guidance
CN111402374B (en) * 2018-12-29 2023-05-23 曜科智能科技(上海)有限公司 Multi-path video and three-dimensional model fusion method, device, equipment and storage medium thereof
CN111161399B (en) * 2019-12-10 2024-04-19 上海青燕和示科技有限公司 Data processing method and assembly for generating three-dimensional model based on two-dimensional image
CN111311738B (en) * 2020-03-04 2023-08-11 杭州市第三人民医院 Ureter 3D digital-analog establishing method adopting imaging and data acquisition device thereof
CN111862305A (en) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 Method, apparatus, and computer storage medium for processing image
CN114973887B (en) * 2022-05-19 2023-04-18 北京大学深圳医院 Interactive display system for realizing ultrasonic image integration by combining multiple modules


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9757036B2 (en) * 2007-05-08 2017-09-12 Mediguide Ltd. Method for producing an electrophysiological map of the heart
CN101623198A (en) * 2008-07-08 2010-01-13 深圳市海博科技有限公司 Real-time tracking method for dynamic tumor
CN101869501B (en) * 2010-06-29 2011-11-30 北京中医药大学 Computer-aided needle scalpel positioning system

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20010007919A1 (en) * 1996-06-28 2001-07-12 Ramin Shahidi Method and apparatus for volumetric image navigation
US20050027187A1 (en) * 2003-07-23 2005-02-03 Karl Barth Process for the coupled display of intra-operative and interactively and iteratively re-registered pre-operative images in medical imaging
CN1814323A (en) * 2005-01-31 2006-08-09 重庆海扶(Hifu)技术有限公司 Focusing ultrasonic therapeutical system
CN101681504A (en) * 2006-11-27 2010-03-24 皇家飞利浦电子股份有限公司 System and method for fusing real-time ultrasound images with pre-acquired medical images
CN101057790A (en) * 2007-06-22 2007-10-24 北京长江源科技有限公司 Treating tumor positioning method and device used for high strength focus ultrasonic knife
CN103366397A (en) * 2012-03-31 2013-10-23 上海理工大学 Spinal column 3D model constructing method based on C-arm 2D projection images
CN103356284A (en) * 2012-04-01 2013-10-23 中国科学院深圳先进技术研究院 Surgical navigation method and system
CN102651145A (en) * 2012-04-06 2012-08-29 哈尔滨工业大学 Three-dimensional femoral model visualization method
CN102999902A (en) * 2012-11-13 2013-03-27 上海交通大学医学院附属瑞金医院 Optical navigation positioning system based on CT (computed tomography) registration results and navigation method thereby
CN103020976A (en) * 2012-12-31 2013-04-03 中国科学院合肥物质科学研究院 Method and system for registering three-dimensional medical images on basis of weighted fuzzy mutual information
CN103295455A (en) * 2013-06-19 2013-09-11 北京理工大学 Ultrasonic training system based on CT (Computed Tomography) image simulation and positioning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Tang Jing, "Application of US/CT/MRI image fusion in localization for HIFU treatment of liver cancer", China Excellent Master's and Doctoral Theses Full-text Database (Master's), Medicine and Health Sciences series *
Xu Lijian, "Registration techniques in ultrasound-guided liver interventional surgery", China Excellent Master's Theses Full-text Database, Information Science and Technology series *

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017088816A1 (en) * 2015-11-27 2017-06-01 广州聚普科技有限公司 Dti-based method for three-dimensional reconstruction of intracranial nerve fiber bundle
CN106037874A (en) * 2016-05-18 2016-10-26 四川大学 Preparation method of inferior alveolar nerve protection guide plate based on inferior maxilla cystic lesion scaling
CN106448401A (en) * 2016-11-01 2017-02-22 孙丽华 Three-dimensional mold of displaying thyroid mini nodules in ultrasound department
CN106709986A (en) * 2017-03-13 2017-05-24 上海术理智能科技有限公司 Nidus and/or organ modeling method and apparatus used for model body making
WO2019011157A1 (en) * 2017-07-11 2019-01-17 中慧医学成像有限公司 Method for adjusting orthosis
CN107610095A (en) * 2017-08-04 2018-01-19 南京邮电大学 Heart CT coronary artery full-automatic partition methods based on image co-registration
CN109003471A (en) * 2018-09-16 2018-12-14 山东数字人科技股份有限公司 A kind of 3 D human body supersonic anatomy tutoring system and method
CN111292248B (en) * 2018-12-10 2023-12-19 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion imaging method and ultrasonic fusion navigation system
CN111292248A (en) * 2018-12-10 2020-06-16 深圳迈瑞生物医疗电子股份有限公司 Ultrasonic fusion imaging method and ultrasonic fusion navigation system
CN109934934A (en) * 2019-03-15 2019-06-25 广州九三致新科技有限公司 A kind of medical image display methods and device based on augmented reality
CN109934934B (en) * 2019-03-15 2023-09-29 广州九三致新科技有限公司 Medical image display method and device based on augmented reality
CN110148208B (en) * 2019-04-03 2023-07-07 中国人民解放军陆军军医大学 Nasopharyngeal radiotherapy teaching model construction method based on Chinese digital person
CN110148208A (en) * 2019-04-03 2019-08-20 中国人民解放军陆军军医大学 A kind of pharynx nasalis radiotherapy teaching mode construction method based on Chinese Digital Human
CN109961436B (en) * 2019-04-04 2021-05-18 北京大学口腔医学院 Median sagittal plane construction method based on artificial neural network model
CN109961436A (en) * 2019-04-04 2019-07-02 北京大学口腔医学院 A kind of median plane construction method based on artificial nerve network model
CN114025673A (en) * 2019-06-06 2022-02-08 尼松尼克公司 Registration of ultrasound images
CN110464380A (en) * 2019-09-12 2019-11-19 李肯立 A kind of method that the ultrasound cross-section image of the late pregnancy period fetus of centering carries out quality control
CN113870339A (en) * 2020-06-30 2021-12-31 上海微创电生理医疗科技股份有限公司 Image processing method, image processing device, computer equipment, storage medium and mapping system
CN112641471A (en) * 2020-12-30 2021-04-13 北京大学第三医院(北京大学第三临床医学院) Bladder capacity determination and three-dimensional shape assessment method and system special for radiotherapy
CN112641471B (en) * 2020-12-30 2022-09-09 北京大学第三医院(北京大学第三临床医学院) Bladder capacity determination and three-dimensional shape assessment method and system special for radiotherapy
CN112617902A (en) * 2020-12-31 2021-04-09 上海联影医疗科技股份有限公司 Three-dimensional imaging system and imaging method
CN112717281B (en) * 2021-01-14 2022-07-08 重庆翰恒医疗科技有限公司 Medical robot platform and control method
CN112717281A (en) * 2021-01-14 2021-04-30 重庆翰恒医疗科技有限公司 Medical robot platform and control method
CN114820731A (en) * 2022-03-10 2022-07-29 青岛海信医疗设备股份有限公司 CT image and three-dimensional body surface image registration method and related device
CN115148341A (en) * 2022-08-02 2022-10-04 重庆大学附属三峡医院 AI structure delineation method and system based on body position recognition
CN116211353A (en) * 2023-05-06 2023-06-06 北京大学第三医院(北京大学第三临床医学院) Wearable ultrasonic bladder capacity measurement and multi-mode image morphology evaluation system
CN116211353B (en) * 2023-05-06 2023-07-04 北京大学第三医院(北京大学第三临床医学院) Wearable ultrasonic bladder capacity measurement and multi-mode image morphology evaluation system

Also Published As

Publication number Publication date
WO2015161728A1 (en) 2015-10-29

Similar Documents

Publication Publication Date Title
CN105078514A (en) Construction method and device of three-dimensional model, image monitoring method and device
Nelson et al. Three-dimensional ultrasound imaging
US7817836B2 (en) Methods for volumetric contouring with expert guidance
JP5460955B2 (en) Coloring of electroanatomical maps to show ultrasound data collection
US9330490B2 (en) Methods and systems for visualization of 3D parametric data during 2D imaging
CN101474075B (en) Navigation system of minimal invasive surgery
CN107067398B (en) Completion method and device for missing blood vessels in three-dimensional medical model
US20110178389A1 (en) Fused image moldalities guidance
EP3298968B1 (en) Method for identification of anatomical landmarks
CN107599412A (en) A kind of three-dimensional modeling method based on institutional framework, system and threedimensional model
EP2926736B1 (en) Apparatus and method for ultrasound image acquisition, generation and display
JP7171168B2 (en) Medical image diagnosis device and medical image processing device
CN107374705A (en) A kind of lung puncture location of operation method under X-ray and preoperative CT guiding
Robb 3-dimensional visualization in medicine and biology
CN109620404A (en) The method and its system of kidney segmentation based on medical image
CN107835661A (en) Ultrasonoscopy processing system and method and its device, supersonic diagnostic appts
CN110072467A (en) The system of the image of guidance operation is provided
Yavariabdi et al. Mapping and characterizing endometrial implants by registering 2D transvaginal ultrasound to 3D pelvic magnetic resonance images
CN107204045A (en) Virtual endoscope system based on CT images
Shen et al. Transrectal ultrasound image-based real-time augmented reality guidance in robot-assisted laparoscopic rectal surgery: a proof-of-concept study
CN106028943A (en) Ultrasonic virtual endoscopic imaging system and method, and apparatus thereof
CN108143501B (en) Anatomical projection method based on body surface vein features
WO2016079209A1 (en) Method and system for volume rendering of medical images
RU2736800C1 (en) Method for preparation and performing of surgical operation on small pelvis organs
Hopp et al. Automatic multimodal 2D/3D image fusion of ultrasound computer tomography and x-ray mammography for breast cancer diagnosis

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20151125