WO2015161728A1 - Three-dimensional model construction method and device, and image monitoring method and device - Google Patents



Publication number
WO2015161728A1
Authority
WO
WIPO (PCT)
Prior art keywords: image, dimensional, tissue, model, monitoring
Prior art date
Application number
PCT/CN2015/075087
Other languages
English (en)
Chinese (zh)
Inventor
文银刚
Original Assignee
重庆海扶医疗科技股份有限公司
Priority date
Filing date
Publication date
Application filed by 重庆海扶医疗科技股份有限公司
Publication of WO2015161728A1

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 - Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 - Detecting organic movements or changes, e.g. tumours, cysts, swellings

Definitions

  • The invention belongs to the field of medical technology and relates to a three-dimensional model construction method, a three-dimensional model construction device, an image monitoring method, and an image monitoring device.
  • In MR/CT-guided, ultrasound-monitored HIFU non-invasive treatment and minimally invasive needle treatment, doctors use ultrasound to locate and monitor the target region.
  • However, ultrasound images are two-dimensional images.
  • In MR/CT-guided ultrasound monitoring, the patient's ultrasound/MR/CT images are therefore reconstructed in three dimensions.
  • Three-dimensional registration between the three-dimensional model and the patient's body position is achieved, and the MR/CT cut image corresponding to the ultrasound scanning plane is then obtained, providing the doctor with clear tissue texture information.
  • However, the three-dimensionally reconstructed ultrasound/MR/CT images are acquired during the patient's MR/CT examination, and the patient's body position during HIFU non-invasive treatment or minimally invasive needle treatment differs from that during the examination.
  • If the MR/CT images acquired at examination time are used directly as monitoring images during treatment, they cannot be registered with the tissues on the real-time ultrasound image and therefore cannot guide ultrasound monitoring.
  • A digital human body data source is generally produced by photographing slices of a cadaver model with a digital camera.
  • Three-dimensional reconstruction in a computer can then establish a whole-body human model.
  • To obtain the tissue anatomical model of a specific part, the different tissues must be delineated manually or automatically, color-coded, and then three-dimensionally reconstructed.
  • In such a tissue anatomical model, the positional relationships between different tissues can be adjusted separately, but the model generally lacks tissue texture information.
  • Moreover, the digital human body can only serve as a generic model and provides no patient-specific feature information; using it to guide ultrasound-monitored HIFU non-invasive treatment and minimally invasive needle treatment is therefore clinically limited.
  • In addition, the positions of tissues on the MR/CT image and on the ultrasound monitoring image during treatment differ by an amount that cannot be ignored.
  • The technical problem to be solved by the present invention is to provide, in view of the deficiencies of the prior art, a three-dimensional model construction method, a three-dimensional model construction device, an image monitoring method, and an image monitoring device, such that the constructed three-dimensional model achieves high registration with the patient's monitoring image during treatment while displaying clear tissue boundaries and tissue texture information to assist physicians in ultrasound localization and monitoring.
  • The technical solution adopted by the present invention is a three-dimensional model construction method comprising the following steps:
  • the monitoring image being a two-dimensional ultrasound image
  • the diagnostic image is a two-dimensional ultrasound/MR/CT image
  • step 1) the step of obtaining a diagnostic image similar to the characteristic information of the tissue in the monitoring image from the anatomical model of the patient is:
  • step 11 the step of separating the three-dimensional tissue model of the tissue in the monitoring area from the anatomical model is specifically:
  • The specific step of delineating the tissue boundary of the tissue in the monitoring area on the image used to establish the anatomical model is: the image processing device automatically segments the tissue in the image to initially outline the tissue boundary; inaccurate positions in the initially outlined boundary are then adjusted manually, and the adjusted boundary is taken as the tissue boundary of the tissue.
  • the image forming the anatomical model may be an MR image, a CT image, an ultrasound image, an ultrasound contrast image, or an ultrasound Doppler image.
  • In step 1), acquiring the monitoring image of the tissue in the monitoring area during treatment specifically means acquiring a sagittal monitoring image and a transverse monitoring image of the tissue in the monitoring area; correspondingly, obtaining a diagnostic image similar to the tissue feature information of the monitoring image from the patient's anatomical model specifically means obtaining from the anatomical model a sagittal diagnostic image similar to the tissue feature information of the sagittal monitoring image and a transverse diagnostic image similar to the tissue feature information of the transverse monitoring image;
  • In step 2), registering the monitoring image with the diagnostic image to establish a registration relationship and obtain a two-dimensional registration image specifically means: the coordinate systems of the monitoring image and of the diagnostic image used for three-dimensional model reconstruction are first adjusted to be consistent; the coordinate position of the tissue in the corresponding sagittal diagnostic image is then adjusted according to the coordinate position of the tissue in the sagittal monitoring image so that the two coincide, registering the two and finally yielding a sagittal two-dimensional registration image; likewise, the coordinate position of the tissue in the corresponding transverse diagnostic image is adjusted according to the coordinate position of the tissue in the transverse monitoring image so that the two coincide, registering the two and finally yielding a transverse two-dimensional registration image;
  • In step 3), the step of constructing the three-dimensional model from the two-dimensional registration image is: constructing the three-dimensional model from the obtained sagittal two-dimensional registration image and transverse two-dimensional registration image.
  • The specific steps of adjusting the coordinate position of the tissue in the sagittal/transverse diagnostic image according to the coordinate position of the tissue in the sagittal/transverse monitoring image so that the two coincide, thereby registering the two, are: a specific position point in the tissue is selected as a first marker point in the sagittal/transverse monitoring image; the point corresponding to the first marker point is then found in the sagittal/transverse diagnostic image as a second marker point; and the coordinate position of the second marker point is adjusted to coincide with that of the first marker point, registering the two.
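Numerically, the marker-point registration above amounts to translating the diagnostic image's tissue coordinates so that the second marker point lands on the first. A minimal sketch (the function name and coordinate values are illustrative, not from the patent):

```python
import numpy as np

def register_by_marker(first_marker, second_marker, diagnostic_points):
    """Translate diagnostic-image coordinates so the second marker point
    (picked in the diagnostic image) coincides with the first marker point
    (picked in the monitoring image)."""
    first = np.asarray(first_marker, dtype=float)
    second = np.asarray(second_marker, dtype=float)
    shift = first - second  # translation that aligns the two marker points
    return np.asarray(diagnostic_points, dtype=float) + shift

# Example: a tissue landmark seen at (120, 80) on the monitoring image but at
# (100, 95) on the diagnostic image; every diagnostic point shifts by (20, -15).
moved = register_by_marker((120, 80), (100, 95), [(100, 95), (110, 90)])
```

The same shift is applied to every point of the diagnostic image, so the two images coincide at the marker and keep their internal geometry.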
  • the invention also provides an image monitoring method, comprising the following steps:
  • the diagnostic image in the three-dimensional model that is consistent with the characteristic information of the tissue in the monitoring image is a two-dimensional cutting image
  • the two-dimensional cut image serves as a navigation image for guiding treatment.
  • After step 2), the following is further included:
  • the invention also provides a three-dimensional model construction device, comprising a first acquisition unit, a second acquisition unit, a registration unit and a three-dimensional construction unit, wherein:
  • a first acquiring unit configured to acquire, in real time, a monitoring image of the tissue in the monitoring area during the treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the registration unit;
  • a second acquiring unit configured to acquire, from the anatomical model of the patient, a diagnostic image similar to the feature information of the tissue in the monitoring image, the diagnostic image is a two-dimensional ultrasound/MR/CT image, and the diagnostic image is Output to the registration unit;
  • a registration unit configured to register the monitoring image with the diagnostic image to obtain a two-dimensional registration image, and output the obtained two-dimensional registration image to a three-dimensional construction unit;
  • a three-dimensional building unit configured to construct a three-dimensional model according to the two-dimensional registration image.
  • the second obtaining unit comprises an input unit, a separating unit and a processing unit, wherein:
  • the input unit is configured to receive a range of the tissue region selected by the user in the image of the anatomical model and corresponding to the tissue in the monitoring area during the treatment of the patient, and output the same to the separation unit;
  • a separating unit configured to separate a three-dimensional tissue model corresponding to the tissue in the tissue region range from the anatomical model according to the tissue region range, and output the tissue three-dimensional model to the processing unit;
  • a processing unit configured to cut the tissue three-dimensional model to obtain a diagnostic image similar to the tissue feature information in the monitoring image.
  • the separation unit comprises a tissue boundary delineation module, a three-dimensional surface model excavation module, a three-dimensional texture model extraction module, and a tissue three-dimensional model reconstruction module, wherein:
  • a tissue boundary delineation module configured to receive the tissue region range output by the input unit, and delineate the tissue boundary of the tissue region in the image of the anatomical model according to the tissue region range;
  • a three-dimensional surface model excavation module configured to extract the image information within the delineated range to obtain a three-dimensional surface model of the tissue, and output the three-dimensional surface model to the tissue three-dimensional model reconstruction module;
  • a three-dimensional texture model extraction module configured to extract the tissue texture information within the delineated range to obtain a three-dimensional texture model of the tissue, and output the three-dimensional texture model to the tissue three-dimensional model reconstruction module;
  • a tissue three-dimensional model reconstruction module configured to combine the three-dimensional surface model and the three-dimensional texture model into the tissue three-dimensional model of the tissue, and output the tissue three-dimensional model to the processing unit.
  • the present invention also provides an image monitoring apparatus, including a third obtaining unit, a fourth obtaining unit, and a display unit, wherein:
  • a third acquiring unit configured to acquire, in real time, a monitoring image of the tissue in the monitoring area during the treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the fourth acquiring unit;
  • a fourth acquiring unit configured to acquire, from the three-dimensional model construction device, a diagnostic image consistent with the tissue feature information in the monitoring image, and output the diagnostic image as a two-dimensional cut image to the display unit;
  • a display unit configured to display the received two-dimensional cut image.
  • the image monitoring device further comprises a fusion unit, wherein:
  • the third acquiring unit is further configured to output the acquired monitoring image to the fusion unit, and the fourth acquiring unit is further configured to output the acquired two-dimensional cut image to the fusion unit;
  • a fusion unit configured to fuse the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, and output the two-dimensional fused image to a display unit;
  • the display unit is further configured to display the received two-dimensional fused image.
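The patent does not specify how the fusion unit combines the monitoring image with the two-dimensional cut image; a common choice is per-pixel alpha blending, sketched here under that assumption (the function name and weight are illustrative):

```python
import numpy as np

def fuse(monitoring, cut, alpha=0.5):
    """Weighted per-pixel average of the real-time monitoring image and the
    two-dimensional cut image, producing the two-dimensional fused image.
    Alpha blending is an assumption; the patent leaves the fusion rule open."""
    m = np.asarray(monitoring, dtype=float)
    c = np.asarray(cut, dtype=float)
    return alpha * m + (1.0 - alpha) * c

# Example on two constant 4x4 images: blending 100 and 200 equally gives 150.
fused = fuse(np.full((4, 4), 100.0), np.full((4, 4), 200.0))
```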
  • The present invention establishes an accurate three-dimensional model for each patient during HIFU treatment and minimally invasive needle treatment; the tissue positions in the three-dimensional model register accurately with the patient's monitoring images during treatment, and the clear tissue boundaries and tissue texture information assist physicians in ultrasound positioning and monitoring.
  • The method of the invention adds several automatic and semi-automatic tissue extraction algorithms: the tissues of interest are delineated by automatic and semi-automatic algorithms on each MR/CT slice, three-dimensional reconstruction yields a tissue 3D model for every tissue, and each 3D model can be moved and rotated individually. According to the tissue boundary information in the patient's real-time ultrasound monitoring, the different tissues can be placed at accurate 3D spatial positions, and the positional relationships of adjacent tissues can be controlled accurately.
  • The image monitoring method of the invention improves the three-dimensional visualization of medical images and the accuracy of the three-dimensional navigation model; it is particularly suitable for MR/CT-guided, ultrasound-monitored HIFU non-invasive treatment and minimally invasive needle treatment.
  • The invention thus solves well the problem of unclear tissue boundaries in the model.
  • The three-dimensional model constructed by the invention is established from medical images with clear tissue texture of the patient (including ultrasound, MR, and CT images), so that each tissue in the model has clear boundaries and clear tissue texture information.
  • The positions of the tissues in the constructed three-dimensional model are adjustable; during treatment the positional relationships between tissues are adjusted until they are consistent with the treatment situation, finally realizing accurate three-dimensional-model-navigated ultrasound monitoring;
  • the three-dimensional model constructed by the invention can achieve higher registration with the monitoring image of the patient during treatment, thereby improving the accuracy and safety of the three-dimensional model navigation;
  • For tissues whose boundaries are inconspicuous on MR/CT and therefore cannot be segmented automatically (for example, uterine fibroids with small differences in tissue characteristics), the invention can still provide complete, patient-specific tissue information;
  • The invention can delineate a single tissue of interest, reconstruct a textured tissue model in three dimensions, and then move and rotate the whole tissue model until it coincides with the position of that tissue in the patient during treatment, realizing single-tissue-model guided ultrasound positioning and monitoring. The invention can also delineate several tissues of interest at the same time and reconstruct a textured model for each.
  • The spatial positional relationships of the tissue models are then adjusted according to the patient's ultrasound monitoring images during treatment until they coincide with the in-body tissue positions, realizing multi-tissue-model guided ultrasound positioning and monitoring and helping avoid complications during treatment.
  • Compared with the situation in which tissue positions on the MR/CT image and on the ultrasound monitoring image during treatment differ non-negligibly, the present invention achieves registration with each tissue on the real-time ultrasound image, accurately guiding ultrasound monitoring and treatment.
  • FIG. 1 is a flow chart of an image monitoring method of the present invention
  • FIG. 2 is a flow chart showing semi-automatic tissue segmentation of an image in Embodiment 2 of the present invention
  • FIG. 3 is a view showing an example of the segmentation effect of semi-automatic tissue segmentation of an image in Embodiment 2 of the present invention
  • FIG. 5 is a view showing an example of transverse and sagittal registration of tissue in Embodiment 2 of the present invention
  • Figure 6 is a view showing a three-dimensional model of the uterus after registration in Embodiment 2 of the present invention.
  • Fig. 7 is a flow chart showing the formation of a two-dimensional fused image in the third embodiment of the present invention.
  • The principle of the present invention is as follows: in order to improve the accuracy of the existing three-dimensional model used for navigation (i.e., an anatomical model), the tissue 3D model of each tissue of interest (e.g., uterus, endometrium, and fibroids) is first separated from the existing navigation model.
  • The position and angle of each tissue 3D model are then fine-tuned by moving and rotating, so that the tissue 3D models register in real time with the patient's monitoring images during treatment.
  • This embodiment provides a method for constructing a three-dimensional model, including the following steps:
  • Step 1 Acquire in real time monitoring images of the tissue in the monitoring area during patient treatment, the monitoring images being two-dimensional ultrasound images, and obtain from the patient's three-dimensional anatomical model a diagnostic image whose feature information is similar to that of the tissue in the monitoring image (the two-dimensional ultrasound image), the diagnostic image being a two-dimensional ultrasound/MR/CT image;
  • Step 2 registering the monitoring image (two-dimensional ultrasound image) with the diagnostic image (two-dimensional ultrasound/MR/CT image) for constructing a three-dimensional model to obtain a two-dimensional registration image;
  • Step 3 Construct a three-dimensional model according to the two-dimensional registration image.
  • The similarity of tissue feature information mainly refers to similarity in tissue size, boundary shape, and the distribution of blood vessels within the tissue.
  • Such feature-information similarity can be judged by a doctor with imaging expertise based on the tissue characteristics in the various images.
  • The monitoring images described in the present invention are all two-dimensional ultrasound images (likewise below; not repeated later); the diagnostic images described in the present invention may be two-dimensional ultrasound, MR, CT, etc. (likewise below; not repeated later).
  • Each tissue in the three-dimensional model obtained by the above method (including, for example, the uterus, endometrium, and fibroid boundary in the abdomen, and the prostate) has an obvious tissue boundary and clear tissue texture information; because the model corresponds to the patient's real-time position during treatment, it can accurately navigate ultrasound-monitored HIFU treatment and minimally invasive needle treatment.
  • the embodiment provides a method for constructing a three-dimensional model, including the following steps:
  • Step 1 Real-time acquisition of a monitoring image of the tissue in the monitoring area during the treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and obtaining a diagnostic image similar to the characteristic information of the tissue in the monitoring image from the anatomical model of the patient
  • the diagnostic image is a two-dimensional ultrasound/MR/CT image.
  • the monitoring image of the tissue in the monitoring area during the treatment of the patient is obtained in real time through the ultrasound probe.
  • The anatomical model used in this embodiment is established from the patient's recent diagnostic images (e.g., two or three days before treatment); it is therefore basically consistent with the tissue lesion information during treatment.
  • However, because the patient's body position has changed, the positional relationships between tissues (such as the relative positions of the uterus and bladder) have changed considerably.
  • the step of acquiring a diagnostic image similar to the feature information of the tissue in the monitoring image (two-dimensional ultrasound image) from the anatomical model of the patient includes:
  • Step 11 Separating a three-dimensional tissue model of tissue within the monitored area from the anatomical model.
  • step 11 includes:
  • Step 111 Delineate the tissue boundary of the tissue in the monitoring area in the image of the anatomical model.
  • the image used to establish the anatomical model may be an MR image, a CT image, an ultrasound image, an ultrasound contrast image, or an ultrasound Doppler image.
  • The specific delineation step may be: the image processing device automatically segments the tissue in the image of the anatomical model to initially outline the tissue boundary of the tissue; the inaccurate positions in the initially outlined boundary are then adjusted manually, and the adjusted boundary is taken as the tissue boundary of the tissue.
  • the above-mentioned segmentation method may be a semi-automatic segmentation method, and the so-called semi-automatic segmentation refers to a segmentation method in which automatic segmentation and manual segmentation are combined.
  • Specifically, the user may first box-select the tissue of interest with the mouse as a rectangular VOI (volume of interest; the tissue of interest may be, for example, the uterus, fibroids, or endometrium, as shown in view a of Figure 3); the tissue of interest should be the tissue within the monitoring area at the time of treatment. The image processing device (tissue segmentation device) then automatically identifies (segments) the tissue boundary with a tissue boundary segmentation algorithm (such as the Level Set segmentation algorithm, which is prior art). To shorten the segmentation time, several key control points are automatically generated on the tissue boundary to facilitate rapid adjustment.
  • The position of the automatically generated tissue boundary can be corrected manually by adjusting the key control points on it, so the adjusted tissue boundary can be obtained by moving only a small number of points (as shown in view c of Figure 3), and the entire tissue boundary is then computed from these key control points with a B-spline interpolation algorithm.
  • The white dotted rectangular frame in view a of FIG. 3 represents the range of the tissue region of interest selected by the user; the red closed curve in view b of FIG. 3 is the tissue boundary automatically segmented by the image processing device with the tissue boundary segmentation algorithm; and the red closed curve in view c of FIG. 3 is the tissue boundary after manual adjustment of view b.
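The key-control-point step above can be sketched as follows: a handful of adjusted control points are interpolated into a full closed boundary. SciPy's periodic parametric spline is used here as a stand-in for the unspecified B-spline interpolation routine (the function name, point counts, and toy contour are illustrative):

```python
import numpy as np
from scipy.interpolate import splprep, splev

def boundary_from_control_points(points, n_samples=200):
    """Interpolate a closed tissue boundary through a few key control points.
    per=1 makes the cubic spline periodic (closed curve); s=0 forces it to
    pass exactly through every control point."""
    pts = np.asarray(points, dtype=float)
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=0, per=1)
    u = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    x, y = splev(u, tck)
    return np.column_stack([x, y])

# Example: eight manually adjusted control points around a roughly elliptical
# tissue cross-section.
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
ctrl = np.column_stack([50 + 20 * np.cos(theta), 60 + 15 * np.sin(theta)])
boundary = boundary_from_control_points(ctrl)
```

Only the few control points need manual adjustment; the dense boundary used for display and for the surface reconstruction is regenerated from them.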
  • Step 112 Extract the image information within the delineated range to obtain a three-dimensional surface model of the tissue, and extract the tissue texture information within the delineated range to obtain a three-dimensional texture model of the tissue.
  • Specifically, the MR/CT image information within the delineated range is extracted, and the tissue boundaries delineated on all MR/CT slices are reconstructed in three-dimensional space with the Marching Cubes algorithm (prior art) to obtain the three-dimensional tissue surface, so that the MR/CT three-dimensional body within the delineated range can be reconstructed.
  • The three-dimensional tissue surface is given different colors and transparencies for easy identification; the three-dimensional surface model is formed by the above method.
  • The texture information of the tissue within the delineated outline of each image slice must also be extracted; the tissue texture information within the tissue boundary ranges delineated on all MR/CT slices is therefore reconstructed with the ray-casting algorithm (prior art) to obtain the three-dimensional texture model.
  • Step 113 Combine the three-dimensional surface model and the three-dimensional texture model to form the tissue three-dimensional model of the tissue.
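The Marching Cubes reconstruction of step 112 can be illustrated with scikit-image's implementation on a toy binary volume standing in for the stack of delineated MR/CT masks (the sphere and grid size are illustrative, not from the patent):

```python
import numpy as np
from skimage.measure import marching_cubes

# Toy binary volume: a sphere of radius 10 voxels inside a 32^3 grid, playing
# the role of the tissue region delineated on every slice.
z, y, x = np.mgrid[:32, :32, :32]
volume = ((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 10 ** 2).astype(float)

# Marching Cubes extracts the tissue surface at the 0.5 iso-level: verts are
# the 3D surface vertices, faces index triangles between them.
verts, faces, normals, values = marching_cubes(volume, level=0.5)
```

The resulting triangle mesh is the three-dimensional tissue surface, which can then be colored and rendered with adjustable transparency as the surface model.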
  • There may be one or more tissues in the monitoring area. If there are several, their three-dimensional models can be separated, the positional relationship of each tissue model registered separately, and the registered models then combined into one large tissue three-dimensional model.
  • Alternatively, the several tissues may be treated as one large tissue and a three-dimensional model of that large tissue constructed; in the subsequent registration step, the monitoring image is registered with the diagnostic images of each small tissue three-dimensional model within the large three-dimensional model.
  • Step 12 Obtain from the tissue three-dimensional model a diagnostic image whose feature information is similar to that of the tissue in the monitoring image.
  • Specifically, step 12 includes: cutting the tissue three-dimensional model with a simulated ultrasound scanning plane generated by the ultrasound monitoring device, and manually adjusting the tissue three-dimensional model according to the position and angle of the simulated ultrasound scanning plane, so as to obtain from the tissue three-dimensional model a diagnostic image similar to the tissue feature information of the monitoring image.
  • In step 12, a simulated ultrasound scanning plane (generated according to the scanning position and scanning range of the ultrasound monitoring apparatus; the red plane cutting the tissue in FIG. 6 is such a simulated ultrasound scanning plane, referred to below as the ultrasound scanning plane) is used to cut the tissue three-dimensional model to obtain a diagnostic image similar to the tissue feature information of the monitoring image.
  • The scanning position of the ultrasound scanning plane within the tissue three-dimensional model is estimated from the monitoring image, and the ultrasound scanning plane is adjusted to the corresponding slice position of the tissue three-dimensional model; the tissue three-dimensional model is then cut by the ultrasound scanning plane to obtain the corresponding diagnostic image. In particular, the position and angle of the ultrasound scanning plane are adjusted manually according to the monitoring image, thereby obtaining a diagnostic image similar to the tissue feature information of the monitoring image.
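Cutting the tissue three-dimensional model with the simulated ultrasound scanning plane is, numerically, resampling the volume along an oriented plane. A sketch using trilinear interpolation (the function name, plane parameterisation, and toy volume are assumptions, not from the patent):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def cut_plane(volume, origin, u_dir, v_dir, shape=(64, 64)):
    """Sample a 2D 'diagnostic image' from a 3D volume along a plane defined
    by an origin point and two in-plane direction vectors."""
    origin = np.asarray(origin, dtype=float)
    u = np.asarray(u_dir, dtype=float) / np.linalg.norm(u_dir)
    v = np.asarray(v_dir, dtype=float) / np.linalg.norm(v_dir)
    ii, jj = np.mgrid[:shape[0], :shape[1]]
    # World (voxel) coordinates of every pixel on the scanning plane.
    coords = origin[:, None, None] + u[:, None, None] * ii + v[:, None, None] * jj
    return map_coordinates(volume, coords, order=1)  # trilinear interpolation

# Example: a volume whose voxel value equals its z index; cutting the plane
# z = 5 reproduces a constant image of value 5.
vol = np.tile(np.arange(16, dtype=float)[:, None, None], (1, 16, 16))
img = cut_plane(vol, origin=(5, 0, 0), u_dir=(0, 1, 0), v_dir=(0, 0, 1), shape=(16, 16))
```

Manually adjusting the plane's position and angle corresponds to changing `origin`, `u_dir`, and `v_dir` until the sampled slice matches the monitoring image.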
  • In this embodiment, acquiring the monitoring image of the tissue in the monitoring area during treatment specifically means acquiring a sagittal monitoring image and a transverse monitoring image of the tissue in the monitoring area.
  • Correspondingly, the diagnostic images obtained from the patient's anatomical model are a sagittal diagnostic image similar to the tissue feature information of the sagittal monitoring image and a transverse diagnostic image similar to the tissue feature information of the transverse monitoring image.
  • Step 2 register the monitoring image with the diagnostic image to establish a registration relationship to obtain a two-dimensional registration image.
  • The step of registering the monitoring image with the diagnostic image to obtain a two-dimensional registration image is: the spatial positions of the monitoring image and the diagnostic image relative to the human body are first adjusted to be consistent; the spatial position of the tissue in the corresponding sagittal diagnostic image is then adjusted, by translation and rotation, according to the spatial position of the tissue in the sagittal monitoring image until the two substantially coincide, achieving registration and finally yielding a sagittal two-dimensional registration image; likewise, the spatial position of the tissue in the corresponding transverse diagnostic image is adjusted, by translation and rotation, according to the spatial position of the tissue in the transverse monitoring image until the two substantially coincide, establishing the registration relationship and finally yielding a transverse two-dimensional registration image.
  • The sagittal two-dimensional registration image and the transverse two-dimensional registration image together constitute the two-dimensional registration image.
  • the scanning position of the ultrasound scanning surface can be changed repeatedly (for example, alternating between the sagittal and transverse positions of the patient) to register the monitoring image and the diagnostic image in different scanning directions and thereby obtain the two-dimensional registration image.
  • sagittal detection with the ultrasonic probe yields a real-time sagittal monitoring image of the patient during treatment, and the corresponding sagittal diagnostic image is obtained from the tissue three-dimensional model; the two are then registered, that is, the sagittal monitoring image is registered with the sagittal diagnostic image. Similarly, the ultrasonic probe can detect the transverse position of the patient to obtain a real-time transverse monitoring image, the corresponding transverse diagnostic image is obtained from the three-dimensional model, and the two are registered, that is, the monitoring image of the transverse position is registered with the diagnostic image of the transverse position; the registered three-dimensional model is finally obtained.
  • the two-dimensional registration image of the separated tissue is obtained according to the three-dimensional tissue model separated from the anatomical model in step 11 and the registration relationship established in step 2.
  • the tissue in the monitoring area to be registered shown in the figure is the uterus.
  • the ultrasound scanning surface generated by the ultrasound probe is positioned at the sagittal position of the patient to obtain a real-time monitoring image of the uterine sagittal position during treatment (as shown in view c of Fig. 5), and the spatial position of this sagittal monitoring image is obtained at the same time. A simulated sagittal ultrasound scanning surface for cutting the tissue three-dimensional model is generated according to the monitoring image and its spatial position, and this surface is used to cut the uterine tissue three-dimensional model excavated from the anatomical model, yielding the cut sagittal diagnostic image (as shown in view d of Fig. 5). Then, according to the characteristic information of the uterine tissue in the real-time sagittal monitoring image of view c of Fig. 5 (such as the size range of the uterus), the diagnostic image of the uterine sagittal position in view d of Fig. 5 is adjusted manually (or, of course, automatically by a corresponding device).
  • the ultrasound probe is then positioned at the transverse position of the patient to obtain a real-time monitoring image of the uterine transverse position during treatment (as shown in Fig. 5), together with its spatial position; according to the characteristic information of the uterine tissue, the diagnostic image of the uterine transverse position in view b of Fig. 5 is adjusted manually (or, of course, automatically by a corresponding device), that is, the cutting level of the tissue three-dimensional model and the deflection angle of the tissue within that level are adjusted.
  • by performing image registration at the sagittal and transverse positions of the patient multiple times, the spatial relationship between the tissue three-dimensional model excavated from the anatomical model and the internal tissue of the patient during real-time treatment can be made consistent. After registration, the entire registration relationship is locked, that is, the registered two-dimensional registration image is obtained (as long as the patient's body position remains unchanged, no further adjustment is needed), thereby realizing three-dimensional registration.
  • some specific position points in the tissue may be selected as the first marker point P1(x, y, z) in the monitoring image of the sagittal/transverse position; the corresponding point is then found in the diagnostic image of the sagittal/transverse position as the second marker point P2(x, y, z). At this time P1 and P2 are generally not the same coordinate point, and the offset between them is then calculated.
  • for example, the cervix and the far end of the endometrium can be selected as the first marker point P1 (20, 30, 10) in the monitoring image of the abdominal uterus, and the corresponding cervix and far end of the endometrium are then found in the diagnostic image as the second marker point P2 (30, 10, 40); the diagnostic image is then translated by -10, 20 and -30 along the x, y and z directions respectively, so that the cervix and the far end of the endometrium in the monitoring image and in the diagnostic image coincide in spatial position.
  • the two-dimensional diagnostic image registered in this way achieves registration between the two, so that the uterine tissue in the patient's anatomical model is registered with the uterine tissue during the real-time treatment of the patient.
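The landmark-based translation described above can be sketched in a few lines (a minimal illustration assuming a pure translation with no rotation; NumPy is used only for the vector arithmetic):

```python
import numpy as np

def landmark_translation(p1, p2):
    """Translation that moves the diagnostic-image landmark p2 onto the
    monitoring-image landmark p1 (rigid shift, no rotation)."""
    return np.asarray(p1, dtype=float) - np.asarray(p2, dtype=float)

def apply_translation(points, t):
    """Shift an array of (x, y, z) points by the translation vector t."""
    return np.asarray(points, dtype=float) + t

# Example landmarks from the text: cervix / far end of the endometrium
p1 = (20, 30, 10)   # first marker point, in the monitoring image
p2 = (30, 10, 40)   # second marker point, in the diagnostic image

t = landmark_translation(p1, p2)        # offset along x, y, z
moved = apply_translation([p2], t)[0]   # p2 now coincides with p1
```

Note that the offset works out to P1 − P2 = (−10, 20, −30): moving the diagnostic image by this vector brings the two landmarks into spatial coincidence.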
  • Step 3 Construct a three-dimensional model according to the two-dimensional registration image.
  • the registered three-dimensional model is reconstructed according to the obtained two-dimensional registration image of the sagittal position and the two-dimensional registration image of the transverse position.
  • the two-dimensional registration images and the tissue boundaries of the separated tissue in multiple layers are three-dimensionally reconstructed to obtain the three-dimensional surface model and the three-dimensional texture model of the separated tissue, and the registered three-dimensional model combining the three-dimensional surface model and the three-dimensional texture model is finally obtained.
  • on this basis, a simulated ultrasonic scanning surface is obtained according to the spatial position of the real-time monitoring image in the monitoring area; cutting the three-dimensional surface model and the three-dimensional texture model with this scanning surface yields a real-time diagnostic image of the patient's uterus.
  • view a of Figure 6 shows the registered three-dimensional model of the intact uterus, in which only the three-dimensional surface model is visible; view b of Figure 6 shows the registered three-dimensional model of the uterus cut along the simulated ultrasound scanning surface, in which the tissue image after cutting can be seen.
  • an accurate two-dimensional cut image can thus be obtained at any position on the three-dimensional model of the uterus and used as a navigation monitoring image during treatment.
  • this effectively assists ultrasound monitoring and improves the safety of the treatment.
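The cutting of the registered three-dimensional model along a simulated ultrasound scanning surface can be illustrated by sampling a voxel volume along a plane. This is a nearest-neighbour sketch; the plane parametrisation (origin plus two in-plane direction vectors) and the toy volume are illustrative assumptions, not details from the patent:

```python
import numpy as np

def cut_plane(volume, origin, u, v, shape):
    """Sample a 2D image from a 3D voxel volume along the plane spanned
    by direction vectors u and v through `origin` (nearest neighbour)."""
    origin = np.asarray(origin, dtype=float)
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    rows, cols = shape
    img = np.zeros(shape, dtype=volume.dtype)
    for i in range(rows):
        for j in range(cols):
            p = np.rint(origin + i * u + j * v).astype(int)
            if all(0 <= p[k] < volume.shape[k] for k in range(3)):
                img[i, j] = volume[tuple(p)]
    return img

# Toy volume whose intensity equals the z index, so a cut at constant z
# (an axis-aligned scanning surface) is uniform.
vol = np.fromfunction(lambda x, y, z: z, (8, 8, 8))
axial = cut_plane(vol, origin=(0, 0, 3), u=(1, 0, 0), v=(0, 1, 0), shape=(8, 8))
```

An oblique scanning surface is obtained the same way by passing non-axis-aligned `u` and `v`; a production implementation would interpolate rather than round to the nearest voxel.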
  • This embodiment provides an image monitoring method, including the following steps:
  • Step 1 Obtain or construct the three-dimensional model obtained in Embodiment 2;
  • Step 2 According to the monitoring image of the tissue in the monitoring area acquired in real time during the treatment of the patient, obtain a diagnostic image in the three-dimensional model that is consistent with the characteristic information of the tissue in the monitoring image; the diagnostic image is a two-dimensional cut image.
  • the two-dimensional cut image is used as a navigation image for guiding treatment.
  • specifically, according to the monitoring image, the tissue texture information in the three-dimensional model is cut along the virtual ultrasonic scanning layer, and the resulting two-dimensional cut image is displayed in a specific window.
  • the image monitoring method may further include the following step 3.
  • Step 3 Fusing the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, and using the two-dimensional fused image as a navigation image for guiding treatment.
  • the texture information of the ultrasound image can be enhanced by fusing the obtained clear two-dimensional cut image with the patient's real-time monitoring image.
  • the step of merging the monitoring image with the two-dimensional cutting image is as shown in FIG. 7.
  • the two-dimensional fused image can be formed by fusing the monitoring image with the MR/CT cut image obtained from the three-dimensional model in two ways, namely pseudo-color fusion and grayscale fusion.
  • in pseudo-color fusion, images of different modalities can be set to different colors; in grayscale fusion, images of different modalities can be set to different grayscale fusion ratios. Both are advantageous for further accurate determination of the tissue characteristics of the B-ultrasound scanning level and of the treatment range of the diseased tissue.
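The two fusion modes can be sketched as follows (illustrative NumPy code; the blending weight `alpha` and the particular channel assignment are assumptions, since the text does not fix them):

```python
import numpy as np

def grayscale_fusion(us_img, cut_img, alpha=0.5):
    """Grayscale fusion: weighted average of the real-time ultrasound
    image and the 2D cut image from the registered 3D model."""
    return alpha * us_img.astype(float) + (1.0 - alpha) * cut_img.astype(float)

def pseudo_color_fusion(us_img, cut_img):
    """Pseudo-color fusion: each modality gets its own color channel
    (here ultrasound -> red, model cut -> green) so both stay visible."""
    rgb = np.zeros(us_img.shape + (3,), dtype=float)
    rgb[..., 0] = us_img   # ultrasound in the red channel
    rgb[..., 1] = cut_img  # MR/CT cut image in the green channel
    return rgb

us = np.full((4, 4), 100.0)    # stand-in ultrasound monitoring image
cut = np.full((4, 4), 200.0)   # stand-in 2D cut image
fused = grayscale_fusion(us, cut, alpha=0.25)  # 0.25*100 + 0.75*200 = 175
```

Varying `alpha` corresponds to the adjustable grayscale fusion ratio mentioned above.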
  • the embodiment provides a three-dimensional model construction device, which includes a first acquisition unit, a second acquisition unit, a registration unit, and a three-dimensional construction unit, wherein:
  • a first acquiring unit configured to acquire, in real time, a monitoring image of the tissue in the monitoring area during the treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the registration unit;
  • a second acquiring unit configured to acquire, from the anatomical model of the patient, a diagnostic image similar to the feature information of the tissue in the monitoring image, the diagnostic image is a two-dimensional ultrasound/MR/CT image, and the diagnostic image is Output to the registration unit;
  • a registration unit configured to register the monitoring image with the diagnostic image to obtain a two-dimensional registration image, and output the obtained two-dimensional registration image to a three-dimensional construction unit;
  • a three-dimensional building unit configured to construct a three-dimensional model according to the two-dimensional registration image.
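How these units hand data to one another might be wired as below (a structural sketch only; the class and method names are hypothetical, and the stand-in bodies merely record the data flow rather than performing real alignment or reconstruction):

```python
from dataclasses import dataclass, field

@dataclass
class RegistrationUnit:
    """Receives a monitoring image and a diagnostic image and pairs them."""
    registration_images: list = field(default_factory=list)

    def register(self, monitoring_image, diagnostic_image):
        # The real unit translates/rotates the diagnostic image until the
        # tissue positions coincide; here we simply record the matched pair.
        pair = {"monitoring": monitoring_image, "diagnostic": diagnostic_image}
        self.registration_images.append(pair)
        return pair

@dataclass
class ThreeDConstructionUnit:
    """Builds the registered 3D model from 2D registration images."""

    def build(self, registration_images):
        # Stand-in for surface + texture reconstruction: keep the slices.
        return {"slices": list(registration_images)}

reg = RegistrationUnit()
reg.register("sagittal ultrasound frame", "sagittal MR/CT cut")
reg.register("transverse ultrasound frame", "transverse MR/CT cut")
model = ThreeDConstructionUnit().build(reg.registration_images)
```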
  • the similarity of the characteristic information of the tissue mainly refers to similarity in the size of the tissue, the shape of its boundary, and the distribution of blood vessels within the tissue.
  • such similarity can be judged by a doctor with imaging expertise based on the tissue characteristics in the various images.
  • the second obtaining unit comprises an input unit, a separating unit and a processing unit, wherein:
  • the input unit is configured to receive a range of the tissue region selected by the user in the image of the anatomical model and corresponding to the tissue in the monitoring area during the treatment of the patient, and output the same to the separation unit;
  • a separating unit configured to separate a three-dimensional tissue model corresponding to the tissue in the tissue region range from the anatomical model according to the tissue region range, and output the tissue three-dimensional model to the processing unit;
  • a processing unit configured to cut the tissue three-dimensional model to obtain a diagnostic image similar to the tissue feature information in the monitoring image.
  • the separation unit comprises a tissue boundary delineation module, a three-dimensional surface model excavation module, a three-dimensional texture model extraction module, and a tissue three-dimensional model reconstruction module, wherein:
  • the tissue boundary delineation module is configured to receive the tissue region range output by the input unit, and to outline the tissue boundary of the tissue region in the image of the anatomical model according to the tissue region range;
  • the three-dimensional surface model excavation module is configured to excavate the image information within the outlined range to obtain a three-dimensional surface model of the tissue, and then output the three-dimensional surface model to the tissue three-dimensional model reconstruction module;
  • the three-dimensional texture model extraction module is configured to extract the tissue texture information within the outlined range to obtain a three-dimensional texture model of the tissue, and then output the three-dimensional texture model to the tissue three-dimensional model reconstruction module;
  • the tissue three-dimensional model reconstruction module is configured to combine the three-dimensional surface model and the three-dimensional texture model into a tissue three-dimensional model of the tissue, and output the tissue three-dimensional model to the processing unit.
  • the tissue boundary delineation module is further configured to receive the user's adjustment of the outlined tissue boundary, and to use the adjusted boundary as the tissue boundary of the tissue within the tissue region range;
  • the processing unit includes a cutting module for generating an ultrasound scanning surface and cutting the tissue three-dimensional model output by the tissue three-dimensional model reconstruction module with the ultrasound scanning surface to obtain the diagnostic image.
  • the embodiment provides an image monitoring apparatus, including a third acquiring unit, a fourth acquiring unit, and a display unit, where:
  • a third acquiring unit configured to acquire, in real time, a monitoring image of the tissue in the monitoring area during the treatment of the patient, the monitoring image being a two-dimensional ultrasound image, and outputting the monitoring image to the fourth acquiring unit;
  • a fourth acquiring unit configured to acquire, from the three-dimensional model building device described in Embodiment 4, a diagnostic image that is consistent with the characteristic information of the tissue in the monitoring image, the diagnostic image being a two-dimensional ultrasound/MR/CT image, take it as a two-dimensional cut image, and output it to the display unit;
  • a display unit configured to display the received two-dimensional cut image.
  • the image monitoring device further comprises the three-dimensional model constructing device described in Embodiment 4.
  • the three-dimensional model construction device is connected to the fourth acquisition unit, and configured to transmit the constructed three-dimensional model to the fourth acquisition unit.
  • the image monitoring device further comprises a fusion unit, wherein:
  • the third acquiring unit is further configured to output the acquired monitoring image to the fusion unit, and the fourth acquiring unit is further configured to output the acquired two-dimensional cut image to the fusion unit;
  • a fusion unit configured to fuse the monitoring image with the two-dimensional cut image to form a two-dimensional fused image, and output the two-dimensional fused image to a display unit;
  • the display unit is further configured to display the received two-dimensional fused image.

Abstract

The present invention relates to a three-dimensional model construction method and device, and an image monitoring method and device. The three-dimensional model construction method comprises the steps of: 1) acquiring in real time a monitoring image (a real-time two-dimensional ultrasound image during treatment) of a tissue within a monitoring region during the treatment of a patient, and acquiring, from an anatomical model of the patient, a diagnostic image (a CT/MR/ultrasound image acquired before treatment) similar to the characteristic information of the tissue in the monitoring image; 2) registering the monitoring image with the diagnostic image, and establishing a registration relationship to obtain a two-dimensional registration image; and 3) constructing a registered three-dimensional model according to the two-dimensional registration image. The constructed three-dimensional model can achieve relatively high registration with the monitoring image during the treatment of the patient, and can simultaneously display clear tissue-boundary and tissue-texture information, helping the physician perform ultrasound localization and monitoring.
PCT/CN2015/075087 2014-04-22 2015-03-26 Three-dimensional model construction method and device, and image monitoring method and device WO2015161728A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410162954.2A CN105078514A (zh) 2014-04-22 2014-04-22 三维模型的构建方法及装置、图像监控方法及装置
CN201410162954.2 2014-04-22

Publications (1)

Publication Number Publication Date
WO2015161728A1 (fr) 2015-10-29

Family

ID=54331723

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/075087 WO2015161728A1 (fr) 2014-04-22 2015-03-26 Procédé et dispositif de construction de modèle tridimensionnel, et procédé et dispositif et procédé de surveillance d'image

Country Status (2)

Country Link
CN (1) CN105078514A (fr)
WO (1) WO2015161728A1 (fr)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109011221A (zh) * 2018-09-04 2018-12-18 东莞东阳光高能医疗设备有限公司 一种剂量引导的中子俘获治疗系统及其操作方法
CN110148208A (zh) * 2019-04-03 2019-08-20 中国人民解放军陆军军医大学 一种基于中国数字人的鼻咽部放疗教学模型构建方法
CN111161399A (zh) * 2019-12-10 2020-05-15 盎锐(深圳)信息科技有限公司 基于二维影像生成三维模型的数据处理方法及组件
CN111311738A (zh) * 2020-03-04 2020-06-19 杭州市第三人民医院 一种采用影像学的输尿管3d数模建立方法及其数据采集装置
CN111402374A (zh) * 2018-12-29 2020-07-10 曜科智能科技(上海)有限公司 多路视频与三维模型融合方法及其装置、设备和存储介质
CN111862305A (zh) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 处理图像的方法、装置和计算机存储介质
CN114973887A (zh) * 2022-05-19 2022-08-30 北京大学深圳医院 一种多模块结合实现超声影像一体化的交互显示系统

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105631930B (zh) * 2015-11-27 2019-09-20 广州迈普再生医学科技股份有限公司 一种基于dti的颅内神经纤维束的三维重建方法
CN106037874A (zh) * 2016-05-18 2016-10-26 四川大学 基于下颌骨囊性病变刮治的下牙槽神经保护导板制备方法
CN106448401B (zh) * 2016-11-01 2019-04-30 孙丽华 一种超声科显示甲状腺微小结节的三维模具
CN106709986B (zh) * 2017-03-13 2020-06-16 上海术理智能科技有限公司 用于模体制作的病灶和/或器官建模方法及装置
CN109223045A (zh) * 2017-07-11 2019-01-18 中慧医学成像有限公司 一种矫形支具的调整方法
CN107610095A (zh) * 2017-08-04 2018-01-19 南京邮电大学 基于图像融合的心脏ct冠脉全自动分割方法
CN109003471A (zh) * 2018-09-16 2018-12-14 山东数字人科技股份有限公司 一种三维人体超声解剖教学系统及方法
CN111292248B (zh) * 2018-12-10 2023-12-19 深圳迈瑞生物医疗电子股份有限公司 超声融合成像方法及超声融合导航系统
CN109934934B (zh) * 2019-03-15 2023-09-29 广州九三致新科技有限公司 一种基于增强现实的医疗图像显示方法及装置
CN109961436B (zh) * 2019-04-04 2021-05-18 北京大学口腔医学院 一种基于人工神经网络模型的正中矢状平面构建方法
GB201908052D0 (en) * 2019-06-06 2019-07-24 Nisonic As Alignment of ultrasound image
CN110464380B (zh) * 2019-09-12 2021-10-29 李肯立 一种对中晚孕期胎儿的超声切面图像进行质量控制的方法
CN113870339A (zh) * 2020-06-30 2021-12-31 上海微创电生理医疗科技股份有限公司 图像处理方法、装置、计算机设备、存储介质及标测系统
CN112641471B (zh) * 2020-12-30 2022-09-09 北京大学第三医院(北京大学第三临床医学院) 一种放疗专用的膀胱容量测定与三维形态评估方法与系统
CN112617902A (zh) * 2020-12-31 2021-04-09 上海联影医疗科技股份有限公司 一种三维成像系统及成像方法
CN112717281B (zh) * 2021-01-14 2022-07-08 重庆翰恒医疗科技有限公司 一种医疗机器人平台及控制方法
CN114820731A (zh) * 2022-03-10 2022-07-29 青岛海信医疗设备股份有限公司 Ct影像和三维体表图像的配准方法及相关装置
CN115148341B (zh) * 2022-08-02 2023-06-02 重庆大学附属三峡医院 一种基于体位识别的ai结构勾画方法及系统
CN116211353B (zh) * 2023-05-06 2023-07-04 北京大学第三医院(北京大学第三临床医学院) 可穿戴超声膀胱容量测定与多模态影像形貌评估系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1814323A (zh) * 2005-01-31 2006-08-09 重庆海扶(Hifu)技术有限公司 一种聚焦超声波治疗系统
CN101623198A (zh) * 2008-07-08 2010-01-13 深圳市海博科技有限公司 动态肿瘤实时跟踪方法
CN101869501A (zh) * 2010-06-29 2010-10-27 北京中医药大学 计算机辅助针刀定位系统
US20130184569A1 (en) * 2007-05-08 2013-07-18 Gera Strommer Method for producing an electrophysiological map of the heart
CN103295455A (zh) * 2013-06-19 2013-09-11 北京理工大学 基于ct影像模拟与定位的超声培训系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6167296A (en) * 1996-06-28 2000-12-26 The Board Of Trustees Of The Leland Stanford Junior University Method for volumetric image navigation
DE10333543A1 (de) * 2003-07-23 2005-02-24 Siemens Ag Verfahren zur gekoppelten Darstellung intraoperativer sowie interaktiv und iteraktiv re-registrierter präoperativer Bilder in der medizinischen Bildgebung
EP2104919A2 (fr) * 2006-11-27 2009-09-30 Koninklijke Philips Electronics N.V. Système et procédé permettant de fondre en temps réel des images ultrasons et des clichés médicaux préalablement acquis
CN101057790A (zh) * 2007-06-22 2007-10-24 北京长江源科技有限公司 用于高强度聚焦超声刀肿瘤治疗定位的方法及装置
CN103366397A (zh) * 2012-03-31 2013-10-23 上海理工大学 基于c形臂2d投影图像的脊柱3d模型构建方法
CN103356284B (zh) * 2012-04-01 2015-09-30 中国科学院深圳先进技术研究院 手术导航方法和系统
CN102651145B (zh) * 2012-04-06 2014-11-05 哈尔滨工业大学 股骨三维模型可视化方法
CN102999902B (zh) * 2012-11-13 2016-12-21 上海交通大学医学院附属瑞金医院 基于ct配准结果的光学导航定位导航方法
CN103020976B (zh) * 2012-12-31 2015-08-19 中国科学院合肥物质科学研究院 一种基于带权模糊互信息的三维医学图像配准方法及系统


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109011221A (zh) * 2018-09-04 2018-12-18 东莞东阳光高能医疗设备有限公司 一种剂量引导的中子俘获治疗系统及其操作方法
CN111402374A (zh) * 2018-12-29 2020-07-10 曜科智能科技(上海)有限公司 多路视频与三维模型融合方法及其装置、设备和存储介质
CN111402374B (zh) * 2018-12-29 2023-05-23 曜科智能科技(上海)有限公司 多路视频与三维模型融合方法及其装置、设备和存储介质
CN110148208A (zh) * 2019-04-03 2019-08-20 中国人民解放军陆军军医大学 一种基于中国数字人的鼻咽部放疗教学模型构建方法
CN110148208B (zh) * 2019-04-03 2023-07-07 中国人民解放军陆军军医大学 一种基于中国数字人的鼻咽部放疗教学模型构建方法
CN111161399A (zh) * 2019-12-10 2020-05-15 盎锐(深圳)信息科技有限公司 基于二维影像生成三维模型的数据处理方法及组件
CN111161399B (zh) * 2019-12-10 2024-04-19 上海青燕和示科技有限公司 基于二维影像生成三维模型的数据处理方法及组件
CN111311738A (zh) * 2020-03-04 2020-06-19 杭州市第三人民医院 一种采用影像学的输尿管3d数模建立方法及其数据采集装置
CN111311738B (zh) * 2020-03-04 2023-08-11 杭州市第三人民医院 一种采用影像学的输尿管3d数模建立方法及其数据采集装置
CN111862305A (zh) * 2020-06-30 2020-10-30 北京百度网讯科技有限公司 处理图像的方法、装置和计算机存储介质
CN114973887A (zh) * 2022-05-19 2022-08-30 北京大学深圳医院 一种多模块结合实现超声影像一体化的交互显示系统

Also Published As

Publication number Publication date
CN105078514A (zh) 2015-11-25

Similar Documents

Publication Publication Date Title
WO2015161728A1 (fr) Three-dimensional model construction method and device, and image monitoring method and device
US11883118B2 (en) Using augmented reality in surgical navigation
CN107456278B (zh) 一种内窥镜手术导航方法和系统
CN110464459A (zh) 基于ct-mri融合的介入计划导航系统及其导航方法
JP5345275B2 (ja) 超音波データと事前取得イメージの重ね合わせ
JP5265091B2 (ja) 2次元扇形超音波イメージの表示
JP5622995B2 (ja) 超音波システム用のビーム方向を用いたカテーテル先端部の表示
EP2618739B1 (fr) Amélioration d'un modèle anatomique au moyen d'ultrasons
JP5513790B2 (ja) 超音波診断装置
CN107067398B (zh) 用于三维医学模型中缺失血管的补全方法及装置
CN114129240B (zh) 一种引导信息生成方法、系统、装置及电子设备
KR20090059048A (ko) 3차원 이미지 및 표면 맵핑으로부터의 해부학적 모델링 방법 및 시스템
JP2006305359A (ja) 超音波輪郭再構築を用いた3次元心臓イメージングのためのソフトウエア製品
JP2006305358A (ja) 超音波輪郭再構築を用いた3次元心臓イメージング
JP2006312037A (ja) 超音波を用いた電気解剖学的地図と事前取得イメージの重ね合わせ
KR20130015146A (ko) 의료 영상 처리 방법 및 장치, 영상 유도를 이용한 로봇 수술 시스템
US20130257910A1 (en) Apparatus and method for lesion diagnosis
US20160074012A1 (en) Apparatus and method of ultrasound image acquisition, generation and display
JPH11104072A (ja) 医療支援システム
Yavariabdi et al. Mapping and characterizing endometrial implants by registering 2D transvaginal ultrasound to 3D pelvic magnetic resonance images
CN107527379A (zh) 医用图像诊断装置及医用图像处理装置
Nagelhus Hernes et al. Computer‐assisted 3D ultrasound‐guided neurosurgery: technological contributions, including multimodal registration and advanced display, demonstrating future perspectives
CN110072467A (zh) 提供引导手术的图像的系统
CN107204045A (zh) 基于ct图像的虚拟内窥镜系统
CN113907883A (zh) 一种耳侧颅底外科3d可视化手术导航系统及方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15783267

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15783267

Country of ref document: EP

Kind code of ref document: A1