WO2022011984A1 - Image processing method and apparatus, electronic device, storage medium, and program product - Google Patents
- Publication number: WO2022011984A1
- Application number: PCT/CN2020/140330
- Authority: WIPO (PCT)
Classifications
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/10—Segmentation; Edge detection
- G06N3/045—Combinations of networks
- G06T2207/10081—Computed x-ray tomography [CT]
- G06T2207/10116—X-ray image
- G06T2207/20081—Training; Learning
- G06T2207/30101—Blood vessel; Artery; Vein; Vascular
Definitions
- The present disclosure is based on, and claims priority to, the Chinese patent application with application number 202010686919.6 filed on July 16, 2020, the entire content of which is hereby incorporated into the present disclosure.
- the present disclosure relates to the technical field of image processing, and in particular, to an image processing method and apparatus, an electronic device, a storage medium, and a program product.
- Coronary heart disease has become one of the diseases with the highest mortality in the world, and the common treatment option is percutaneous coronary intervention.
- Percutaneous coronary intervention is the use of a catheter to dilate the narrowed part of the blood vessel under the guidance of intraoperative X-rays to achieve the purpose of treatment.
- The blood vessels displayed in the X-ray image of the coronary artery become invisible as the contrast agent dissipates, which brings great challenges to the doctor, and the success rate of the operation also depends heavily on the doctor's actual experience.
- Computed tomography angiography (CTA) images taken before the operation can show the three-dimensional vascular structure well, but they cannot be captured in real time during the operation.
- the embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, a storage medium, and a program product.
- An image processing method is provided, including: obtaining a first segmentation result of a target object in a first image; acquiring a second segmentation result of the target object in a second image; and obtaining a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the positional transformation relationship of each pixel of the target object between the first image and the second image.
- the positional transformation relationship of each pixel of the target object between the first image and the second image can be determined.
- The image information of the target object in the first image and the second image can be fused into the same coordinate system by using the positional transformation relationship, so that the image information of the target object contained in the first image and the second image can be used at the same time to provide comprehensive guidance for the subsequent operations that need to be performed; moreover, since the positional transformation relationship is the transformation relationship corresponding to each pixel point of the target object, the information fusion of the target object between the first image and the second image can have higher accuracy.
- The obtaining the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result includes: inputting the first segmentation result and the second segmentation result into a first neural network to obtain the deformation field between the first image and the second image.
- Through the above process, the neural network can be used to realize end-to-end deformation field prediction. On the one hand, compared with determining the positional transformation relationship pixel by pixel, this can greatly shorten the acquisition time of the deformation field and improve the efficiency of deformation field acquisition, thereby effectively improving the efficiency of the entire image processing process and the subsequent image registration process; on the other hand, the deformation field obtained by the neural network can include the positional transformation relationship of each pixel between the first image and the second image, which maximizes the degree of freedom of the deformation field and improves the precision and accuracy of the deformation field, thereby improving the accuracy of the entire image processing process and the subsequent image registration process.
- The first image includes a three-dimensional image, the second image includes a two-dimensional image, and the obtaining the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result includes: converting the first segmentation result into a two-dimensional third segmentation result according to the collection information of the second image; and inputting the third segmentation result and the second segmentation result into the first neural network to obtain the deformation field between the first image and the second image.
- Through the above process, the collection information of the two-dimensional second image can be used to project the first segmentation result of the first image onto a two-dimensional plane, so that the deformation field between the first image and the second image can be obtained according to the two two-dimensional segmentation results. Therefore, the obtained deformation field can more accurately reflect the transformation relationship of the target object between the first image and the second image, improving the accuracy and effect of image processing.
- the method further includes: registering the first image and the second image according to the deformation field to obtain a registration result.
- Through the above process, the obtained deformation field can be used to flexibly fuse the target object information contained in the first image and the target object information contained in the second image into the same coordinate system, so as to provide comprehensive and effective guidance for operations to be performed based on the target object.
- the method further includes: obtaining an error loss of the first neural network according to the deformation field; and training the first neural network according to the error loss.
- Through the above process, the transformation relationship between the two input images of the first neural network can be directly used to train the first neural network without additional training images or labeled data, which reduces the difficulty and cost of training while ensuring the training accuracy of the first neural network.
- The obtaining the error loss of the first neural network according to the deformation field includes: registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second segmentation result as the error loss of the first neural network; or, registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first segmentation result as the error loss of the first neural network; or, registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second image as the error loss of the first neural network; or, registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first image as the error loss of the first neural network.
- an appropriate method can be flexibly selected to determine the error loss of the first neural network according to the actual situation, thereby improving the flexibility and convenience of training the first neural network.
- The obtaining the first segmentation result of the target object in the first image includes: inputting the first image into a second neural network to obtain the first segmentation result of the target object in the first image, wherein the second neural network is trained by a first training image containing a target object annotation; or, inputting the first image into the first neural network to obtain the first segmentation result of the target object in the first image, wherein the first neural network is further configured to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the target object in the first image is segmented through the second neural network or the first neural network to obtain the first segmentation result, which can effectively improve the obtaining efficiency of the first segmentation result.
- Since the second neural network or the first neural network can be obtained by training with the first training image containing the target object annotation, the first segmentation result obtained based on the second neural network or the first neural network can have a higher-precision segmentation effect.
- In the case that the target object in the first image is segmented through the first neural network to obtain the first segmentation result and the deformation field between the first image and the second image is further obtained through the first neural network, on the basis of improving the segmentation effect of the first segmentation result, the accuracy of the obtained deformation field can be further improved, and the acquisition process from the first image end to the deformation field end can be directly realized through the first neural network.
- The acquiring the second segmentation result of the target object in the second image includes: inputting the second image into a third neural network to obtain the second segmentation result of the target object in the second image, wherein the third neural network is trained by a second training image containing a target object annotation; or, inputting the second image into the first neural network to obtain the second segmentation result of the target object in the second image, wherein the first neural network is further configured to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the target object in the second image is segmented through the third neural network or the first neural network to obtain the second segmentation result, which can effectively improve the obtaining efficiency of the second segmentation result.
- Since the third neural network or the first neural network can be obtained by training with the second training image containing the target object annotation, the second segmentation result obtained based on the third neural network or the first neural network can have a higher-precision segmentation effect.
- In the case that the target object in the second image is segmented through the first neural network to obtain the second segmentation result and the deformation field between the first image and the second image is further obtained through the first neural network, on the basis of improving the segmentation effect of the second segmentation result, the accuracy of the obtained deformation field can be further improved, and the acquisition process from the second image end to the deformation field end, or even from the two image ends of the first image and the second image to the deformation field end, can be directly realized through the first neural network.
- The first image includes a computed tomography angiography (CTA) image, the second image includes an X-ray image, and the target object includes a coronary artery object.
- The image processing method proposed in the embodiments of the present disclosure can effectively predict the deformation field between the CTA image and the X-ray image, thereby unifying the two modalities of data used in coronary surgery into the same coordinate system and compensating for the coronary blood vessels that cannot be seen on the X-ray image during coronary surgery; this provides better guidance for coronary surgery, reduces the complexity of the operation for doctors, and increases the success rate of the operation.
- an image processing apparatus including:
- the first segmentation module is configured to obtain the first segmentation result of the target object in the first image
- the second segmentation module is configured to obtain the second segmentation result of the target object in the second image
- the deformation field acquisition module is configured to obtain a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the positional transformation relationship of each pixel of the target object between the first image and the second image.
- an electronic device comprising: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to perform the above image processing method.
- a computer-readable storage medium having computer program instructions stored thereon, the computer program instructions implementing the above-mentioned image processing method when executed by a processor.
- a computer program product stores one or more program instructions, and the program instructions are loaded and executed by a processor to implement the above-mentioned image processing method.
- In the embodiments of the present disclosure, the first segmentation result and the second segmentation result of the target object are obtained from the first image and the second image respectively, so as to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- Through the deformation field, the positional transformation relationship of each pixel of the target object between the first image and the second image can be determined, and the image information of the target object in the first image and the second image can be fused into the same coordinate system by using the positional transformation relationship, so that the image information of the target object contained in the first image and the second image can be used at the same time to provide comprehensive guidance for the subsequent operations on the target object; moreover, since the positional transformation relationship corresponds to each pixel point of the target object, the information fusion of the target object between the first image and the second image can have higher precision.
- FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- FIG. 2 shows a schematic diagram of a training process of a registration neural network in an application example according to the present disclosure.
- FIG. 3 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
- FIG. 4 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 5 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- FIG. 1 shows a flowchart of an image processing method according to an embodiment of the present disclosure.
- the method may be applied to an image processing apparatus, and the image processing apparatus may be a terminal device, a server, or other processing devices.
- the terminal device may be a user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like.
- the image processing method may be implemented by the processor calling computer-readable instructions stored in the memory.
- the image processing method may include:
- Step S11 obtaining a first segmentation result of the target object in the first image.
- Step S12 acquiring a second segmentation result of the target object in the second image.
- Step S13 obtaining a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the positional transformation relationship of each pixel of the target object between the first image and the second image.
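- For illustration only, the following Python sketch outlines steps S11 to S13; the callables `seg_net_3d`, `seg_net_2d`, and `field_net` are hypothetical placeholders for the segmentation and deformation-field networks described in the subsequent embodiments, not components named by the disclosure.

```python
def register_pair(first_image, second_image, seg_net_3d, seg_net_2d, field_net):
    """Hypothetical sketch of steps S11-S13 with the networks passed in as callables."""
    first_seg = seg_net_3d(first_image)        # S11: segmentation of the target object in the first image
    second_seg = seg_net_2d(second_image)      # S12: segmentation of the target object in the second image
    field = field_net(first_seg, second_seg)   # S13: per-pixel deformation field between the two images
    return field
```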
- the target object can be any object that needs to be registered between the two images. Its implementation form can be flexibly determined according to the actual application scenario of the image processing method proposed in the embodiments of the present disclosure.
- the image processing method proposed by the embodiments of the present disclosure can be flexibly applied to various scenarios according to actual requirements.
- the method proposed by the embodiments of the present disclosure may be applied in a surgical procedure, for example, may be used to register an image captured before surgery and an image captured during surgery, Or register the images taken before the operation and the images taken after the operation, etc.
- the realization form of the target object can be flexibly changed according to the different objects of the operation.
- the method proposed by the embodiments of the present disclosure may be applied to coronary artery surgery, such as percutaneous coronary intervention, etc.
- the target object may be a coronary artery object or the like.
- the method proposed by the embodiments of the present disclosure can also be applied to other scenarios, for example, to the process of diagnosing a patient's disease, where it can be used to monitor the patient's condition over a certain period of time.
- the realization form of the target object can be flexibly changed according to the different positions of the monitored lesions.
- the method proposed by the embodiments of the present disclosure may be applied to monitor the condition of the patient's heart, in this case, the target object may be a heart object or the like.
- For convenience of description, the subsequently disclosed embodiments are described by taking the case where the image processing method is used for coronary artery surgery and the target object is a coronary artery object as an example; other application scenarios can be flexibly extended with reference to the subsequently disclosed embodiments and will not be described one by one.
- the realization forms of the first image and the second image can also be flexibly determined according to the application scenario of the image processing method.
- the second image may be an image captured at different time periods before, during, or after coronary artery surgery, and the actual selection is not limited to the following disclosed embodiments.
- the first image may be an image captured before surgery
- the second image may be an image captured during surgery.
- the first image and the second image may also be images with different attributes or types, for example, the first image may be a three-dimensional image, the second image may be a two-dimensional image, and the like.
- the first image can include a three-dimensional CTA image captured before surgery
- the second image may include an X-ray image taken during the operation
- the target object may include a coronary artery object.
- The image processing method proposed in the embodiments of the present disclosure can effectively predict the deformation field between the CTA image and the X-ray image, thereby unifying the two modalities of data used in coronary surgery into the same coordinate system and compensating for the coronary blood vessels that cannot be seen on the X-ray image during coronary surgery; this provides better guidance for coronary surgery, reduces the complexity of the operation for doctors, and increases the success rate of the operation.
- the second image may include multiple X-ray images, that is, the registration between the CTA image and the multiple X-ray images may be implemented.
- the multiple X-ray images may be X-ray images of a coronary artery object captured in real time during coronary surgery; by registering the CTA image with the multiple X-ray images captured during the surgery, real-time image registration in coronary surgery can be realized, so as to better display the position of blood vessels in real time during the operation and provide real-time and accurate guidance and assistance to the doctor during the operation.
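- As a rough illustration of this real-time use, the sketch below loops over intraoperative X-ray frames and registers the preoperative CTA vessel segmentation to each frame; the callables (`seg_net_2d`, `field_net`, `warp`) are assumed helpers, not functions defined by the disclosure.

```python
def realtime_overlay(cta_vessel_seg, xray_stream, seg_net_2d, field_net, warp):
    """Hypothetical per-frame registration loop for intraoperative guidance."""
    for xray_frame in xray_stream:                    # X-ray frames captured during surgery
        frame_seg = seg_net_2d(xray_frame)            # segment vessels in the current frame
        field = field_net(cta_vessel_seg, frame_seg)  # predict the deformation field for this frame
        yield warp(cta_vessel_seg, field)             # warped CTA vessels, ready to overlay
```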
- the first segmentation result of the target object can be obtained from the first image and the second segmentation result of the target object can be obtained from the second image through step S11 and step S12 respectively.
- the numbers such as "first" and "second" in the first segmentation result and the second segmentation result are only used to distinguish the segmentation results obtained from different images, and do not limit the realization form of the segmentation results.
- the realization forms of the first segmentation result and the second segmentation result are flexibly determined by the realization forms of the corresponding segmented images and the target object.
- the implementation form of step S11 and step S12 is not limited. For details, please refer to the following disclosed embodiments, which will not be expanded here. It should be noted that, in the embodiment of the present disclosure, the implementation order of step S11 and step S12 is not limited, and step S11 and step S12 may be performed sequentially in a certain order according to requirements, or may be performed simultaneously.
- the deformation field between the first image and the second image can be determined based on the first segmentation result and the second segmentation result through step S13, wherein the deformation field can reflect the positional transformation relationship of each pixel point between the first image and the second image.
- the implementation form of step S13 can be flexibly selected according to the actual situation. For details, please refer to the subsequent disclosed embodiments, which will not be expanded here.
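- For intuition, a deformation field over a 2D image can be thought of as one displacement vector per pixel; the small NumPy sketch below uses an assumed H×W×2 layout, which is only one possible convention and not a detail specified by the disclosure.

```python
import numpy as np

h, w = 4, 4
# One displacement vector (dy, dx) per pixel: the positional transformation
# relationship of each pixel between the moving image and the fixed image.
field = np.zeros((h, w, 2), dtype=np.float32)
field[..., 1] = 1.5                 # shift every pixel 1.5 pixels to the right

y, x = 2, 1                         # a pixel of the target object in the moving image
dy, dx = field[y, x]
print((y + dy, x + dx))             # its corresponding position in the fixed image: (2.0, 2.5)
```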
- In the embodiments of the present disclosure, the first segmentation result and the second segmentation result of the target object are obtained from the first image and the second image respectively, so as to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- Through the deformation field, the positional transformation relationship of each pixel of the target object between the first image and the second image can be determined, and the image information of the target object in the first image and the second image can be fused into the same coordinate system by using the positional transformation relationship, so that the image information of the target object contained in the first image and the second image can be used at the same time to provide comprehensive guidance for the subsequent operations on the target object; moreover, since the positional transformation relationship corresponds to each pixel point of the target object, the information fusion of the target object between the first image and the second image can have higher precision.
- the manner of obtaining the first segmentation result of the target object from the first image is not limited.
- the first segmentation result may be obtained from the first image by applying any blood vessel segmentation algorithm in the image.
- step S11 may include:
- the first image is input to the second neural network to obtain a first segmentation result of the target object in the first image, wherein the second neural network is trained by the first training image containing the target object annotation.
- Or, the first image is input into the first neural network to obtain the first segmentation result of the target object in the first image, wherein the first neural network is also used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the target object in the first image can be segmented through the second neural network having the segmentation function, thereby obtaining the first segmentation result.
- the implementation form of the second neural network can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
- For example, a convolutional neural network such as U-Net can be used as the second neural network.
- the first training image for training the second neural network can also be flexibly selected according to the actual situation of the first image.
- the first training image may include CTA images with pixel-by-pixel blood vessel annotations.
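- A minimal sketch of this segmentation step is given below, assuming a trained 3D U-Net is available as a callable; the helper name and the 0.5 threshold are illustrative assumptions rather than details given by the disclosure.

```python
import torch

def segment_cta(unet3d, cta_volume):
    """Sketch: obtain the first segmentation result (vessel bundle) from a CTA volume.

    `unet3d` is assumed to be a trained 3D U-Net (the second neural network)
    mapping a (1, 1, D, H, W) tensor to per-voxel vessel logits.
    """
    with torch.no_grad():
        x = torch.as_tensor(cta_volume, dtype=torch.float32)[None, None]  # add batch and channel dims
        prob = torch.sigmoid(unet3d(x))          # per-voxel probability of belonging to a vessel
        return (prob > 0.5).float()[0, 0]        # binary vessel mask (first segmentation result)
```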
- the target object in the first image can be segmented through the first neural network with the segmentation function, thereby obtaining the first segmentation result.
- the first neural network can not only be used to segment the target object in the first image, but also has a deformation field acquisition function, that is, it can be used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the first neural network may take the first image and the second segmentation result as input, sequentially obtain the first segmentation result of the first image, and then obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- In the case that the first neural network can be used both to segment the target object in the first image and to obtain the deformation field between the first image and the second image, the first neural network can, like the above-mentioned second neural network, be trained with the first training image containing the target object annotation, and can also be trained according to the first segmentation result and the second segmentation result, wherein the first segmentation result can be the target object annotation in the first training image. Therefore, in a possible implementation, the first neural network may be trained by using the first training image marked with the target object and the second segmentation result.
- the implementation form and training process of the first neural network can also be flexibly selected according to the actual situation.
- the target object in the first image is segmented by the second neural network or the first neural network to obtain the first segmentation result, which can effectively improve the obtaining efficiency of the first segmentation result.
- Since the second neural network or the first neural network can be obtained by training on the first training image containing the target object annotation, the first segmentation result obtained based on the second neural network or the first neural network can have a higher-precision segmentation effect.
- Further, in the case that the target object in the first image is segmented through the first neural network to obtain the first segmentation result and the deformation field between the first image and the second image is further obtained through the first neural network, on the basis of improving the segmentation effect of the first segmentation result, the accuracy of the obtained deformation field can be further improved, and the acquisition process from the first image end to the deformation field end can be directly realized through the first neural network.
- step S12 may include:
- the second image is input to the third neural network to obtain a second segmentation result of the target object in the second image, wherein the third neural network is trained by the second training image containing the target object annotation.
- Or, the second image is input into the first neural network to obtain the second segmentation result of the target object in the second image, wherein the first neural network is also used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the target object in the second image may be segmented through a third neural network with a segmentation function, thereby obtaining a second segmentation result.
- the implementation form of the third neural network can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
- the U-Net network can also be used as the third neural network.
- the second training image for training the third neural network can also be flexibly selected according to the actual situation of the second image.
- the second training image may include X-ray images with pixel-by-pixel blood vessel annotations.
- the target object in the second image can be segmented by using the first neural network with segmentation function to obtain the second segmentation result.
- the first neural network can not only be used to segment the target object in the second image, but also has a deformation field acquisition function, that is, it can be used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the first neural network may take the second image and the first segmentation result as input, sequentially obtain the second segmentation result of the second image, and then obtain the deformation field between the first image and the second image according to the second segmentation result and the first segmentation result.
- As described in the above disclosed embodiments, the first neural network may also be used to segment the target object in the first image; therefore, in a possible implementation, the functions of the first neural network may include segmenting the first image, segmenting the second image, and obtaining the deformation field. In this case, the first neural network can obtain the first segmentation result of the first image and the second segmentation result of the second image by taking the first image and the second image as input, and obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- In the case that the first neural network can be used both to segment the target object in the second image and to obtain the deformation field between the first image and the second image, the first neural network can, like the third neural network described above, be trained with the second training image containing the target object annotation, and can also be trained according to the first segmentation result and the second segmentation result, wherein the second segmentation result can be the target object annotation in the second training image. Therefore, in a possible implementation, the first neural network can be trained by using the second training image marked with the target object and the first segmentation result.
- In the case that the first neural network can segment both the first image and the second image and obtain the deformation field, the first neural network can be trained simultaneously with the first training image containing the target object annotation and the second training image containing the target object annotation.
- the implementation form and training process of the first neural network can also be flexibly selected according to the actual situation.
- It should be noted that the labels such as "first", "second", and "third" in the first neural network, the second neural network, and the third neural network in the embodiments of the present disclosure are only used to distinguish neural networks with different functions and do not limit their implementation forms. In the embodiments of the present disclosure, the implementation forms of the first neural network, the second neural network, and the third neural network may be the same or different.
- the target object in the second image is segmented by the third neural network or the first neural network to obtain the second segmentation result, which can effectively improve the obtaining efficiency of the second segmentation result.
- Since the third neural network or the first neural network can be obtained by training on the second training image containing the target object annotation, the second segmentation result obtained based on the third neural network or the first neural network can have a higher-precision segmentation effect.
- Further, in the case that the target object in the second image is segmented through the first neural network to obtain the second segmentation result and the deformation field between the first image and the second image is further obtained through the first neural network, on the basis of improving the segmentation effect of the second segmentation result, the accuracy of the obtained deformation field can be further improved, and the acquisition process from the second image end to the deformation field end, or even from the two image ends of the first image and the second image to the deformation field end, can be directly realized through the first neural network.
- step S13 may include:
- the first segmentation result and the second segmentation result are input into the first neural network to obtain the deformation field between the first image and the second image.
- the pixel position transformation relationship between the first segmentation result and the second segmentation result can be extracted by the first neural network having the deformation field acquisition function, thereby obtaining the deformation field between the first segmentation result and the second segmentation result.
- In a possible implementation, the deformation field between the first segmentation result and the second segmentation result can be directly used as the deformation field between the first image and the second image; in another possible implementation, according to the relationship between the first image and the first segmentation result and the relationship between the second image and the second segmentation result, this deformation field can be correspondingly converted into the transformation relationship between the two images, so as to obtain the deformation field between the first image and the second image.
- the implementation form of the first neural network can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
- a U-Net network can be used as the first neural network.
- For how to train the first neural network so that it can determine the deformation field according to the input first segmentation result and second segmentation result, reference may be made to the following disclosed embodiments, which will not be expanded here.
- the first segmentation result and the second segmentation result are processed through the first neural network to obtain the deformation field between the first image and the second image.
- On the one hand, the neural network can be used to realize end-to-end deformation field prediction; compared with determining the positional transformation relationship pixel by pixel, this can greatly shorten the acquisition time of the deformation field and improve the efficiency of deformation field acquisition, thereby effectively improving the efficiency of the entire image processing process and the subsequent image registration process. On the other hand, the deformation field obtained by the neural network can include the positional transformation relationship of each pixel between the first image and the second image, which maximizes the degree of freedom of the deformation field and improves the precision and accuracy of the deformation field, thereby improving the accuracy of the entire image processing process and the subsequent image registration process.
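- The sketch below shows one plausible way a network can map two 2D segmentation results to a per-pixel deformation field: the two masks are stacked as input channels and the network emits a 2-channel displacement map. The tiny convolutional head here is only a toy stand-in for the U-Net of the disclosure, not the actual architecture.

```python
import torch
import torch.nn as nn

class FieldPredictor(nn.Module):
    """Toy stand-in for the first neural network: segmentation results in,
    per-pixel deformation field out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),   # 2 output channels: displacement (dx, dy) per pixel
        )

    def forward(self, seg_a, seg_b):
        x = torch.cat([seg_a, seg_b], dim=1)  # (N, 2, H, W): the two segmentation results as channels
        return self.net(x)                    # (N, 2, H, W): deformation field
```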
- the first image and the second image may have different properties.
- the first image may include a three-dimensional image
- the second image may include a two-dimensional image.
- step S13 may include:
- Step S131 converting the first segmentation result into a two-dimensional third segmentation result according to the collection information of the second image
- Step S132 the third segmentation result and the second segmentation result are input to the first neural network to obtain the deformation field between the first image and the second image.
- In this case, the first segmentation result obtained from the first image may be a three-dimensional segmentation result, while the second segmentation result obtained from the second image may be a two-dimensional segmentation result. Therefore, step S131 may be used to convert the first segmentation result into a two-dimensional third segmentation result according to the collection information of the second image.
- the collection information of the second image may be any information related to the collection angle or collection method of the second image during the collection process of the second image, and its implementation form can be flexibly determined according to the actual situation and is not limited to the following disclosed embodiments.
- the acquisition information may include header file information of Digital Imaging and Communications in Medicine (DICOM) of the second image. Reading the DICOM header file information can determine the angle at which the X-ray image was taken.
- the manner of converting the first segmentation result into the two-dimensional third segmentation result according to the collected information is also not limited, and can be flexibly determined according to the actual situation of the collected information.
- In one example, the collection information may include DICOM header file information; the shooting angle of the second image may be determined according to the DICOM header file information, and the first segmentation result may then be projected according to the shooting angle to obtain the third segmentation result.
- the manner of projecting the first segmentation result is not limited.
- the projected third segmentation result may be obtained by using a ray projection algorithm.
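- As one hedged example of using the DICOM acquisition angles, the sketch below reads the positioner angles from the X-ray header with pydicom and approximates the projection by rotating the 3D mask and taking a maximum projection; a real system would use the full projection geometry (for example a ray-casting projector), and the angle handling here is a simplification introduced for illustration.

```python
import pydicom
from scipy.ndimage import rotate

def project_segmentation(seg_3d, xray_dicom_path):
    """Sketch: project the 3D first segmentation result to 2D using the
    acquisition angles stored in the intraoperative X-ray DICOM header."""
    ds = pydicom.dcmread(xray_dicom_path)
    primary = float(ds.PositionerPrimaryAngle)      # LAO/RAO angle, if present in the header
    secondary = float(ds.PositionerSecondaryAngle)  # cranial/caudal angle, if present

    vol = rotate(seg_3d, primary, axes=(0, 2), reshape=False, order=0)
    vol = rotate(vol, secondary, axes=(0, 1), reshape=False, order=0)
    return vol.max(axis=0)                          # simple maximum projection as the 2D result
```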
- the third segmentation result and the second segmentation result may be input into the first neural network through step S132 to obtain the deformation field between the first image and the second image.
- For the processing of the third segmentation result and the second segmentation result by the first neural network, reference may be made to the processing of the first segmentation result and the second segmentation result by the first neural network in the above disclosed embodiments, which is not repeated here.
- In this case, the deformation field obtained by the first neural network based on the third segmentation result and the second segmentation result is the deformation field between the third segmentation result and the second segmentation result. In a possible implementation, this deformation field can be directly used as the deformation field between the first image and the second image; in another possible implementation, this deformation field can be further processed according to the correspondence between the transformation from the first image to the third segmentation result and the transformation from the second image to the second segmentation result, to obtain the direct deformation field between the first image and the second image.
- the subsequent processing operations performed on the first image and the second image by using the deformation field may also change accordingly.
- the process of converting the first segmentation result into the third segmentation result can also be implemented by the first neural network.
- In this case, the first neural network can directly take the first segmentation result and the second segmentation result as input, sequentially perform the conversion from the first segmentation result to the third segmentation result inside the network, and then obtain the deformation field between the first image and the second image according to the third segmentation result and the second segmentation result.
- In the embodiments of the present disclosure, the first segmentation result is converted into the two-dimensional third segmentation result according to the collection information of the second image, and the third segmentation result and the second segmentation result are then input into the first neural network to obtain the deformation field between the first image and the second image. Through the above process, the first segmentation result of the first image can be projected onto a two-dimensional plane, so that the deformation field between the first image and the second image is obtained according to the two two-dimensional segmentation results; the obtained deformation field can therefore more accurately reflect the transformation relationship of the target object between the first image and the second image, improving the accuracy and effect of image processing.
- the method proposed by the embodiment of the present disclosure may further include:
- according to the deformation field, registering the first image and the second image to obtain a registration result.
- As described in the above disclosed embodiments, the deformation field can reflect the positional transformation relationship of each pixel of the target object between the first image and the second image. Therefore, the target object in the first image and the target object in the second image can be transformed into the same coordinate system by the deformation field, so as to realize the registration between the first image and the second image and obtain the registration result.
- As described in the above disclosed embodiments, the deformation field may be the deformation field between segmentation results, such as the deformation field between the first segmentation result and the second segmentation result, or the deformation field between the third segmentation result and the second segmentation result. In this case, the process of registering the first image and the second image may be the process of deforming the corresponding segmentation results according to the deformation field, that is, transforming the first segmentation result into the coordinate system of the second segmentation result by using the deformation field, transforming the third segmentation result into the coordinate system of the second segmentation result by using the deformation field, transforming the second segmentation result into the coordinate system of the first segmentation result by using the deformation field, or transforming the second segmentation result into the coordinate system of the third segmentation result by using the deformation field, and so on.
- As described in the above disclosed embodiments, the deformation field may also be the direct deformation field between the first image and the second image, obtained by further processing on the basis of the deformation field between the segmentation results. In this case, the process of registering the first image and the second image may be a process of directly deforming the first image or the second image, that is, transforming the first image into the coordinate system of the second image by using the deformation field, or transforming the second image into the coordinate system of the first image by using the deformation field, and so on.
- the registration process may not be limited to the image or the coordinate system where the segmentation result is located.
- a deformation field may be used to register both the first image and the second image to a preset coordinate system, or both the first segmentation result and the second segmentation result are registered to a preset coordinate system, and so on.
- In one example, the registration result can be obtained by transforming the images to be registered using a Spatial Transformer Network (STN).
- Through the above process, the obtained deformation field can be used to flexibly fuse the target object information contained in the first image and the target object information contained in the second image into a single coordinate system, so as to provide comprehensive and effective guidance for operations to be performed based on the target object.
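- A common way to apply a dense deformation field in a differentiable, spatial-transformer style is PyTorch's `grid_sample`; the sketch below assumes the field stores pixel displacements in (dx, dy) channel order, which is an assumption of this example rather than a detail fixed by the disclosure.

```python
import torch
import torch.nn.functional as F

def warp(moving, field):
    """Warp `moving` (N, C, H, W) with a per-pixel displacement `field` (N, 2, H, W)."""
    n, _, h, w = moving.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates used by grid_sample.
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(n, h, w, 2)
    # Convert pixel displacements to the normalized coordinate range.
    disp = torch.stack([field[:, 0] * 2 / max(w - 1, 1),
                        field[:, 1] * 2 / max(h - 1, 1)], dim=-1)
    return F.grid_sample(moving, base + disp, align_corners=True)
```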
- the first neural network may be used to obtain the deformation field.
- the first neural network can be trained to have higher accuracy. That is, the image processing method proposed by the embodiment of the present disclosure can also be used in the training process of the first neural network.
- the image processing method proposed by the embodiment of the present disclosure may include:
- Step S11 obtaining a first segmentation result of the target object in the first image.
- Step S12 acquiring a second segmentation result of the target object in the second image.
- Step S13 obtaining a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- Step S14 according to the deformation field, obtain the error loss of the first neural network.
- Step S15 train the first neural network according to the error loss.
- the first neural network may be an untrained neural network, or may be a trained but incompletely trained neural network.
- step S14 may include:
- Step S141 registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second segmentation result as the error loss of the first neural network; or,
- Step S142 registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first segmentation result as the error loss of the first neural network; or,
- Step S143 registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second image as the error loss of the first neural network; or,
- Step S144 registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first image as the error loss of the first neural network.
- the deformation field can reflect the transformation relationship between the first segmentation result and the second segmentation result.
- the first segmentation result after registration can be obtained by using the deformation field output by the first neural network to register the first segmentation result.
- If the deformation field is completely accurate, the registered first segmentation result and the second segmentation result will be consistent. Therefore, the error between the registered first segmentation result and the second segmentation result reflects the error of the deformation field output by the first neural network, and using it as the error loss to train the first neural network can improve the accuracy of the first neural network obtained after training.
- the deformation field can also be used to register the second segmentation result, so that the error between the registered second segmentation result and the first segmentation result can be used to determine the error of the deformation field output by the first neural network, and then to determine the error loss of the first neural network.
- In a possible implementation, the acquisition of the error loss of the first neural network may occur during the training process of the first neural network. During training, the first segmentation result input into the first neural network may exist in the form of an annotation on the first image, and the second segmentation result may likewise exist in the form of an annotation on the second image. Therefore, in this case, the error between the registered first segmentation result and the second image where the second segmentation result is located, or the error between the registered second segmentation result and the first image where the first segmentation result is located, may be used as the error loss of the first neural network.
- As described in the above disclosed embodiments, the deformation field may be the deformation field between the first segmentation result and the second segmentation result, the deformation field between the third segmentation result and the second segmentation result, or the deformation field between the first image and the second image, etc. Therefore, depending on the objects to which the deformation field corresponds, the determined error can change flexibly.
- For example, in the case that the deformation field is the deformation field between the third segmentation result and the second segmentation result, the deformation field can be used to register the third segmentation result to obtain a registered third segmentation result, and the error loss of the first neural network can then be determined according to the error between the registered third segmentation result and the second segmentation result.
- the manner of calculating the error between different objects can be flexibly selected according to the actual situation, and is not limited to the following disclosed embodiments.
- the calculation method of a loss function such as Mean Squared Error (MSE) or Normalized Cross Correlation (NCC) can be used to determine the error between different objects.
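- Minimal sketches of the two similarity measures mentioned above are given below; the NCC version is a global (non-windowed) variant and the negation makes it directly minimizable as a loss, both of which are implementation choices of this example rather than requirements of the disclosure.

```python
import torch

def mse_loss(a, b):
    """Mean squared error between the registered segmentation result and its target."""
    return ((a - b) ** 2).mean()

def ncc_loss(a, b, eps=1e-8):
    """Negative normalized cross correlation (global variant): lower is better."""
    a = a - a.mean()
    b = b - b.mean()
    return -(a * b).sum() / (a.norm() * b.norm() + eps)
```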
- an appropriate method can be flexibly selected to determine the error loss of the first neural network according to the actual situation, thereby improving the flexibility and convenience of training the first neural network.
- the first neural network can be trained through step S15, and the training method can be flexibly determined according to the actual situation, and is not limited to the following disclosed embodiments.
- various network parameters and the like in the first neural network may be updated by using the method of back propagation according to the error loss of the first neural network.
- Through the above process, the transformation relationship between the two input images of the first neural network can be directly used to train the first neural network without additional training images or labeled data, which reduces the difficulty and cost of training while ensuring the training accuracy of the first neural network.
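- Putting the above together, one self-supervised update of the first neural network might look like the sketch below; `field_net`, `warp_fn`, and `loss_fn` are assumed callables (for example the toy predictor, warping function, and NCC loss sketched earlier), not names used by the disclosure.

```python
def train_step(field_net, warp_fn, loss_fn, first_seg, second_seg, optimizer):
    """One self-supervised training step: no labels beyond the two segmentation results."""
    field = field_net(first_seg, second_seg)   # predicted deformation field
    warped = warp_fn(first_seg, field)         # registered first segmentation result
    loss = loss_fn(warped, second_seg)         # error loss of the first neural network

    optimizer.zero_grad()
    loss.backward()                            # back propagation of the error loss
    optimizer.step()                           # update the network parameters
    return loss.item()
```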
- Coronary heart disease has become one of the diseases with the highest mortality in the world, and the common treatment option is percutaneous coronary intervention.
- Percutaneous coronary intervention is the use of a catheter to dilate the narrowed part of the blood vessel under the guidance of intraoperative X-rays to achieve the purpose of treatment.
- the blood vessels displayed in the X-ray image of the coronary artery become invisible as the contrast agent dissipates, which brings great challenges to the doctor, and the success rate of the operation therefore also depends on the doctor's practical experience.
- Preoperative CTA images can show the three-dimensional vascular structure well, but since CTA images cannot be captured in real time during the operation, it is necessary to register the preoperative CTA image and the intraoperative X-ray image and fuse them into the same coordinate system, so as to provide doctors with better guidance, reduce the complexity of the surgery for doctors, and improve the success rate of surgery.
- the coronary registration method in the related art regards the registration problem as an optimization problem: it defines a similarity measure for the distance between two blood vessels, and iteratively optimizes this distance to find an optimal transformation matrix.
- Another scheme extends the iterative closest point method from point sets to curves, proposing an iterative closest curve algorithm for the registration of curve structures.
- There is also a probabilistic and statistical registration scheme based on coherent point drift, which defines the registration of two point sets as a probability density estimation problem.
- the above schemes require iterative optimization using the iterative closest point or coherent point drift, which often makes it difficult to meet the requirement of intraoperative real-time performance.
- deformation models such as B-splines or thin-plate splines are used in these schemes, which cannot adequately capture complex vessel deformation, resulting in low registration accuracy.
- Deep learning techniques have made great achievements in the field of computer vision and also provide new solutions for medical image registration.
- One scheme trains a fully convolutional neural network to perform non-rigid registration of 3D brain MR images using "self-supervision"; another uses normalized cross-correlation to train a fully convolutional neural network to predict deformation fields for registering 4D cardiac MR images; another scheme uses convolutional neural networks and spatial transformation networks to register T1-weighted brain MR images; another approach utilizes transfer learning to register X-ray and cardiac sequence images separately.
- the embodiments of the present disclosure propose an end-to-end coronary registration method.
- the blood vessel bundle of the preoperative CTA image and the blood vessel bundle of the intraoperative X-ray image are first segmented, the data of the two different modalities and dimensions are unified into one coordinate system by using the ray projection method, and the result is then input into a U-Net network, which directly predicts the deformation field.
- the method of the embodiment of the present disclosure can predict the deformation field end-to-end, and meet the requirements of intraoperative real-time performance while ensuring the registration accuracy.
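For concreteness, a deliberately small 2D U-Net of the kind referred to above is sketched below; the depth, channel widths, two-channel input (projected CTA vessels stacked with X-ray vessels), and two-channel displacement output are illustrative assumptions rather than a configuration fixed by the disclosure:

```python
# Minimal 2D U-Net sketch: a two-level encoder-decoder with skip
# connections that maps a 2-channel input to a per-pixel (dx, dy)
# displacement field. Assumes H and W are divisible by 4.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, in_ch=2, out_ch=2):  # out_ch=2: (dx, dy) field
        super().__init__()
        self.enc1 = conv_block(in_ch, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottom = conv_block(32, 64)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, out_ch, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)  # per-pixel displacement field
```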
- the embodiment of the present disclosure proposes an image processing method, which can perform real-time registration of a preoperative CTA image and an intraoperative X-ray image of a coronary artery.
- the image processing process can be as follows:
- the 3D U-Net network (ie, the second neural network in the above disclosed embodiments) is used to segment the preoperative CTA image (ie, the first image in the above disclosed embodiments), and the blood vessel bundle in the CTA image is extracted (ie, the first segmentation result in the above disclosed embodiments);
- the U-Net network (ie, the third neural network in the above disclosed embodiments) is used to segment the intraoperative X-ray image (ie, the second image in the above disclosed embodiments) to obtain the blood vessel bundle in the X-ray image (ie, the second segmentation result in the above disclosed embodiments);
- the DICOM header file information of the X-ray image (ie, the acquisition information in the above disclosed embodiments) is read;
- the ray projection algorithm is used to generate a digitally reconstructed radiograph from the blood vessel bundle in the CTA image, obtaining a two-dimensional blood vessel projection map (ie, the third segmentation result in the above disclosed embodiments); a simplified sketch of this projection step is given after this process description;
- the two-dimensional blood vessel projection map and the blood vessel bundle in the X-ray image are input into the registration neural network (ie, the first neural network in the above disclosed embodiments); with the registration neural network used in this process, the deformation field can be directly predicted end-to-end, which greatly improves the real-time performance; at the same time, the process predicts the displacement of each pixel point, which maximizes the degrees of freedom of the deformation field and improves the registration accuracy.
- the two-dimensional blood vessel projection map or the blood vessel bundle in the X-ray image can then be transformed according to the deformation field to complete the registration process.
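As a greatly simplified stand-in for the ray projection step mentioned above (a full digitally reconstructed radiograph would trace rays using the acquisition geometry read from the DICOM header), projecting a binary 3D vessel mask along a single axis can be sketched as:

```python
# Simplified stand-in for ray projection: collapse a binary 3D vessel
# mask along an assumed viewing axis. The axis choice is an assumption;
# real DRR generation uses the DICOM acquisition geometry.
import numpy as np

def project_vessels(volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """volume: binary 3D vessel segmentation -> binary 2D projection map."""
    return (volume.sum(axis=axis) > 0).astype(np.uint8)
```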
- the application example of the present disclosure also proposes an image processing method, which can be used to train each of the above-mentioned neural networks:
- FIG. 2 shows a schematic diagram of a training process of a registration neural network in an application example of the present disclosure.
- the training process may be:
- the segmentation results are input into an untrained initial registration neural network 206, which can be a U-Net network, to predict a deformation field;
- the projected vascular bundle is deformed according to the predicted deformation field to obtain the deformed vascular bundle 209;
- the error between the deformed vascular bundle and the blood vessel bundle of the X-ray image is calculated, where the calculation method can use mean squared error or normalized cross-correlation, etc.; the back propagation algorithm is then used to update the registration neural network parameters, completing the training process of the registration neural network.
- the embodiment of the present disclosure can obtain an end-to-end coronary registration network.
- the network can directly predict the deformation field and complete the registration task: the trained U-Net is used to segment the intraoperative X-ray image to obtain the blood vessel bundle of the X-ray image; the DICOM header file information is read, and the preoperative CTA blood vessel bundle is used to generate the projected blood vessel bundle; the two blood vessel bundles are fed into the registration network and the deformation field is obtained.
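The inference pipeline just restated can be sketched as below; every callable is injected, and all names (`seg_net_3d`, `seg_net_2d`, `reg_net`, `project`, `warp`) are placeholders echoing the earlier sketches, not names taken from the disclosure:

```python
# Hedged sketch of the registration pipeline; every callable is injected
# and every name is a placeholder, not taken from the disclosure.
import torch

@torch.no_grad()
def register_pair(cta_volume, xray_image, seg_net_3d, seg_net_2d,
                  reg_net, project, warp):
    cta_vessels = seg_net_3d(cta_volume)     # first segmentation result (3D)
    xray_vessels = seg_net_2d(xray_image)    # second segmentation result (2D)
    projected = project(cta_vessels)         # third segmentation result (2D)
    field = reg_net(torch.cat([projected, xray_vessels], dim=1))
    return warp(projected, field)            # registered vessel map
```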
- the registration method can greatly improve the registration accuracy.
- the radiologist can use the method proposed in the application example of the present disclosure to perform fast and accurate registration, unifying the data of the two modalities into the same coordinate system, which compensates for the problem that the coronary vessels cannot be seen on intraoperative X-ray images.
- since the application example of the present disclosure can perform real-time registration of the coronary arteries included in the preoperative CTA image and the intraoperative X-ray image, and the intraoperative X-ray image can better display the position of the catheter, the doctor can have a better judgement during the operation of the direction in which the catheter is traveling.
- the image processing method in the embodiments of the present disclosure is not limited to the above-mentioned processing of coronary images of the heart, and may be applied to any image processing; this is not limited in the embodiments of the present disclosure.
- embodiments of the present disclosure also provide image processing apparatuses, electronic devices, computer-readable storage media, and programs, all of which can be used to implement any image processing method provided by the embodiments of the present disclosure; for the corresponding technical solutions and descriptions, refer to the corresponding records in the method section, which will not be repeated.
- FIG. 3 shows a block diagram of an image processing apparatus according to an embodiment of the present disclosure.
- the image processing apparatus may be a terminal device, a server, or other processing devices.
- the terminal device may be a user equipment (User Equipment, UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a Personal Digital Assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, etc.
- the image processing apparatus may be implemented by a processor invoking computer-readable instructions stored in a memory.
- the image processing apparatus 30 may include:
- the first segmentation module 31 is configured to obtain a first segmentation result of the target object in the first image.
- the second segmentation module 32 is configured to obtain a second segmentation result of the target object in the second image.
- the deformation field acquiring module 33 is configured to obtain a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the position transformation relationship of each pixel of the target object between the first image and the second image.
- the deformation field acquisition module is configured to input the first segmentation result and the second segmentation result into the first neural network to obtain the deformation field between the first image and the second image.
- in a case where the first image includes a three-dimensional image and the second image includes a two-dimensional image, the deformation field acquisition module is configured to convert the first segmentation result into a two-dimensional third segmentation result according to the acquisition information of the second image, and to input the third segmentation result and the second segmentation result into the first neural network to obtain the deformation field between the first image and the second image.
- the image processing apparatus 30 further includes: a registration module, configured to register the first image and the second image according to the deformation field to obtain a registration result.
- the image processing apparatus 30 further includes: an error acquisition module, configured to acquire the error loss of the first neural network according to the deformation field; and a training module, configured to train the first neural network according to the error loss.
- the error acquisition module is configured to register the first segmentation result according to the deformation field to obtain a registered first segmentation result, and use the error between the registered first segmentation result and the second segmentation result as the error loss of the first neural network; or to register the second segmentation result according to the deformation field to obtain a registered second segmentation result, and use the error between the registered second segmentation result and the first segmentation result as the error loss of the first neural network; or to register the first segmentation result according to the deformation field to obtain a registered first segmentation result, and use the error between the registered first segmentation result and the second image as the error loss of the first neural network; or to register the second segmentation result according to the deformation field to obtain a registered second segmentation result, and use the error between the registered second segmentation result and the first image as the error loss of the first neural network.
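Purely as an illustration of how these four alternatives might be dispatched (the `warp` and `loss_fn` helpers and all names are assumptions, echoing the sketches earlier in this description):

```python
# Hypothetical dispatch over the four error-loss variants described above.
# `warp` applies a deformation field; `loss_fn` is e.g. MSE or negative
# NCC. Both are assumed helpers, not part of the disclosed apparatus.
def error_loss(variant, field, seg1, seg2, img1, img2, warp, loss_fn):
    if variant == "seg1_vs_seg2":
        return loss_fn(warp(seg1, field), seg2)
    if variant == "seg2_vs_seg1":
        return loss_fn(warp(seg2, field), seg1)
    if variant == "seg1_vs_img2":
        return loss_fn(warp(seg1, field), img2)
    if variant == "seg2_vs_img1":
        return loss_fn(warp(seg2, field), img1)
    raise ValueError(f"unknown variant: {variant}")
```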
- the first segmentation module is configured to input the first image into the second neural network to obtain the first segmentation result of the target object in the first image, wherein the second neural network is trained with first training images annotated with the target object; or to input the first image into the first neural network to obtain the first segmentation result of the target object in the first image, wherein the first neural network is also used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the second segmentation module is configured to input the second image into the third neural network to obtain the second segmentation result of the target object in the second image, wherein the third neural network is trained with second training images annotated with the target object; or to input the second image into the first neural network to obtain the second segmentation result of the target object in the second image, wherein the first neural network is also used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- the first image includes a computed tomography angiography (CTA) image, the second image includes an X-ray image, and the target object includes a coronary artery object.
- An embodiment of the present disclosure further provides a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the above-mentioned image processing method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure further provides an electronic device, comprising: a processor; a memory configured to store instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the above method.
- Embodiments of the present disclosure also provide a computer program product, including computer-readable codes.
- when the computer-readable codes are run on a device, a processor in the device executes instructions for implementing the image processing method provided by any of the above embodiments.
- Embodiments of the present disclosure further provide another computer program product for storing computer-readable instructions, which, when executed, cause the computer to perform the operations of the image processing method provided by any of the foregoing embodiments.
- the electronic device may be provided as a terminal, server or other form of device.
- FIG. 4 shows a block diagram of an electronic device 800 according to an embodiment of the present disclosure.
- electronic device 800 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, and personal digital assistant, among other terminals.
- an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
- the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
- processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
- processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
- Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like.
- the memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as Static Random-Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- Power supply assembly 806 provides power to various components of electronic device 800.
- Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
- Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a Liquid Crystal Display (LCD) and a touch panel (TouchPanel, TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
- the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focal length and optical zoom capability.
- Audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (Microphone, MIC) configured to receive external audio signals when the electronic device 800 is in an operating mode, such as a calling mode, a recording mode, and a voice recognition mode.
- the received audio signal may be further stored in memory 804 or transmitted via communication component 816.
- audio component 810 also includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
- Sensor assembly 814 includes one or more sensors for providing status assessments of various aspects of electronic device 800. For example, sensor assembly 814 can detect the open/closed state of electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; sensor assembly 814 can also detect a change in the position of the electronic device 800 or a component thereof, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in the temperature of the electronic device 800.
- Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
- Sensor assembly 814 may also include a light sensor, such as a Complementary Metal-Oxide-Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications.
- the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as Wireless Fidelity (Wi-Fi), 2nd Generation (2G), or 3rd Generation (3G) mobile communication, or a combination thereof.
- the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a Near Field Communication (NFC) module to facilitate short-range communication.
- the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wide Band (UWB) technology, Bluetooth (BT) technology, and other technologies.
- the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, and used to perform the above method.
- a non-volatile computer-readable storage medium such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method is also provided.
- FIG. 5 shows a block diagram of an electronic device 1900 according to an embodiment of the present disclosure.
- the electronic device 1900 may be provided as a server.
- electronic device 1900 includes processing component 1922, which may include one or more processors, and memory resources represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
- An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an I/O interface 1958.
- Electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
- a non-volatile computer-readable storage medium such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method.
- Embodiments of the present disclosure may be systems, methods and/or computer program products.
- the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the embodiments of the present disclosure.
- a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- Computer-readable storage media may include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), Portable Compact Disc Read-Only Memory (CD-ROM), Digital Video Discs (DVDs), memory sticks, floppy disks, and mechanical coding devices such as punch cards or raised structures in grooves on which instructions are stored, as well as any suitable combination of the above.
- Computer-readable storage media, as used herein, are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (eg, light pulses through fiber optic cables), or electrical signals transmitted through electrical wires.
- the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer-readable program instructions from a network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in each computing/processing device.
- Computer program instructions for performing operations of embodiments of the present disclosure may be assembly instructions, Industry Standard Architecture (ISA) instructions, machine instructions, machine-related instructions, pseudocode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk or C++, as well as conventional procedural programming languages such as the C language or similar programming languages.
- the computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or Wide Area Network (WAN), or may be connected to an external computer (eg, via the Internet using an Internet service provider).
- electronic circuits that can execute computer-readable program instructions, such as programmable logic circuits, field programmable gate arrays, or programmable logic arrays, can be personalized by utilizing state information of the computer-readable program instructions, thereby implementing various aspects of the embodiments of the present disclosure.
- These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, produce means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium; these instructions cause a computer, programmable data processing apparatus, and/or other equipment to operate in a specific manner, so that the computer-readable medium on which the instructions are stored comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other equipment, causing a series of operational steps to be performed on the computer, other programmable data processing apparatus, or other equipment to produce a computer-implemented process, so that the instructions executing on the computer, other programmable data processing apparatus, or other equipment implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or actions, or can be implemented in a combination of dedicated hardware and computer instructions.
- the computer program product can be implemented in hardware, software or a combination thereof.
- the computer program product may be embodied as a computer storage medium; in another optional embodiment, the computer program product may be embodied as a software product, such as a Software Development Kit (SDK), etc.
- Embodiments of the present disclosure relate to an image processing method and apparatus, an electronic device, a storage medium, and a program product.
- the method includes: acquiring a first segmentation result of a target object in a first image; acquiring a second segmentation result of the target object in a second image; and obtaining, according to the first segmentation result and the second segmentation result, a deformation field between the first image and the second image, wherein the deformation field includes a positional transformation relationship of each pixel of the target object between the first image and the second image.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Biomedical Technology (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Apparatus For Radiation Diagnosis (AREA)
- Image Analysis (AREA)
- Multi-Process Working Machines And Systems (AREA)
Abstract
Description
Claims (21)
- 1. An image processing method, comprising: obtaining a first segmentation result of a target object in a first image; obtaining a second segmentation result of the target object in a second image; and obtaining a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the position transformation relationship of each pixel of the target object between the first image and the second image.
- 2. The method according to claim 1, wherein the obtaining of the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result comprises: inputting the first segmentation result and the second segmentation result into a first neural network to obtain the deformation field between the first image and the second image.
- 3. The method according to claim 1, wherein the first image comprises a three-dimensional image and the second image comprises a two-dimensional image; and the obtaining of the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result comprises: converting the first segmentation result into a two-dimensional third segmentation result according to the acquisition information of the second image; and inputting the third segmentation result and the second segmentation result into the first neural network to obtain the deformation field between the first image and the second image.
- 4. The method according to claim 2 or 3, further comprising: registering the first image and the second image according to the deformation field to obtain a registration result.
- 5. The method according to claim 2 or 3, further comprising: obtaining an error loss of the first neural network according to the deformation field; and training the first neural network according to the error loss.
- 6. The method according to claim 5, wherein the obtaining of the error loss of the first neural network according to the deformation field comprises: registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second segmentation result as the error loss of the first neural network; or registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first segmentation result as the error loss of the first neural network; or registering the first segmentation result according to the deformation field to obtain a registered first segmentation result, and using the error between the registered first segmentation result and the second image as the error loss of the first neural network; or registering the second segmentation result according to the deformation field to obtain a registered second segmentation result, and using the error between the registered second segmentation result and the first image as the error loss of the first neural network.
- 7. The method according to any one of claims 2 to 6, wherein the obtaining of the first segmentation result of the target object in the first image comprises: inputting the first image into a second neural network to obtain the first segmentation result of the target object in the first image, wherein the second neural network is trained with first training images annotated with the target object; or inputting the first image into the first neural network to obtain the first segmentation result of the target object in the first image, wherein the first neural network is further used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- 8. The method according to any one of claims 2 to 7, wherein the obtaining of the second segmentation result of the target object in the second image comprises: inputting the second image into a third neural network to obtain the second segmentation result of the target object in the second image, wherein the third neural network is trained with second training images annotated with the target object; or inputting the second image into the first neural network to obtain the second segmentation result of the target object in the second image, wherein the first neural network is further used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- 9. The method according to any one of claims 1 to 8, wherein the first image comprises a CTA image, the second image comprises an X-ray image, and the target object comprises a coronary artery object.
- 10. An image processing apparatus, comprising: a first segmentation module configured to obtain a first segmentation result of a target object in a first image; a second segmentation module configured to obtain a second segmentation result of the target object in a second image; and a deformation field acquisition module configured to obtain a deformation field between the first image and the second image according to the first segmentation result and the second segmentation result, wherein the deformation field includes the position transformation relationship of each pixel of the target object between the first image and the second image.
- 11. The apparatus according to claim 10, wherein the deformation field acquisition module is further configured to: input the first segmentation result and the second segmentation result into a first neural network to obtain the deformation field between the first image and the second image.
- 12. The apparatus according to claim 10, wherein the first image comprises a three-dimensional image and the second image comprises a two-dimensional image; and the deformation field acquisition module is further configured to: convert the first segmentation result into a two-dimensional third segmentation result according to the acquisition information of the second image; and input the third segmentation result and the second segmentation result into the first neural network to obtain the deformation field between the first image and the second image.
- 13. The apparatus according to claim 11 or 12, further comprising: a registration module configured to register the first image and the second image according to the deformation field to obtain a registration result.
- 14. The apparatus according to claim 11 or 12, further comprising: an error acquisition module configured to obtain an error loss of the first neural network according to the deformation field; and a training module configured to train the first neural network according to the error loss.
- 15. The apparatus according to claim 11, wherein the error acquisition module is further configured to: register the first segmentation result according to the deformation field to obtain a registered first segmentation result, and use the error between the registered first segmentation result and the second segmentation result as the error loss of the first neural network; or register the second segmentation result according to the deformation field to obtain a registered second segmentation result, and use the error between the registered second segmentation result and the first segmentation result as the error loss of the first neural network; or register the first segmentation result according to the deformation field to obtain a registered first segmentation result, and use the error between the registered first segmentation result and the second image as the error loss of the first neural network; or register the second segmentation result according to the deformation field to obtain a registered second segmentation result, and use the error between the registered second segmentation result and the first image as the error loss of the first neural network.
- 16. The apparatus according to any one of claims 11 to 15, wherein the first segmentation module is further configured to: input the first image into a second neural network to obtain the first segmentation result of the target object in the first image, wherein the second neural network is trained with first training images annotated with the target object; or input the first image into the first neural network to obtain the first segmentation result of the target object in the first image, wherein the first neural network is further used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- 17. The apparatus according to any one of claims 11 to 16, wherein the second segmentation module is configured to: input the second image into a third neural network to obtain the second segmentation result of the target object in the second image, wherein the third neural network is trained with second training images annotated with the target object; or input the second image into the first neural network to obtain the second segmentation result of the target object in the second image, wherein the first neural network is further used to obtain the deformation field between the first image and the second image according to the first segmentation result and the second segmentation result.
- 18. The apparatus according to any one of claims 10 to 17, wherein the first image comprises a CTA image, the second image comprises an X-ray image, and the target object comprises a coronary artery object.
- 19. An electronic device, comprising: a processor; and a memory configured to store instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to perform the method according to any one of claims 1 to 9.
- 20. A computer-readable storage medium storing computer program instructions, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 9.
- 21. A computer program product storing computer-readable instructions, wherein the computer-readable instructions, when executed, implement the method according to any one of claims 1 to 9.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2021578004A JP2022543531A (en) | 2020-07-16 | 2020-12-28 | Image processing method and device, electronic equipment, storage medium and program product |
KR1020217043238A KR20220016212A (en) | 2020-07-16 | 2020-12-28 | Image processing method and apparatus, electronic device, storage medium and program product |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010686919.6 | 2020-07-16 | ||
CN202010686919.6A CN111798498A (en) | 2020-07-16 | 2020-07-16 | Image processing method and device, electronic equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022011984A1 true WO2022011984A1 (en) | 2022-01-20 |
Family
ID=72807442
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2020/140330 WO2022011984A1 (en) | 2020-07-16 | 2020-12-28 | Image processing method and apparatus, electronic device, storage medium, and program product |
Country Status (5)
Country | Link |
---|---|
JP (1) | JP2022543531A (en) |
KR (1) | KR20220016212A (en) |
CN (1) | CN111798498A (en) |
TW (1) | TWI767614B (en) |
WO (1) | WO2022011984A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111798498A (en) * | 2020-07-16 | 2020-10-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
TWI790508B (en) * | 2020-11-30 | 2023-01-21 | 宏碁股份有限公司 | Blood vessel detecting apparatus and blood vessel detecting method based on image |
CN112651931B (en) * | 2020-12-15 | 2024-04-26 | 浙江大华技术股份有限公司 | Building deformation monitoring method and device and computer equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170098311A1 (en) * | 2014-03-21 | 2017-04-06 | U.S. Department Of Veterans Affairs | Graph search using non-euclidean deformed graph |
CN111080680A (en) * | 2019-12-29 | 2020-04-28 | 苏州体素信息科技有限公司 | Patient-oriented three-dimensional chest organ reconstruction method and system |
CN111091589A (en) * | 2019-11-25 | 2020-05-01 | 北京理工大学 | Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning |
CN111260700A (en) * | 2020-01-09 | 2020-06-09 | 复旦大学 | Full-automatic registration and segmentation method for multi-parameter magnetic resonance image |
CN111798498A (en) * | 2020-07-16 | 2020-10-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5491929B2 (en) * | 2009-04-10 | 2014-05-14 | 株式会社東芝 | X-ray diagnostic apparatus and method |
US11938344B2 (en) * | 2018-07-24 | 2024-03-26 | Brainlab Ag | Beam path based patient positioning and monitoring |
CN111047629B (en) * | 2019-11-04 | 2022-04-26 | 中国科学院深圳先进技术研究院 | Multi-modal image registration method and device, electronic equipment and storage medium |
CN110930438B (en) * | 2019-11-22 | 2023-05-05 | 上海联影医疗科技股份有限公司 | Image registration method, device, electronic equipment and storage medium |
CN111161330B (en) * | 2019-12-20 | 2024-03-22 | 东软医疗系统股份有限公司 | Non-rigid image registration method, device, system, electronic equipment and storage medium |
-
2020
- 2020-07-16 CN CN202010686919.6A patent/CN111798498A/en active Pending
- 2020-12-28 KR KR1020217043238A patent/KR20220016212A/en active Search and Examination
- 2020-12-28 JP JP2021578004A patent/JP2022543531A/en active Pending
- 2020-12-28 WO PCT/CN2020/140330 patent/WO2022011984A1/en active Application Filing
-
2021
- 2021-03-16 TW TW110109421A patent/TWI767614B/en active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20170098311A1 (en) * | 2014-03-21 | 2017-04-06 | U.S. Department Of Veterans Affairs | Graph search using non-euclidean deformed graph |
CN111091589A (en) * | 2019-11-25 | 2020-05-01 | 北京理工大学 | Ultrasonic and nuclear magnetic image registration method and device based on multi-scale supervised learning |
CN111080680A (en) * | 2019-12-29 | 2020-04-28 | 苏州体素信息科技有限公司 | Patient-oriented three-dimensional chest organ reconstruction method and system |
CN111260700A (en) * | 2020-01-09 | 2020-06-09 | 复旦大学 | Full-automatic registration and segmentation method for multi-parameter magnetic resonance image |
CN111798498A (en) * | 2020-07-16 | 2020-10-20 | 上海商汤智能科技有限公司 | Image processing method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
TW202203854A (en) | 2022-02-01 |
CN111798498A (en) | 2020-10-20 |
TWI767614B (en) | 2022-06-11 |
KR20220016212A (en) | 2022-02-08 |
JP2022543531A (en) | 2022-10-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022011984A1 (en) | Image processing method and apparatus, electronic device, storage medium, and program product | |
WO2021147257A1 (en) | Network training method and apparatus, image processing method and apparatus, and electronic device and storage medium | |
WO2021051965A1 (en) | Image processing method and apparatus, electronic device, storage medium, and computer program | |
JP2022537974A (en) | Neural network training method and apparatus, electronic equipment and storage medium | |
CN112767329B (en) | Image processing method and device and electronic equipment | |
WO2021259391A2 (en) | Image processing method and apparatus, and electronic device and storage medium | |
CN112541928A (en) | Network training method and device, image segmentation method and device and electronic equipment | |
WO2022007342A1 (en) | Image processing method and apparatus, electronic device, storage medium, and program product | |
WO2021103554A1 (en) | Image positioning interactive display method, apparatus, electronic device, and storage medium | |
CN113222038B (en) | Breast lesion classification and positioning method and device based on nuclear magnetic image | |
KR102108418B1 (en) | Method for providing an image based on a reconstructed image group and an apparatus using the same | |
TWI765386B (en) | Neural network training and image segmentation method, electronic device and computer storage medium | |
CN113012166A (en) | Intracranial aneurysm segmentation method and device, electronic device, and storage medium | |
US20110122068A1 (en) | Virtual colonoscopy navigation methods using a mobile device | |
CN113469948B (en) | Left ventricle segment identification method and device, electronic equipment and storage medium | |
CN113902730A (en) | Image processing and neural network training method and device | |
Hsieh et al. | Markerless Augmented Reality via Stereo Video See‐Through Head‐Mounted Display Device | |
CN112767541A (en) | Three-dimensional reconstruction method and device, electronic equipment and storage medium | |
WO2022012038A1 (en) | Image processing method and apparatus, electronic device, storage medium and program product | |
WO2022179014A1 (en) | Image processing method and apparatus, electronic device, storage medium, and program product | |
WO2021120603A1 (en) | Target object display method and apparatus, electronic device and storage medium | |
Gong et al. | Intensity-mosaic: automatic panorama mosaicking of disordered images with insufficient features | |
CN113192606A (en) | Medical data processing method and device, electronic equipment and storage medium | |
CN113553460B (en) | Image retrieval method and device, electronic device and storage medium | |
WO2023032436A1 (en) | Medical image processing device, medical image processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2021578004 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20217043238 Country of ref document: KR Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 20945506 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 20945506 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.07.2023) |
|