CN114299547A - Method and system for determining region of target object - Google Patents

Method and system for determining region of target object

Info

Publication number
CN114299547A
Authority
CN
China
Prior art keywords
radiation
target object
information
determining
target
Prior art date
Legal status
Pending
Application number
CN202111649411.XA
Other languages
Chinese (zh)
Inventor
钟健
冯娟
徐亮
马艳歌
袁洲
Current Assignee
Shanghai United Imaging Healthcare Co Ltd
Original Assignee
Shanghai United Imaging Healthcare Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai United Imaging Healthcare Co Ltd filed Critical Shanghai United Imaging Healthcare Co Ltd
Priority to CN202111649411.XA (published as CN114299547A)
Publication of CN114299547A
Priority to PCT/CN2022/143012 (published as WO2023125720A1)
Priority to EP22914970.3A (published as EP4330935A1)
Legal status: Pending

Abstract

The embodiments of the present specification provide a method and a system for determining the region where a target object is located. The method comprises the following steps: acquiring a target image, wherein the target image comprises at least one target object and a radioactive medical device; determining target object information based on the target image, wherein the target object information includes a target object position; determining radiation region information of the radioactive medical device based on the target image; and determining whether the target object is close to or located in the radiation region based on the target object information and the radiation region information; wherein the determining of the radiation region information of the radioactive medical device and/or the determining of the target object information is at least partially achieved by machine learning.

Description

Method and system for determining region of target object
Technical Field
The present disclosure relates to the medical field, and in particular, to a method and a system for determining a region where a target object is located.
Background
In medical imaging examinations or surgical navigation using a radioactive medical device (e.g., a CT device, a C-arm device, a DR device, an RT device, or a PET device), scattered radiation or stray portions of the ray beam may irradiate medical staff during patient imaging. Such radiation cannot be perceived by the medical staff, which creates a potential radiation risk over repeated imaging procedures.
Therefore, it is desirable to provide a method and a system for determining the region where a target object is located, which prompt the target object to move away from the radiation region when the target object is close to or located in the radiation region.
Disclosure of Invention
One aspect of the present specification provides a method for determining a region where a target object is located, the method including: acquiring a target image, wherein the target image comprises at least one target object and a radioactive medical device; determining target object information based on the target image, wherein the target object information includes a target object position; determining radiation region information of the radioactive medical device based on the target image; and determining whether the target object is close to or located in the radiation region based on the target object information and the radiation region information; wherein the determining of the radiation region information of the radioactive medical device and/or the determining of the target object information is at least partially achieved by machine learning.
Another aspect of the present description provides a system for determining a region in which a target object is located. The system comprises: an acquisition module for acquiring a target image, the target image including at least one target object and a radioactive medical device; a determination module for determining target object information based on the target image; wherein the target object information includes a target object location; determining radiation region information of the radioactive medical device based on the target image; determining whether a target object is close to or located in a radiation area based on the target object information and the radiation area information; wherein the determining radiation region information of the radioactive medical device and/or the determining target object information is at least partially achieved by machine learning.
Another aspect of the present specification provides an apparatus for determining a region in which a target object is located, the apparatus comprising at least one processor and at least one storage device, the storage device being configured to store instructions that, when executed by the at least one processor, implement the method as described above.
Another aspect of the present specification provides a computer-readable storage medium storing computer instructions which, when read by a computer, cause the computer to perform the method as described above.
Drawings
The present description will be further explained by way of exemplary embodiments, which will be described in detail by way of the accompanying drawings. These embodiments are not intended to be limiting, and in these embodiments like numerals are used to indicate like structures, wherein:
FIG. 1 is a schematic diagram of an application scenario of a target object location area determination system according to some embodiments of the present description;
FIG. 2 is an exemplary block diagram of a target object location area determination system according to some embodiments of the present description;
FIG. 3 is an exemplary flow diagram of a method for determining a region in which a target object is located according to some embodiments of the present description;
FIG. 4 is a schematic illustration of an exemplary target image shown in accordance with some embodiments of the present description;
FIG. 5 is an exemplary flow diagram of a method of machine learning model training according to some embodiments of the present description.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the description of the embodiments will be briefly described below. It is obvious that the drawings in the following description are only examples or embodiments of the present description, and that for a person skilled in the art, the present description can also be applied to other similar scenarios on the basis of these drawings without inventive effort. Unless otherwise apparent from the context, or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system", "apparatus", "unit" and/or "module" as used herein is a method for distinguishing different components, elements, parts, portions or assemblies at different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" do not refer exclusively to the singular and may also include the plural, unless the context clearly dictates otherwise. In general, the terms "comprise" and "include" merely indicate that the explicitly identified steps and elements are included; the steps and elements do not form an exclusive list, and a method or apparatus may also include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by systems according to embodiments of the specification, with relevant descriptions to facilitate a better understanding of medical imaging methods and/or systems. It should be understood that the operations are not necessarily performed in the exact order shown. Rather, the various steps may be processed in reverse order or simultaneously. Moreover, other operations may be added to these processes, or one or more operations may be removed from them.
Fig. 1 is a schematic diagram of an application scenario of a target object location area determination system 100 according to some embodiments of the present disclosure.
As shown in fig. 1, the target object located area determination system 100 may include an image pickup device 110, a network 120, at least one terminal 130, a processing device 140, and a storage device 150. The various components of the system 100 may be interconnected by a network 120. For example, the image pickup apparatus 110 and the at least one terminal 130 may be connected or communicate through the network 120.
The image capture device 110 may include a video camera 111, a still camera 112, a camcorder 113, and the like. In some embodiments, the image capture device 110 may be a 2D camera device, a 3D camera device, or the like. In some embodiments, the image capture device 110 may be installed in a target scene, and capture the target scene to obtain a target image of the target scene. In some embodiments, the target scene may be any scene that requires monitoring or supervision. In some embodiments, the target scene may be a medical imaging examination scene, or a scene for surgical navigation using a medical device. In some embodiments, a radioactive medical device (not shown in FIG. 1) may also be included in the target scene. The image capture device 110 captures the medical imaging examination scene or the surgical navigation scene, and obtains a target image of the scene to determine whether a target object is close to or located in a radiation region.
Network 120 may include any suitable network capable of facilitating the exchange of information and/or data for the target object located area determination system 100. In some embodiments, at least one component of the target object located area determination system 100 (e.g., the camera device 110, the processing device 140, the storage device 150, the at least one terminal 130) may exchange information and/or data with at least one other component of the system 100 via the network 120. For example, the processing device 140 may obtain the target image from the image capturing device 110 through the network 120. As another example, the processing device 140 may obtain current radiation information (e.g., bulb position, detector position, reference object body thickness, and/or radiation parameters) from the radioactive medical device via the network 120. Network 120 may be or include a public network (e.g., the Internet), a private network (e.g., a Local Area Network (LAN)), a wired network, a wireless network (e.g., an 802.11 network, a Wi-Fi network), a frame relay network, a Virtual Private Network (VPN), a satellite network, a telephone network, a router, a hub, a switch, a server computer, and/or any combination thereof. For example, network 120 may include a wireline network, a fiber optic network, a telecommunications network, an intranet, a Wireless Local Area Network (WLAN), a Metropolitan Area Network (MAN), a Public Switched Telephone Network (PSTN), a Bluetooth™ network, a ZigBee™ network, a Near Field Communication (NFC) network, the like, or any combination thereof. In some embodiments, network 120 may include at least one network access point. For example, network 120 may include wired and/or wireless network access points, such as base stations and/or internet exchange points, through which at least one component of the system 100 may connect to the network 120 to exchange data and/or information.
The at least one terminal 130 may be in communication with and/or connected to the imaging device 110, the processing device 140, the storage device 150, and/or the radioactive medical device. For example, the photographer may adjust current shooting parameters (e.g., shooting angle, focal length, field angle, aperture, etc.) of the image capturing apparatus 110 through the at least one terminal 130. For another example, the photographer may input the shooting parameters through the at least one terminal 130, and the parameters may be stored in the storage device 150 by the processing device 140. For another example, the shooting parameters may be displayed on the terminal 130. As another example, the photographer may obtain or adjust the current radiation information of the radioactive medical device through the at least one terminal 130. In some embodiments, the at least one terminal 130 may include a mobile device 131, a tablet computer 132, a laptop computer 133, and the like, or any combination thereof. For example, the mobile device 131 may include a mobile joystick, a Personal Digital Assistant (PDA), a smart phone, or the like, or any combination thereof.
In some embodiments, at least one terminal 130 may include an input device, an output device, and the like. The input device may be selected from keyboard input, touch screen (e.g., with tactile or haptic feedback) input, voice input, eye tracking input, gesture tracking input, brain monitoring system input, image input, video input, or any other similar input mechanism. Input information received via the input device may be transmitted, for example, via a bus, to the processing device 140 for further processing. Other types of input devices may include cursor control devices such as a mouse, a trackball, or cursor direction keys, among others. In some embodiments, the photographer may input the photographing parameters through an input device. Output devices may include a display, speakers, printer, indicator lights, etc., or any combination thereof. In some embodiments, an output device may be used to output shooting parameters and the like. In some embodiments, the output device may receive instructions from the processing device 140 to prompt (e.g., voice broadcast, flashing indicator lights, beeping, etc.) the target object when it is near or within the irradiation zone. In some embodiments, at least one terminal 130 may be part of the processing device 140.
The processing device 140 may process data and/or information obtained from the imaging device 110, the storage device 150, the at least one terminal 130, or other components of the target object located region determination system 100 (e.g., a radioactive medical device). For example, the processing device 140 may acquire a target image from the image capturing device 110. As another example, the processing device 140 determines target object information and/or radiation region information based on the target image. As another example, the processing device 140 may obtain current radiation information from the radioactive medical device or the storage device 150 and determine radiation region information based on the current radiation information. For another example, the processing device 140 determines whether the target object is close to or located in the radiation area based on the target object information and the radiation area information, and controls the at least one terminal 130 to prompt when the target object is close to or located in the radiation area. In some embodiments, the processing device 140 may be a single server or a group of servers. The server groups may be centralized or distributed. In some embodiments, the processing device 140 may be local or remote. For example, the processing device 140 may access information and/or data from the camera device 110, the storage device 150, and/or the at least one terminal 130 via the network 120. As another example, the processing device 140 may be directly connected to the camera device 110, the at least one terminal 130, and/or the storage device 150 to access information and/or data. In some embodiments, the processing device 140 may be implemented on a cloud platform. For example, the cloud platform may include a private cloud, a public cloud, a hybrid cloud, a community cloud, a distributed cloud, an inter-cloud, a multi-cloud, and the like, or any combination thereof.
Storage device 150 may store data, instructions, and/or any other information. For example, historical photographing parameters, historical target images, current radiation information, and the like. In some embodiments, the storage device 150 may store data obtained from the camera device 110, the at least one terminal 130, and/or the processing device 140. In some embodiments, storage device 150 may store data and/or instructions that processing device 140 uses to perform or use to perform the exemplary methods described in this specification. In some embodiments, the storage device 150 may include mass storage, removable storage, volatile read-write memory, read-only memory (ROM), and the like, or any combination thereof. In some embodiments, the storage device 150 may be implemented on a cloud platform.
In some embodiments, the storage device 150 may be connected to the network 120 to communicate with at least one other component (e.g., the processing device 140, the at least one terminal 130) in the target object located zone determination system 100. At least one component of the target object location area determination system 100 may access data (e.g., historical photographic parameters) stored in the storage device 150 via the network 120. In some embodiments, the storage device 150 may be part of the processing device 140.
It should be noted that the foregoing description is provided for illustrative purposes only and is not intended to limit the scope of the present description. Many variations and modifications may be made by one of ordinary skill in the art in light of the teachings of this specification. The features, structures, methods, and other characteristics of the exemplary embodiments described herein may be combined in various ways to obtain additional and/or alternative exemplary embodiments. For example, the storage device 150 may be a data storage device on a cloud computing platform, such as a public cloud, a private cloud, a community cloud, a hybrid cloud, and the like. However, such changes and modifications do not depart from the scope of the present specification.
FIG. 2 is an exemplary block diagram of a target object location area determination system according to some embodiments of the present description.
As shown in fig. 2, in some embodiments, the target object location area determination system 200 may include an acquisition module 210, a determination module 220, and a prompt module 230. In some embodiments, one or more modules in the target object location area determination system 200 may be interconnected. The connection may be wireless or wired. At least a portion of the target object located area determination system 200 may be implemented on the processing device 140 or the terminal 130 as shown in fig. 1.
The acquisition module 210 may be used to acquire a target image. In some embodiments, at least one target object and a radioactive medical device may be included in the target image. For more details about obtaining the target image, reference may be made to the flowchart of fig. 3 and the description thereof, which are not described herein again.
The determination module 220 may be used to determine whether the target object is near or at the irradiation zone. In some embodiments, the determination module 220 may be configured to determine target object information based on the target image. In some embodiments, the target object information may include a target object location. In some embodiments, the determination module 220 may be configured to determine radiation region information of the radioactive medical device based on the target image. In some embodiments, the determination module 220 may be configured to determine whether the target object is near or within the irradiation region based on the target object information and the irradiation region information. In some embodiments, determining radiation region information of a radioactive medical device and/or determining target object information may be accomplished, at least in part, through machine learning. For more details on determining whether the target object is close to or located in the irradiation region, reference may be made to the flowchart of fig. 3 and the description thereof, which are not repeated herein.
The prompting module 230 can be used to prompt in response to the target object being near or at the irradiation region. For more contents of prompting, reference may be made to the flowchart in fig. 3 and the description thereof, which are not described herein again.
In some embodiments, the system 200 for determining the region where the target object is located may further include a model training module 240. In some embodiments, the model training module 240 may be configured to obtain training samples and train the initial model based on the training samples and the labeling results, respectively, to obtain a first machine learning model and a second machine learning model. For more details on the machine learning model training, reference may be made to fig. 5 and the description thereof, which are not described herein again.
It should be noted that the above description of the target object located region determining system 200 is for illustrative purposes only and is not intended to limit the scope of the present application. Various modifications and adaptations may occur to those skilled in the art in light of the present application. However, such changes and modifications do not depart from the scope of the present application. For example, one or more modules of the above-described target object location area determination system 200 may be omitted or integrated into a single module. As another example, the target object located region determination system 200 may include one or more additional modules, such as a storage module for data storage. For another example, the system 200 for determining the region where the target object is located may omit the prompt module 230, and only include the obtaining module 210, the determining module 220, and the model training module 240.
Fig. 3 is an exemplary flow chart of a method for determining a region in which a target object is located according to some embodiments of the present description. FIG. 4 is a schematic illustration of an exemplary target image shown in accordance with some embodiments of the present description.
The process 300 may be performed by the processing device 140. For example, the process 300 may be implemented as a set of instructions (e.g., an application) stored in a storage device (e.g., the storage device 150) accessible by the target object located area determination system (e.g., the target object located area determination system 100 or the target object located area determination system 200). The processing device 140 may execute the set of instructions and, when executing the instructions, may be configured to perform the process 300. The operations of the process 300 presented below are intended to be illustrative. In some embodiments, the process may be accomplished with one or more additional operations not described and/or without one or more of the operations discussed. Additionally, the order in which the operations of the process 300 are illustrated in FIG. 3 and described below is not intended to be limiting.
Step 310, a target image is acquired. In some embodiments, step 310 may be performed by processing device 140 or acquisition module 210.
In some embodiments, the target image may be a captured image of the target scene. In some embodiments, the target scene may be a scene for medical imaging examination using a radioactive medical device (e.g., a CT device, a C-arm device, a DR device, an RT device, or a PET device), or for surgical navigation.
In some embodiments, the target image may contain one or more target objects and a radioactive medical device. In some embodiments, the target object may be an object to be determined in the target image. In some embodiments, the target object may comprise a medical professional. Such as a doctor or nurse.
In some embodiments, the target image may comprise a two-dimensional image. In some embodiments, the target image may comprise a three-dimensional image. In some embodiments, the target image may include one or more images. In some embodiments, the target image may be captured by the imaging device 110.
In some embodiments, the processing device 140 may acquire the target image by the imaging device 110. In some embodiments, the processing device 140 may acquire a two-dimensional image containing the target object directly through the 2D camera device. In some embodiments, the processing device 140 may acquire a three-dimensional image containing the target object directly through the 3D camera device. Specifically, the 3D camera device can accurately detect the distance from each point in the image to the camera, thereby obtaining the three-dimensional space coordinates of each point in the image, and then modeling through the three-dimensional space coordinates to obtain a three-dimensional image (i.e., a three-dimensional model) including the target object. In some embodiments, the processing device 140 may further acquire two-dimensional images of the target object through several 2D imaging devices, and then perform three-dimensional reconstruction according to the two-dimensional image data, so as to obtain a three-dimensional image containing the target object.
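For illustration only, the following is a minimal sketch of how depth values from a 3D camera device could be converted into three-dimensional space coordinates, assuming a pinhole camera model in which the depth value is measured along the optical axis; the function name, intrinsic parameters, and array sizes are illustrative assumptions rather than part of the original disclosure.

```python
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Back-project an HxW depth map (depth along the optical axis, in meters)
    into an HxWx3 array of (X, Y, Z) coordinates in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx   # horizontal offset from the optical axis
    y = (v - cy) * depth / fy   # vertical offset from the optical axis
    return np.stack([x, y, depth], axis=-1)

# Example: a synthetic 480x640 depth map with every point 2 m from the camera.
points = depth_to_points(np.full((480, 640), 2.0),
                         fx=600.0, fy=600.0, cx=320.0, cy=240.0)
```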
Step 320, determining target object information based on the target image. In some embodiments, step 320 may be performed by processing device 140 or determination module 220.
In some embodiments, the target object information may include a target object position. In some embodiments, the target object position may be coordinates of the target object. In some embodiments, the coordinates of the target object may be coordinates in a rectangular coordinate system established with a point in the target image as an origin. In some embodiments, the coordinates of the target object may also be coordinates in a rectangular coordinate system or a three-dimensional coordinate system established with a point in the target scene as an origin. In some embodiments, the coordinates of the target object may be a set of coordinate points on a rectangular frame of the area where the target object is located, the coordinates of the center point of the target object, or a set of coordinate points on a side of the rectangular frame of the area where the target object is located. For example, the coordinates of the target object in FIG. 4 may be a set of coordinate points on the rectangular frame A, the coordinates of the center point of the rectangular frame A, or a set of coordinate points on a side (for example, the right side) of the rectangular frame A.
In some embodiments, the target object information may also include a target object category and/or a confidence level. In some embodiments, the target object categories may be divided by responsibility. In some embodiments, the target object categories may include doctors, nurses, and the like. In some embodiments, the confidence level may indicate the probability that the target object is present at the target object position. For example, in FIG. 4 the confidence level indicates the probability that a healthcare worker is present within the rectangular frame A. In some embodiments, the confidence level may be set according to the accuracy required in practice. For example, when the presence of the target object at the target object position needs to be indicated with high accuracy, the confidence level may be set to 95%; when high accuracy is not required, the confidence level may be set to 60%.
In some embodiments, the processing device 140 may determine target object information based on the target image. In some embodiments, the processing device 140 may process the target image using the first machine learning model to determine target object information. In particular, the processing device 140 may input the target image into a first machine learning model that outputs the target object location, the target object category, and the corresponding confidence level. In some embodiments, the target object position, the target object category, and the corresponding confidence level output by the first machine learning model may be displayed by using a marked frame or a marked point on the target image for the photographer to observe. In some embodiments, the first machine learning model may include a deep learning model. In some embodiments, the deep learning models can include a Yolo series deep learning model, an SSD model, an SPP-Net model, an R-CNN model, a Fast R-CNN model, a Faster R-CNN model, an R-FCN model, and the like.
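For illustration only, the following is a minimal sketch of this detection step using an off-the-shelf Faster R-CNN detector from torchvision (one of the model families listed above) as a stand-in for the first machine learning model. In practice the model would be trained on scene-specific categories such as doctor and nurse; the pretrained weights, class indices, and confidence threshold here are illustrative assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained detector as a stand-in for the first machine learning model
# (older torchvision versions use pretrained=True instead of weights="DEFAULT").
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_target_objects(image_path: str, score_threshold: float = 0.6):
    """Return detected objects as position (bounding box), category, and confidence."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        prediction = model([image])[0]  # dict with "boxes", "labels", "scores"
    results = []
    for box, label, score in zip(prediction["boxes"], prediction["labels"],
                                 prediction["scores"]):
        if score >= score_threshold:  # keep detections above the confidence level
            results.append({"position": box.tolist(),      # target object position
                            "category": int(label),        # target object category
                            "confidence": float(score)})   # confidence level
    return results
```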
In some embodiments, the processing device 140 may further obtain data such as a target image, a target object position, a target object category, and a corresponding confidence level, to serve as a training sample to iteratively update the first machine learning model, so as to improve the accuracy of the first machine learning model in outputting the target object information. For more details on the first machine learning model training method, refer to fig. 5 and its related description, which are not repeated herein.
In some embodiments, the processing device 140 may also determine target object information based on the target image through image recognition techniques.
Step 330, determining radiation region information of the radioactive medical device based on the target image. In some embodiments, step 330 may be performed by the processing device 140 or the determination module 220.
In some embodiments, the radiation region information of the radioactive medical device may be information related to the radiation region generated by the radioactive medical device. In some embodiments, the radiation region may be a region where the radiation dose is greater than a dose threshold. In some embodiments, the radiation region may be a regular or irregular sphere or circle. It will be appreciated that for a three-dimensional target image, the radiation region may be a regular or irregular sphere; for a two-dimensional target image, the radiation region may be a regular or irregular circle. In some embodiments, the radiation region information may include a set of coordinates within the radiation region and the radiation dose at each coordinate within the radiation region.
To more clearly illustrate the radiation region and the related content of the radiation region information, the following description takes the two-dimensional target image in FIG. 4 as an example. It should be understood that this description of FIG. 4 is provided for illustrative purposes only and is not intended to limit the scope of the present specification; the technical solutions in the embodiments of the present specification may also be applied to a three-dimensional target image or a three-dimensional radiation region.
As shown in FIG. 4, the bulb 410 of the C-arm device may emit an X-ray beam toward a reference object (e.g., a patient), which is received by the detector 430 after passing through the reference object and the scanning bed 420. The X-ray beam scatters as it passes through the reference object and the scanning bed 420, and the scattered radiation, together with the X-ray beam itself, forms a radiation field in the vicinity of the reference object; in the two-dimensional target image, this field forms an approximately circular radiation region centered on the reference object. Since the target object (the medical staff) is generally located opposite the C-arm when the C-arm device is in use (for example, in FIG. 4 the medical staff and the C-arm are located on the left and right sides of the reference object, respectively), only the radiation region near the target object, i.e., the region on the right side of the arc C in FIG. 4, may be considered in determining the two-dimensional radiation region. Since the region on the left of arc C is far from the reference object, its radiation dose is less than the dose threshold, and it can be considered a non-radiation region. In some embodiments, in the two-dimensional target image shown in FIG. 4, the radiation region information may be the coordinates of each point on the arc C at the edge of the radiation region and in the region on its right side, together with the radiation dose at each point.
In order to protect the medical staff from radiation, the medical staff should be prevented from entering the radiation region where the radiation field is located as far as possible. Therefore, more accurate radiation region information, particularly the coordinates of the arc C at the edge of the radiation region, needs to be determined.
In some embodiments, the processing device 140 may obtain the radiation parameters from the radioactive medical device or the storage device 150 as the first radiation information. In some embodiments, the radiation parameters may include one or more of the tube voltage of the bulb, the tube current, the effective pulse time, the radiation dose of the rays, and the incident area of the rays.
In some embodiments, the processing device 140 may further determine second radiation information based on the target image; and determining radiation region information based on the first radiation information and the second radiation information.
In some embodiments, the processing device 140 may process the target image using the first machine learning model to determine the second radiation information. In some embodiments, the second radiation information may comprise at least a bulb position, a detector position, a reference object position and/or a reference object body thickness. In some embodiments, the bulb position may be the coordinates of the bulb in the coordinate system in which the target image is located. For example, the coordinates of the bulb 410 in the plane coordinate system of the target image as shown in FIG. 4. In some embodiments, the detector position may be the coordinates of the detector in the coordinate system in which the target image is located. For example, the coordinates of the detector 430 in the plane coordinate system of the target image as shown in FIG. 4. In some embodiments, the reference object may be a patient or an examinee. In some embodiments, the reference object position may be the coordinates of the reference object in the coordinate system in which the target image is located. For example, the coordinates of the reference object in the plane coordinate system of the target image as shown in FIG. 4. In some embodiments, the coordinates may be a set of coordinate points on a rectangular frame of the area where the bulb, the detector and/or the reference object are located, or may be the coordinates of the center point of the bulb, the detector and/or the reference object. For example, the coordinates of the reference object in FIG. 4 may be a set of coordinate points on the rectangular frame B, or may be the coordinates of the center point B0 of the reference object. In some embodiments, the reference object body thickness may be the thickness of the reference object in the coordinate system in which the target image is located. For example, the height of the rectangular frame in which the reference object is located in the plane coordinate system of the target image as shown in FIG. 4. In some embodiments, the first radiation information and the second radiation information may be current radiation information.
In some embodiments, the processing device 140 may input the target image into a first machine learning model that outputs the bulb position, the detector position, and/or the reference object position. In some embodiments, the bulb position, the detector position, and/or the reference object position output by the first machine learning model may be displayed as marked frames or marked points on the target image for the photographer to observe. For more details on the first machine learning model, refer to step 320 and its related description, which are not repeated herein.
In some embodiments, the processing device 140 may further obtain a target image and data of a position of the bulb, a position of the detector, and/or a position of the reference object, so as to perform an iterative update on the first machine learning model as a training sample, so as to improve the accuracy of the first machine learning model in outputting the current radiation information. For more details on the first machine learning model training method, refer to fig. 5 and its related description, which are not repeated herein.
In some embodiments, the processing device 140 may obtain the current radiation information of the radioactive medical device directly from the radioactive medical device or the storage device 150 based on the target image.
In some embodiments, the processing device 140 may determine the radiation region information based on the current radiation information.
In some embodiments, the processing device 140 may obtain a reference radiation information table. In some embodiments, the reference radiation information table may reflect a mapping relationship between the current radiation information and the radiation distance with the reference object as a base point. In some embodiments, the radiation distance with the reference object as a base point may be the distance from the reference object position to the edge of the radiation region. For example, the distance from the reference object center B0 in FIG. 4 to each point on the arc C at the edge of the radiation region. In some embodiments, to improve the processing or computational efficiency of the processing device 140, the radiation distance with the reference object as a base point may be the horizontal radiation distance d from the reference object center B0 to the arc C at the edge of the radiation region. In some embodiments, as shown in FIG. 4, the reference radiation information table may contain the mapping relationship between current radiation information such as the position of the bulb 410, the position of the detector 430, the position B0 of the reference object, the body thickness of the reference object, and the radiation parameters, and the radiation distance d. In some embodiments, as shown in FIG. 4, the reference radiation information table may contain the mapping relationship between current radiation information such as the position of the bulb 410, the position B0 of the reference object, and the radiation parameters, and the radiation distance d.
In some embodiments, the processing device 140 may create a reference radiation information table by establishing a mapping relationship between the position of the bulb, the position of the detector, the position of the reference object, and the radiation distance through a plurality of experimental tests in advance. In some embodiments, the reference radiation information table may further include experimental radiation doses of the radioactive device, experimental shooting parameters (e.g., shooting angle, focal length, field angle, aperture, etc.) of the image pickup device 110, and the like.
In some embodiments, the imaging device 110 may be adjusted to perform multiple radiation experiments for different experimental imaging parameters and different experimental radiation doses for the radioactive medical device, while measuring the experimental radiation dose within a certain distance from the experimental reference object using a radiation dose testing device (e.g., an X-ray dosimeter). In some embodiments, the processing device 140 may determine, according to the above-mentioned multiple experimental data, a corresponding experimental radiation zone, which includes an arc of the edge of the experimental radiation zone (e.g., arc C in fig. 4) and an experimental radiation distance in the horizontal direction from the center of the experimental reference object to the arc of the experimental radiation zone. In some embodiments, the processing device 140 may further perform multiple sets of interpolation processing on the basis of the above experimental data, so as to further refine the reference radiation information table.
In some embodiments, the processing device 140 may determine the radiation region information based on the current radiation information and a reference radiation information table. In some embodiments, the processing device 140 may select, based on the current radiation information, an experimental shooting parameter and an experimental radiation dose that are the same as the shooting parameter of the imaging device 110 and the radiation dose of the radioactive medical device in the current scene from the reference radiation information table, and determine an arc line of an edge of an experimental radiation region and an experimental radiation distance corresponding to the experimental shooting parameter and the experimental radiation dose, that is: in the current scene, the arc line of the edge of the irradiation area and the irradiation distance with the reference object as a base point.
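For illustration only, the following is a minimal sketch of looking up the radiation distance d in a reference radiation information table; the record fields, example values, and nearest-match selection rule are illustrative assumptions, not the actual experimental data or matching procedure.

```python
from dataclasses import dataclass

@dataclass
class RadiationRecord:
    tube_voltage_kv: float        # experimental tube voltage
    tube_current_ma: float        # experimental tube current
    body_thickness_cm: float      # experimental reference object body thickness
    radiation_distance_cm: float  # measured horizontal radiation distance d

# Hypothetical reference radiation information table built from experiments.
REFERENCE_TABLE = [
    RadiationRecord(70.0, 5.0, 20.0, 80.0),
    RadiationRecord(80.0, 5.0, 20.0, 95.0),
    RadiationRecord(80.0, 8.0, 25.0, 110.0),
]

def lookup_radiation_distance(voltage: float, current: float, thickness: float) -> float:
    """Pick the record whose parameters are closest to the current radiation
    information and return its radiation distance d."""
    def mismatch(rec: RadiationRecord) -> float:
        return (abs(rec.tube_voltage_kv - voltage)
                + abs(rec.tube_current_ma - current)
                + abs(rec.body_thickness_cm - thickness))
    return min(REFERENCE_TABLE, key=mismatch).radiation_distance_cm

d = lookup_radiation_distance(voltage=78.0, current=5.0, thickness=21.0)  # -> 95.0
```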
Because the experimental shooting parameters and the experimental radiation dose of the imaging device 110 and the radioactive medical device in the reference radiation information table are consistent or similar to the shooting parameters of the imaging device 110 and the radiation dose of the radioactive medical device when the target image is shot in the current scene, the determined radiation area information is more accurate.
In some embodiments, the processing device 140 may process the current radiation information using a second machine learning model to determine radiation region information. In particular, the processing device 140 may input the current radiation information into the second machine learning model, which outputs the radiation region information. In some embodiments, the arc at the edge of the radiation region and the radiation distance output by the second machine learning model can be displayed as marked lines or marked points on the target image for the photographer to observe. In some embodiments, the second machine learning model may comprise a neural network model. In some embodiments, the neural network model may include a Convolutional Recurrent Neural Network (CRNN), a Convolutional Neural Network (CNN), a Deep Convolutional Neural Network (DCNN), a Recurrent Neural Network (RNN), and the like.
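For illustration only, the following is a minimal sketch of a possible second machine learning model: a small fully connected neural network that maps a feature vector of current radiation information to the radiation distance d. The architecture, feature layout, and feature count are assumptions and are not the specific model of the embodiments.

```python
import torch
from torch import nn

class RadiationRegionModel(nn.Module):
    """Maps current radiation information to a predicted radiation distance d."""
    def __init__(self, num_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),   # predicted radiation distance d
        )

    def forward(self, current_radiation_info: torch.Tensor) -> torch.Tensor:
        return self.net(current_radiation_info)

model = RadiationRegionModel()
# One illustrative sample: [bulb_x, bulb_y, detector_x, detector_y,
#                           ref_x, ref_y, body_thickness, tube_voltage]
predicted_d = model(torch.randn(1, 8))
```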
In some embodiments, data such as current radiation information and radiation region information may also be obtained to serve as a training sample to perform iterative update on the second machine learning model, so as to improve the accuracy of the second machine learning model in outputting radiation region information. For more details on the second machine learning model training method, refer to fig. 5 and its related description, which are not repeated herein.
As can be seen from the above description, the processing device 140 may process the target image by using the first machine learning model to determine the second radiation information, or the processing device 140 may directly obtain the first radiation information of the radioactive medical device from the radioactive medical device or the storage device 150 based on the target image, and the first radiation information and the second radiation information are used as the current radiation information; and the processing device 140 may determine the radiation region information based on the current radiation information and the reference radiation information table, or the processing device 140 may process the current radiation information using a second machine learning model to determine the radiation region information. Thus, in determining radiation region information of a radioactive medical device, at least part or all of the steps or processes are implemented by machine learning models.
Step 340, determining whether the target object is close to or in the radiation area based on the target object information and the radiation area information. In some embodiments, step 340 may be performed by processing device 140 or determination module 220.
In some embodiments, the processing device 140 may determine a separation distance of the target object from the reference object based on the target object position and the reference object position. For more details on the target object position and the reference object position, reference may be made to the descriptions of step 320 and step 330, which are not repeated herein. In some embodiments, the processing device 140 may determine the separation distance in the horizontal direction between the target object and the reference object based on the coordinate point set of the side of the rectangular frame of the region in which the target object is located and the center point coordinates of the reference object. For example, the horizontal separation distance D from the center point B0 of the reference object in FIG. 4 to the right side of the rectangular frame A.
In some embodiments, the processing device 140 may determine that the target object is in the radiation region when the separation distance is less than the radiation distance. For example, in FIG. 4, when the separation distance D is less than the radiation distance d, the processing device 140 may determine that the target object is in the radiation region. In some embodiments, the processing device 140 may determine that the target object is not in the radiation region when the separation distance is equal to or greater than the radiation distance. In this specification, "located in the radiation region" and "in the radiation region" are used interchangeably.
In some embodiments, the processing device 140 may determine that the target object is proximate to the radiation zone when the separation distance is greater than the radiation distance and the difference from the radiation distance is less than a distance threshold (e.g., 1cm, 5cm, or 10 cm).
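For illustration only, the following is a minimal sketch of the comparison described above, classifying the target object as inside, near, or outside the radiation region; the default distance threshold is taken from the example values in the text.

```python
def classify_position(separation_distance: float, radiation_distance: float,
                      distance_threshold: float = 10.0) -> str:
    """Compare the separation distance D with the radiation distance d."""
    if separation_distance < radiation_distance:
        return "inside"    # target object is in the radiation region
    if separation_distance - radiation_distance < distance_threshold:
        return "near"      # target object is close to the radiation region
    return "outside"

# Example with D = 92 and d = 95 (units as in the target image, e.g. cm): -> "inside"
status = classify_position(separation_distance=92.0, radiation_distance=95.0)
```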
Step 350, prompting in response to the target object approaching or being located in the radiation region. In some embodiments, step 350 may be performed by the processing device 140 or the prompting module 230.
In some embodiments, the prompting means may include voice announcement, flashing of an indicator light, beeping, etc.
In some embodiments, the processing device 140 may control an output device (e.g., an audio device, an indicator light, etc.) of the terminal 130 to prompt when the target object is near or in the irradiation zone. In some embodiments, the processing device 140 may send a prompt instruction to the terminal 130 to cause its output device to voice-report, flash an indicator light, or beep.
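For illustration only, the following is a minimal sketch of dispatching such a prompt instruction; the terminal interface used here (voice_broadcast, flash_indicator_light, beep) is a hypothetical placeholder rather than an actual device API.

```python
def prompt_if_needed(status: str, terminal) -> None:
    """Send a prompt instruction to the terminal's output devices when the
    target object is near or inside the radiation region."""
    if status in ("near", "inside"):
        terminal.voice_broadcast("Please move away from the radiation region.")
        terminal.flash_indicator_light()
        terminal.beep()
```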
When the target object is close to or located in the radiation region, a prompt is issued to remind the target object to move away from the radiation region, so that the radiation received by the target object can be reduced.
It should be noted that the above description of flow 300 is provided for illustrative purposes only and is not intended to limit the scope of the present application. Various changes and modifications will occur to those skilled in the art based on the description herein. In some embodiments, flow 300 may include one or more additional operations, or may omit one or more of the operations described above. For example, step 320 and step 330 may be integrated into one operation. For another example, the technical solution of determining whether the target object is close to or located in the radiation region in two-dimensional space may also be applied to three-dimensional space. As another example, process 300 may omit step 350. For another example, the separation distance may also be stored in the storage device 150 in response to the target object approaching or being located in the radiation region, so as to facilitate subsequent operations (e.g., medical image examination of the target object, or analysis of the procedure during surgery). However, such changes and modifications do not depart from the scope of the present application.
FIG. 5 is an exemplary flow diagram of a method of machine learning model training, shown in some embodiments herein.
As shown in fig. 5, the machine learning model training method 500 may include:
in some embodiments, the machine learning model may include a first machine learning model and a second machine learning model. In some embodiments, different training samples can be selected according to actual needs, and the initial model is trained to obtain different machine learning models.
Step 510, training samples are obtained. In particular, this step 510 may be performed by the model training module 240.
In some embodiments, the training samples may include a number of historical target images and historical target object information. The historical target images are training data, and the corresponding historical target object information is used as training labels. In some embodiments, the historical target object information may include a historical target object position, a historical target object category, and/or a historical confidence level. The historical target image, historical target object position, historical target object category, and historical confidence level are similar to the target image, target object position, target object category, and confidence level, respectively; for details, refer to FIG. 3 and its related description, which are not repeated herein. In some embodiments, the model training module 240 may train the initial model to obtain the first machine learning model by taking the historical target images as training data and the corresponding historical target object information as training labels. In some embodiments, the first machine learning model may be used to determine the target object information in step 320.
In some embodiments, the training samples may include a number of historical target images and historical second radiation information. The historical target images are training data, and the corresponding historical second radiation information is used as training labels. In some embodiments, the historical second radiation information may include a historical bulb position, a historical detector position, a historical reference object position, and/or a historical reference object body thickness. The historical target image, historical bulb position, historical detector position, historical reference object position, and historical reference object body thickness are similar to the target image, bulb position, detector position, reference object position, and reference object body thickness, respectively; for details, refer to FIG. 3 and its related description, which are not repeated herein. In some embodiments, the model training module 240 may train the initial model to obtain the first machine learning model by taking the historical target images as training data and the corresponding historical second radiation information as training labels. In some embodiments, the first machine learning model may be used to determine the second radiation information in step 330.
In some embodiments, the training sample may further include a number of historical target images, historical target object information, and historical second radiation information. The historical target image is training data, and the corresponding historical target object information and the corresponding historical second radiation information are training labels (labels). In some embodiments, the model training module 240 may train the initial model to obtain the first machine learning model by using the historical target images as training data and the corresponding historical target object information and the historical second radiation information as training labels (labels). In some embodiments, the first machine learning model may be used to determine target object information and second radiation information in steps 320 and 330.
In some embodiments, the training samples may include a certain amount of historical radiation information and historical radiation region information. The historical radiation information is training data, and the corresponding historical radiation region information is used as training labels. In some embodiments, the historical radiation information may include historical bulb positions, historical detector positions, historical reference object body thicknesses, and/or historical radiation parameters. In some embodiments, the historical radiation region information may include historical radiation distances with a historical reference object as a base point. The historical bulb position, historical detector position, historical reference object position, historical reference object body thickness, historical radiation parameters, and historical radiation distance are similar to the bulb position, detector position, reference object position, reference object body thickness, radiation parameters, and radiation distance, respectively; for details, refer to FIG. 3 and its related description, which are not repeated herein. In some embodiments, the model training module 240 may train the initial model to obtain a second machine learning model by taking the historical radiation information as training data and the corresponding historical radiation region information as training labels. In some embodiments, the second machine learning model may be used to determine the radiation region information in step 330.
In some embodiments, step 510 may also include preprocessing the acquired training samples to conform to the requirements of the training. The preprocessing may include format conversion, normalization, identification, and the like.
In some embodiments, identification of the training samples may be performed by a human or computer program.
In some embodiments, model training module 240 may access information and/or data stored in storage device 150 via network 120 to obtain training samples. In some embodiments, the model training module 240 may obtain training samples through an interface. In some embodiments, the model training module 240 may also obtain the training samples in other manners, which is not limited in this specification.
Step 520, training the initial model based on the training samples and the labeling results to obtain a machine learning model. In particular, this step 520 may be performed by the model training module 240.
In some embodiments, the initial model may include a deep learning model, which may include a YoloV3 deep learning model, a deep belief network model, a VGG convolutional neural network, an OverFeat model, an R-CNN model, an SPP-Net model, a Fast R-CNN model, a Faster R-CNN model, an R-FCN model, a DSOD model, and the like. In some embodiments, the initial model may include a Convolutional Recurrent Neural Network (CRNN), a Convolutional Neural Network (CNN), a Deep Convolutional Neural Network (DCNN), a Recurrent Neural Network (RNN), or a Long Short-Term Memory (LSTM) model, among others.
In some embodiments, the training of the initial model may include: 1) dividing the sample data into a training set, a validation set, and a test set. The data may be randomly partitioned at a certain ratio, for example, 85% training set, 10% validation set, and 5% test set. 2) Inputting the sample data in the training set into the initial model for training; when the training meets a certain condition, for example, the number of training iterations reaches a preset value or the value of the loss function falls below a preset value, the training process may be stopped and a trained machine learning model is obtained. 3) Inputting the sample data in the validation set into the trained machine learning model for calculation to obtain output results. 4) Comparing the output results from 3) with the labels of the corresponding sample data (for example, historical target object information, historical radiation information, or historical radiation region information) to obtain a comparison result. In some embodiments, the comparison result may indicate whether an output result matches its label; matching may mean that the difference between the output result and the label is within 2%, otherwise the output result is regarded as not matching. If the comparison result meets the verification requirement (which may be set according to actual needs, for example, the output results of more than 95% of the sample data in the validation set match their labels), step 5) is performed for testing; otherwise, the model is deemed not to pass validation (e.g., the prediction accuracy is low), the parameters of the trained model may be adjusted, and step 2) is performed again based on the adjusted model. 5) Inputting the sample data in the test set into the trained model for calculation to obtain output results. 6) Comparing the output results of the sample data in the test set from 5) with the labels of the corresponding sample data, and judging whether the training result meets the requirement (which may be set according to actual needs, for example, if the output results of more than 98% of the sample data in the test set match their labels, the training result is considered to meet the requirement; otherwise, it does not). If the training result does not meet the requirement, the sample data may be re-prepared, or the training set, validation set, and test set may be re-divided, and training continues until the model passes the test.
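For illustration only, the following is a minimal sketch of the training procedure described above, applied to a regression model of the kind the second machine learning model might use. The 85%/10%/5% split, loss-threshold stopping condition, and 2% match criterion follow the example values in the text; the synthetic data and all other hyperparameters are illustrative assumptions.

```python
import torch
from torch import nn

torch.manual_seed(0)
features = torch.randn(1000, 8)                      # e.g. historical radiation information
labels = features.sum(dim=1, keepdim=True) * 3 + 50  # synthetic historical radiation distances

# 1) Randomly divide the sample data into training, validation, and test sets.
n = features.shape[0]
perm = torch.randperm(n)
n_train, n_val = int(0.85 * n), int(0.10 * n)
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]

# 2) Train the initial model until the loss falls below a preset value or the
#    number of training iterations reaches a preset value.
model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(features[train_idx]), labels[train_idx])
    loss.backward()
    optimizer.step()
    if loss.item() < 1.0:                            # preset loss threshold
        break

# 3)-6) Compare model outputs with labels; "matching" means the output is
#       within 2% of the label.
def match_rate(idx: torch.Tensor) -> float:
    with torch.no_grad():
        pred = model(features[idx])
    return ((pred - labels[idx]).abs() / labels[idx].abs() <= 0.02).float().mean().item()

validation_ok = match_rate(val_idx) > 0.95           # verification requirement
test_ok = match_rate(test_idx) > 0.98                # test requirement
```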
Various changes may be made to the above-described steps and implementation, such as dividing the training set, validation set, and test set by other methods or proportions, omitting some of the steps, adding other steps, etc.
In some embodiments, model training module 240 may access information and/or data stored in storage device 150 via network 120 to train an initial model based on training samples and labeling results to obtain the machine learning model.
It should be noted that the above description of the flow 500 is only for illustration and explanation, and does not limit the scope of application of the present application. Various modifications and changes to the flow 500 may occur to those skilled in the art under the guidance of the present application; such modifications and changes remain within the scope of the present application. For example, step 520 of the process 500 may be further subdivided into a model training step 520, a model validation step 530, a model testing step 540, and so on. As another example, the division ratio may be 90% for the training set, 7% for the validation set, and 3% for the test set.
The beneficial effects that may be brought by the embodiments of the present description include, but are not limited to: (1) when the radioactive medical device is used for medical imaging examination or surgical navigation, whether a target object is close to or located in a radiation area is determined according to a target image acquired in real time, so that the target object (such as medical staff) can be prompted in real time to stay away from the radiation area, thereby reducing radiation exposure and avoiding potential radiation risks; (2) by preparing a reference radiation information table in advance or by using a second machine learning model, the radiation area information can be determined more accurately, and whether the target object is close to or located in the radiation area can then be determined; the operation process is simple, convenient, and fast, and the accuracy is high.
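For illustration only, the following minimal sketch shows one way the radiation area decision summarized in item (2) above (and detailed in claims 6 and 7 below) may be carried out: the current radiation information is looked up in a reference radiation information table to obtain a radiation distance with the reference object as a base point, and the separation distance between the target object and the reference object is compared against it. The table entries, the kV/mA keys, the positions, and the distance threshold are hypothetical values introduced for this sketch.

```python
# A minimal sketch of the radiation-area decision: table lookup followed by a
# distance comparison. All numeric values are hypothetical.
import math

# Hypothetical reference radiation information table: maps current radiation
# parameters (tube voltage in kV, tube current in mA) to a radiation distance
# (in metres) with the reference object (e.g., the patient) as the base point.
REFERENCE_RADIATION_TABLE = {
    (80, 10): 1.5,
    (100, 20): 2.0,
    (120, 30): 2.5,
}

def radiation_distance(current_radiation_info):
    """Look up the radiation distance for the current radiation information."""
    return REFERENCE_RADIATION_TABLE[current_radiation_info]

def classify_target(target_pos, reference_pos, current_radiation_info,
                    distance_threshold=0.5):
    """Return 'inside', 'near' or 'outside' relative to the radiation area."""
    separation = math.dist(target_pos, reference_pos)
    r = radiation_distance(current_radiation_info)
    if separation < r:
        return "inside"   # target object is located in the radiation area
    if separation - r < distance_threshold:
        return "near"     # target object is close to the radiation area
    return "outside"

# Example: a staff member 2.3 m from the reference object at 100 kV / 20 mA.
print(classify_target((2.3, 0.0), (0.0, 0.0), (100, 20)))  # -> "near"
```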
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements and adaptations to the present description may occur to those skilled in the art, although not explicitly described herein. Such modifications, improvements and adaptations are proposed in the present specification and thus fall within the spirit and scope of the exemplary embodiments of the present specification.
Also, the specification uses specific words to describe embodiments of the specification. Reference throughout this specification to "one embodiment," "an embodiment," and/or "some embodiments" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the specification. Therefore, it is emphasized and should be appreciated that two or more references to "an embodiment," "one embodiment," or "an alternative embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, some features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Additionally, the order in which the elements and sequences of the process are recited in the specification, the use of alphanumeric characters, or other designations, is not intended to limit the order in which the processes and methods of the specification occur, unless otherwise specified in the claims. While various presently contemplated embodiments of the invention have been discussed in the foregoing disclosure by way of example, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments, but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented by hardware devices, they may also be implemented by software-only solutions, such as installing the described system on an existing server or mobile device.
Similarly, it should be noted that in the preceding description of embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as requiring more features than are expressly recited in each claim. Indeed, claimed subject matter may lie in less than all features of a single embodiment disclosed above.
Numerals describing the numbers of components, attributes, etc. are used in some embodiments; it should be understood that such numerals used in the description of the embodiments are, in some instances, modified by the terms "about", "approximately", or "substantially". Unless otherwise indicated, "about", "approximately", or "substantially" indicates that the stated number allows a variation of ±20%. Accordingly, in some embodiments, the numerical parameters used in the specification and claims are approximations that may vary depending upon the desired properties of the individual embodiments. In some embodiments, the numerical parameters should take into account the specified significant digits and employ a general digit-preserving approach. Notwithstanding that the numerical ranges and parameters setting forth the broad scope of some embodiments of the application are approximations, in the specific examples such numerical values are set forth as precisely as possible.
For each patent, patent application publication, and other material, such as articles, books, specifications, publications, documents, etc., cited in this specification, the entire contents of each are hereby incorporated by reference into this specification. Application history documents that are inconsistent with or conflict with the contents of this specification are excluded, as are documents (currently or later attached to this specification) that limit the broadest scope of the claims of this specification. It is to be understood that if the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent with or contrary to those in this specification, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.

Claims (11)

1. A method for determining the region where a target object is located is characterized by comprising the following steps:
acquiring a target image, wherein the target image comprises at least one target object and radioactive medical equipment;
determining target object information based on the target image; wherein the target object information includes a target object location;
determining radiation region information of the radioactive medical device based on the target image;
determining whether a target object is close to or located in a radiation area based on the target object information and the radiation area information; wherein
the determining of radiation region information of the radioactive medical device and/or the determining of target object information is at least partially achieved by machine learning.
2. The method of claim 1, further comprising:
making a prompt in response to the target object being close to or located in the radiation area.
3. The method of claim 1, wherein the target image comprises a two-dimensional image or a three-dimensional image.
4. The method of claim 1, wherein determining target object information based on the target image comprises:
processing the target image by utilizing a first machine learning model to determine the target object information; the target object information further includes a target object class, a target object location, and/or a confidence level indicating a probability that the target object is present at the target object location.
5. The method of claim 1, wherein the determining radiation region information based on the target image comprises:
acquiring radiation parameters as first radiation information;
processing the target image by using a first machine learning model to determine second radiation information; wherein the second radiation information at least comprises a bulb position, a detector position, a reference object position and/or a reference object body thickness; wherein the first radiation information and the second radiation information are used as current radiation information;
determining the radiation area information based on the current radiation information.
6. The method of claim 5, wherein the determining the radiation region information based on the current radiation information comprises:
acquiring a reference radiation information table, wherein the reference radiation information table reflects the mapping relation between the current radiation information and the radiation distance with a reference object as a base point;
determining the radiation area information based on the current radiation information and the reference radiation information table; wherein the radiation area information includes a radiation distance with the reference object as a base point.
7. The method of claim 6, wherein determining whether a target object is near or within a radiation zone based on the target object information and the radiation zone information comprises:
determining a separation distance between the target object and the reference object based on the target object position and the reference object position;
when the separation distance is smaller than the radiation distance, determining that the target object is located in the radiation area; or when the separation distance is larger than the radiation distance and the difference between the separation distance and the radiation distance is smaller than a distance threshold, determining that the target object is close to the radiation area.
8. The method of claim 5, wherein the determining the radiation region information based on the current radiation information comprises:
processing the current radiation information by using a second machine learning model to determine the radiation area information.
9. A system for determining a region in which a target object is located, the system comprising:
an acquisition module for acquiring a target image, the target image including at least one target object and a radioactive medical device;
a determination module for:
determining target object information based on the target image; wherein the target object information includes a target object location;
determining radiation region information of the radioactive medical device based on the target image;
determining whether a target object is close to or located in a radiation area based on the target object information and the radiation area information; wherein
the determining of radiation region information of the radioactive medical device and/or the determining of target object information is at least partially achieved by machine learning.
10. An apparatus for determining a region in which a target object is located, the apparatus comprising at least one processor and at least one memory device, the memory device being configured to store instructions that, when executed by the at least one processor, implement the method of any one of claims 1 to 8.
11. A computer-readable storage medium storing computer instructions, wherein when the computer instructions in the storage medium are read by a computer, the computer performs the method of any one of claims 1 to 8.
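For illustration only (not part of the claims), the following is a minimal sketch of the kind of structured output that the first machine learning model recited in claims 4 and 5 may produce from a target image: a target object class, a target object location, a confidence level, and second radiation information such as a bulb position, a detector position, and a reference object position and body thickness. The data structures, field names, numeric values, and the stub detector are assumptions introduced for this sketch; the claims do not prescribe a concrete implementation.

```python
# A minimal sketch of the outputs described in claims 4 and 5, with a stub
# standing in for the trained detection network (e.g., a YoloV3-style model).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    category: str                             # target object class, e.g. "medical_staff"
    box: Tuple[float, float, float, float]    # target object location (x0, y0, x1, y1) in pixels
    confidence: float                         # probability the object is present at this location

@dataclass
class SecondRadiationInfo:
    bulb_position: Tuple[float, float]            # X-ray tube (bulb) position in the image
    detector_position: Tuple[float, float]
    reference_object_position: Tuple[float, float]
    reference_object_thickness: float             # body thickness of the reference object

def run_first_model(target_image) -> Tuple[List[Detection], SecondRadiationInfo]:
    """Stub inference; a real implementation would run the trained model on the image."""
    detections = [Detection("medical_staff", (120.0, 80.0, 220.0, 400.0), 0.93)]
    radiation = SecondRadiationInfo((50.0, 10.0), (50.0, 500.0), (300.0, 260.0), 22.0)
    return detections, radiation

detections, radiation = run_first_model(target_image=None)
print(detections[0].category, detections[0].confidence, radiation.bulb_position)
```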
CN202111649411.XA 2021-12-29 2021-12-29 Method and system for determining region of target object Pending CN114299547A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202111649411.XA CN114299547A (en) 2021-12-29 2021-12-29 Method and system for determining region of target object
PCT/CN2022/143012 WO2023125720A1 (en) 2021-12-29 2022-12-28 Systems and methods for medical imaging
EP22914970.3A EP4330935A1 (en) 2021-12-29 2022-12-28 Systems and methods for medical imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111649411.XA CN114299547A (en) 2021-12-29 2021-12-29 Method and system for determining region of target object

Publications (1)

Publication Number Publication Date
CN114299547A true CN114299547A (en) 2022-04-08

Family

ID=80973044

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111649411.XA Pending CN114299547A (en) 2021-12-29 2021-12-29 Method and system for determining region of target object

Country Status (1)

Country Link
CN (1) CN114299547A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023125720A1 (en) * 2021-12-29 2023-07-06 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for medical imaging
CN115054829A (en) * 2022-08-19 2022-09-16 江苏容正医药科技有限公司 Intelligent plasma brush system, implementation method and device thereof, and storage medium
CN115054829B (en) * 2022-08-19 2022-12-02 江苏容正医药科技有限公司 Intelligent plasma brush system, implementation method and device thereof, and storage medium

Similar Documents

Publication Publication Date Title
CN109035187B (en) Medical image labeling method and device
CN109937012B (en) Selecting acquisition parameters for an imaging system
US9684961B2 (en) Scan region determining apparatus
US8311296B2 (en) Voting in mammography processing
KR102560911B1 (en) Image processing apparatus, image processing method, and storage medium
US11715203B2 (en) Image processing method and apparatus, server, and storage medium
CN114299547A (en) Method and system for determining region of target object
US20120053446A1 (en) Voting in image processing
CN111488872B (en) Image detection method, image detection device, computer equipment and storage medium
US11676305B2 (en) Systems and methods for automated calibration
KR102422871B1 (en) Systems and methods for digital radiography
EP3203914A1 (en) Radiation dose applied to different anatomical stages
CN111870268A (en) Method and system for determining target position information of beam limiting device
CN113116365A (en) Image acquisition method, device and system and storage medium
CN109087357B (en) Scanning positioning method and device, computer equipment and computer readable storage medium
US20220054862A1 (en) Medical image processing device, storage medium, medical device, and treatment system
CN111144449A (en) Image processing method, image processing device, storage medium and electronic equipment
CN112200780B (en) Bone tissue positioning method, device, computer equipment and storage medium
CN113538372A (en) Three-dimensional target detection method and device, computer equipment and storage medium
CN113284160A (en) Method, device and equipment for identifying operation navigation mark bead body
KR20200116842A (en) Neural network training method for utilizing differences between a plurality of images, and method thereof
US20240062367A1 (en) Detecting abnormalities in an x-ray image
CN114067994A (en) Target part orientation marking method and system
EP4169450A1 (en) Method and system for determining parameter related to medical operation
WO2023023956A1 (en) Method and apparatus for visualization of touch panel to object distance in x-ray imaging

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination