CN117653156A - Exposure feedback area display method and device, medical imaging equipment and storage medium
- Publication number: CN117653156A
- Application number: CN202311694778.2A
- Authority: CN (China)
- Prior art keywords: exposure, distance, position data, detector, region
- Legal status: Pending (assumed; Google has not performed a legal analysis)
Classifications
- A61B 6/542 - Control of apparatus or devices for radiation diagnosis involving control of exposure
- A61B 6/461 - Arrangements for interfacing with the operator or the patient; displaying means of special interest
Abstract
The invention discloses an exposure feedback area display method and apparatus, a medical imaging device, and a storage medium. When an exposure scene image containing a target exposure object is displayed, since the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, an exposure feedback identifier can be displayed in the exposure scene image to indicate the projection of the exposure feedback area onto the plane of the target exposure object's body surface, and whether to stop exposure is determined from the feedback signal in the exposure feedback area. On the one hand, the method introduces the body characteristics of the target exposure object into the determination of the exposure feedback area and adapts to differences in patient body type, making exposure more accurate, reducing positioning difficulty, and improving the usability of the device. On the other hand, once the exposure feedback area is determined, displaying the exposure feedback identifier in the exposure scene image better guides the user.
Description
Technical Field
The present invention relates to the field of automatic exposure control technology, and in particular, to a method and apparatus for displaying an exposure feedback area, a medical imaging device, and a storage medium.
Background
With the development of technology, and in pursuit of higher-quality images at smaller radiation doses, more and more X-ray medical diagnostic apparatuses employ automatic exposure control (AEC, Automatic Exposure Control) technology. In the related art, the position of the region of interest is estimated from the relative position of the target exposure object and the ionization chamber fields, and a corresponding ionization chamber field is selected to determine the exposure feedback area for automatic exposure control.
However, the related art relies on the operator's experience to determine the exposure feedback area. It is therefore desirable to propose a new method of determining the exposure feedback area.
Disclosure of Invention
The embodiments of the present specification aim to solve, at least to some extent, one of the technical problems in the related art. To this end, the present embodiments provide an exposure feedback area display method and apparatus, a medical imaging device, and a storage medium.
An embodiment of the present specification provides an exposure feedback area display method, comprising the following steps:
displaying an exposure scene image, wherein the exposure scene image contains a target exposure object, a region of interest of the target exposure object corresponds to an exposure feedback area on the detector, and a feedback signal in the exposure feedback area is used to determine whether to stop exposure;
displaying an exposure feedback identifier in the exposure scene image, wherein the exposure feedback identifier indicates the projection of the exposure feedback area onto a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
In one embodiment, the method further comprises:
displaying at least one of a detector imaging range projection identifier, a region-of-interest identifier, and an irradiation field identifier in the exposure scene image; wherein the detector imaging range projection identifier indicates the projection of the detector onto the target plane, the region-of-interest identifier indicates the distribution of the region of interest, and the irradiation field identifier indicates the irradiation field range determined by the beam limiter.
In one embodiment, the exposure feedback area is determined by:
acquiring first position data corresponding to the region of interest, determining second position data corresponding to the projection of the detector onto the target plane, and determining the exposure feedback area corresponding to the region of interest on the detector according to the first position data and the second position data; or
in response to a selection operation on a region of interest, determining the exposure feedback area corresponding to the region of interest on the detector.
In one embodiment, there is a first distance between the body surface of the target exposure object and the radiation source, and the detector is preset with specified feature points, each corresponding to a distance transformation function; the determining of the second position data corresponding to the projection of the detector onto the target plane comprises:
substituting the first distance into the distance transformation function to obtain projection position data of the detector on the target plane;
and performing geometric transformation based on the projection position data to obtain the second position data.
In one embodiment, the distance transformation function is determined by:
acquiring a plurality of scene sample images, each obtained by performing perspective transformation on a captured scene image;
determining third position data of the specified feature points in the scene sample images;
and performing fitting processing based on the third position data and the source image distance (SID) corresponding to each scene sample image to obtain the distance transformation function.
In one embodiment, there is a first distance between the body surface of the target exposure object and the radiation source, and a second distance between the radiation source and the detector; the determining, according to the first position data and the second position data, of the exposure feedback area corresponding to the region of interest on the detector comprises:
and determining an exposure feedback area corresponding to the region of interest on the detector according to the first distance, the second distance, the first position data and the second position data.
In one embodiment, the determining the exposure feedback area corresponding to the region of interest on the detector according to the first distance, the second distance, the first position data and the second position data includes:
performing cone beam effect correction on the first position data according to the second position data, and determining corrected position data of the region of interest;
and carrying out coordinate transformation on the corrected position data according to the first distance and the second distance to obtain an exposure feedback area corresponding to the region of interest on the detector.
In one embodiment, the acquiring the first position data of the region of interest includes:
capturing the current exposure scene to obtain an initial scene image;
performing perspective transformation on the initial scene image using a transformation matrix to obtain a corrected scene image serving as the exposure scene image;
and segmenting the region of interest from the exposure scene image to obtain the first position data.
An embodiment of the present specification provides an exposure feedback area display device, comprising:
a scene image display module for displaying an exposure scene image, wherein the exposure scene image contains a target exposure object, a region of interest of the target exposure object corresponds to an exposure feedback area on the detector, and a feedback signal in the exposure feedback area is used to determine whether to stop exposure;
a feedback identifier display module for displaying an exposure feedback identifier in the exposure scene image, wherein the exposure feedback identifier indicates the projection of the exposure feedback area onto a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
An embodiment of the present specification provides a medical imaging apparatus comprising a memory and one or more processors communicatively coupled to the memory, the memory storing instructions executable by the one or more processors to cause the one or more processors to implement the steps of the method of any of the embodiments described above.
An embodiment of the present specification provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the method of any of the embodiments described above.
An embodiment of the present specification provides a computer program product comprising instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method of any of the embodiments described above.
In the above embodiments, when an exposure scene image containing a target exposure object is displayed, since the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, an exposure feedback identifier may be displayed in the exposure scene image to indicate the projection of the exposure feedback area onto the plane of the target exposure object's body surface, and whether to stop exposure may be determined from the feedback signal in the exposure feedback area. On the one hand, the method introduces the body characteristics of the target exposure object into the determination of the exposure feedback area and adapts to differences in patient body type, making exposure more accurate, reducing positioning difficulty, and improving the usability of the device. On the other hand, once the exposure feedback area is determined, displaying the exposure feedback identifier in the exposure scene image better guides the user.
Drawings
FIG. 1a is a schematic flow chart of an exposure feedback area display method according to an embodiment of the present disclosure;
FIG. 1b is a schematic diagram of a conventional ionization chamber provided in an embodiment of the present disclosure;
FIG. 1c is a schematic diagram of a detector exposure feedback unit according to an embodiment of the present disclosure;
FIG. 2a is a schematic diagram showing a detector imaging range projection identifier, a region-of-interest identifier, and an irradiation field identifier in an exposure scene image according to an embodiment of the present disclosure;
FIG. 2b is a block diagram of an automatic exposure control system according to an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of obtaining second position data according to an embodiment of the present disclosure;
FIG. 4 is a schematic flow chart of obtaining a distance transformation function according to an embodiment of the present disclosure;
FIG. 5 is a schematic flow chart of obtaining an exposure feedback area corresponding to a region of interest on a detector according to an embodiment of the present disclosure;
FIG. 6a is a schematic flow chart of obtaining first position data according to an embodiment of the present disclosure;
FIG. 6b is an image captured with no object placed on the detector surface, provided in an embodiment of the present disclosure;
FIG. 6c is an image captured with an object placed on the detector surface, provided in an embodiment of the present disclosure;
FIG. 7 is a flowchart of an exposure feedback area display method according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of an exposure feedback area display device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
The purpose of automatic exposure control (AEC) is to obtain stable, reliable, high-quality images. In the related art, automatic exposure control achieves this by cutting off the X-rays once the ionization chamber feedback signal indicates that a preset irradiation dose has been reached. During automatic exposure, the ionization chamber continuously accumulates X-ray signals, converts them into electrical signals, and feeds them back to the automatic exposure control module. When the automatic exposure control module detects that the electrical signal exceeds a preset condition, the X-rays are cut off. The detector-feedback-based method works on a similar principle but requires no physical ionization chamber: detector pixel values serve as the feedback signal, and exposure is stopped when the automatic exposure control module detects that the feedback signal has reached a target value.
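To make the cutoff logic concrete, the following is a minimal sketch of a detector-feedback AEC loop, not taken from the embodiment; the signal source read_feedback_signal and the high-voltage interface cut_off_high_voltage are hypothetical placeholders for device-specific interfaces.

```python
# Minimal sketch of a detector-feedback AEC cutoff loop (illustrative only;
# all device interfaces are hypothetical placeholders).
import time

def run_auto_exposure(read_feedback_signal, cut_off_high_voltage,
                      target_value: float, poll_interval_s: float = 0.001) -> float:
    """Accumulate the feedback signal from the exposure feedback area and
    cut off the high voltage once it reaches the target value."""
    accumulated = 0.0
    while accumulated < target_value:
        # Detector pixel values over the exposure feedback area act as the
        # feedback signal, analogous to an ionization chamber reading.
        accumulated += read_feedback_signal()
        time.sleep(poll_interval_s)
    cut_off_high_voltage()  # stop the exposure
    return accumulated
```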
In the related art, three-field or five-field ionization chambers may be used in the detector cassette. Silk-screen markings may be printed on the surface of the detector housing, positioned to coincide with the ionization chamber installed inside. Before image acquisition, an operator predicts the position of the region of interest from the relative position of the target exposure object and the ionization chamber fields, selects the corresponding ionization chamber field in software or on an operation terminal to determine the exposure feedback area, and then performs the exposure.
However, in the related art, because the fields are occluded by the target exposure object and their positions cannot be seen, the operator must estimate the exposure feedback area empirically and then adjust the positioning so that the exposure feedback area coincides with the region of interest; only then can accurate exposure be ensured. For radiography such as chest or lumbar spine imaging, an inappropriate exposure feedback area produces relatively large deviations in image quality.
In addition, because the signal feedback areas of the ionization chamber are fixed physical positions designed in advance, applicability is limited. For example, in pediatric chest radiography, or for small anatomy such as hands and feet, the field coverage is too large, so the signal feedback area of the selected ionization chamber includes air regions outside the human tissue; exposure is then cut off prematurely and the image is underexposed.
On this basis, the present embodiment provides an exposure feedback area display method. When an exposure scene image containing a target exposure object is displayed, since the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, an exposure feedback identifier can be displayed in the exposure scene image to indicate the projection of the exposure feedback area onto the plane of the target exposure object's body surface, and whether to stop exposure is determined from the feedback signal in the exposure feedback area. On the one hand, the method introduces the body characteristics of the target exposure object into the determination of the exposure feedback area and adapts to differences in patient body type, making exposure more accurate, reducing positioning difficulty, and improving the usability of the device. On the other hand, once the exposure feedback area is determined, the exposure feedback identifier can be displayed in the exposure scene image to better guide the user.
Referring to FIG. 1a, the exposure feedback area display method according to an embodiment of the present disclosure may include the following steps:
S110, displaying an exposure scene image.
The exposure scene image contains a target exposure object; a region of interest of the target exposure object corresponds to an exposure feedback area on the detector, and a feedback signal in the exposure feedback area is used to determine whether to stop exposure. An exposure scene image refers to an image captured by an image acquisition device or sensor in photographic, imaging, or vision applications, containing information about the object to be exposed and the brightness distribution of the surrounding environment. Such images can be used to analyze the exposure of a target object or scene and its appearance under different lighting conditions. The target exposure object is the primary subject or region of focus in the exposure scene image. The region of interest is a region of particular importance in the image: typically the main subject, or a portion requiring special attention and exposure optimization. The exposure feedback area is a specific area set on the detector for monitoring radiation exposure intensity; the feedback signal is generated from data such as the radiation intensity within the exposure feedback area.
Specifically, when the region of interest of the target exposure object enters the region detectable by the medical imaging device, an exposure scene image is captured by the image acquisition device and displayed on the interface. The captured exposure scene image may also be stored on the computer device, from which the desired exposure scene image can be read and displayed on the interface when needed. In still other embodiments, after the scene image is captured by the image acquisition device, it may be subjected to perspective transformation to obtain a corrected exposure scene image, which is then displayed on the interface.
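As an illustrative sketch (not part of the claimed method), such a perspective correction can be expressed as a homography; the corner correspondences and output size below are hypothetical values that would in practice come from camera-to-detector calibration.

```python
# Sketch of the perspective correction of a captured scene image using
# OpenCV; the point correspondences are hypothetical calibration values.
import cv2
import numpy as np

def correct_scene_image(initial_image: np.ndarray, src_pts: np.ndarray,
                        dst_pts: np.ndarray, out_size: tuple) -> np.ndarray:
    """Warp the initial scene image into a corrected (fronto-parallel) view."""
    matrix = cv2.getPerspectiveTransform(src_pts.astype(np.float32),
                                         dst_pts.astype(np.float32))
    return cv2.warpPerspective(initial_image, matrix, out_size)

# Hypothetical usage: map the detector's corners as seen by the camera
# onto an axis-aligned rectangle in the corrected exposure scene image.
scene = cv2.imread("scene.png")  # placeholder path
src = np.array([[102, 80], [95, 620], [870, 90], [880, 610]], dtype=np.float32)
dst = np.array([[0, 0], [0, 600], [800, 0], [800, 600]], dtype=np.float32)
corrected = correct_scene_image(scene, src, dst, (800, 600))
```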
For example, referring to FIG. 1b, FIG. 1b shows the layout of a conventional ionization chamber, which is divided into three exposure feedback areas. In operation, a single exposure feedback area can be selected, or several can be selected in combination, yielding multiple combination modes.
Referring to FIG. 1c, a gridding process may be employed, with cells at a fixed pitch such as 1 cm x 1 cm. Feedback units can be selected individually or checked in combination, which preserves readout efficiency while providing enough granularity to better match different imaging targets. A gridded detector allows the exposure feedback area to be determined more flexibly; in the limit, each pixel within the detector imaging range can serve as a feedback sensor.
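A sketch of such a gridded feedback layout follows; the detector dimensions and the selected block are assumptions for illustration only.

```python
# Sketch of a gridded exposure feedback layout: the imaging area is split
# into 1 cm x 1 cm feedback units, individually or jointly selectable.
import numpy as np

GRID_H, GRID_W = 43, 43  # assumed 43 cm x 43 cm detector, 1 cm pitch
selected = np.zeros((GRID_H, GRID_W), dtype=bool)

def select_units(rows: slice, cols: slice) -> None:
    """Check (select) a rectangular block of feedback units."""
    selected[rows, cols] = True

def feedback_signal(binned_frame: np.ndarray) -> float:
    """Mean pixel value over the selected units of a 1-cm-binned frame."""
    return float(binned_frame[selected].mean())

# e.g. select a 10 x 12 unit block roughly covering a region of interest
select_units(slice(8, 18), slice(15, 27))
```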
S120, displaying an exposure feedback identifier in the exposure scene image.
The exposure feedback identifier indicates the projection of the exposure feedback area onto a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
Specifically, the purpose of displaying the exposure feedback area is to monitor exposure in real time, guide exposure parameter adjustment, assist image analysis and processing, and improve working efficiency, thereby obtaining better image quality and meeting specific application requirements. The exposure feedback identifier can be displayed on a software interface or operation terminal by overlaying layers or a transparent cover, so that the user clearly sees the position and extent of the exposure feedback area while previewing or editing the image in real time, enabling better exposure control and adjustment.
In some embodiments, a region-of-interest recognition and segmentation model segments the region of interest from the exposure scene image, directly yielding the first position data corresponding to the region of interest. Alternatively, a segmentation model can first segment the human body parts in the exposure scene image, after which the region of interest is determined from the segmentation result in combination with APR (anatomically programmed radiography) projection experience and through human-computer interaction, yielding the corresponding first position data. Finally, the detection frame corresponding to the region of interest may be overlaid on the exposure scene image as a layer, so that the detection frame is displayed in the exposure scene image. In this way, the detection frame corresponding to the region of interest serves as the exposure feedback identifier.
In other embodiments, as the X-rays pass through the human body and are received by the detector, the detector records the position information of a plurality of human organs in its own coordinate system. This position information is then transformed into the camera coordinate system and geometrically transformed to project the organs onto the target plane, producing a plurality of projection areas. A region of interest can then be selected from these projection areas through human-computer interaction, and the selected region of interest serves as the exposure feedback identifier.
In the above embodiment, when an exposure scene image containing a target exposure object is displayed, since the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, an exposure feedback identifier may be displayed in the exposure scene image to indicate the projection of the exposure feedback area onto the plane of the target exposure object's body surface, and whether to stop exposure may be determined from the feedback signal in the exposure feedback area. On the one hand, the method introduces the body characteristics of the target exposure object into the determination of the exposure feedback area and adapts to differences in patient body type, making exposure more accurate, reducing positioning difficulty, and improving the usability of the device. On the other hand, once the exposure feedback area is determined, displaying the exposure feedback identifier in the exposure scene image better guides the user.
In some embodiments, the method may further comprise: displaying at least one of a detector imaging range projection identifier, a region-of-interest identifier, and an irradiation field identifier in the exposure scene image.
The detector imaging range projection identifier indicates the projection of the detector onto the target plane. The region-of-interest identifier indicates the distribution of the region of interest. The irradiation field identifier indicates the irradiation field range determined by the beam limiter.
Specifically, when the first position data of the region of interest in the camera coordinate system is known, the detector imaging range projection identifier can be compared with the region of interest to determine whether the region of interest falls within the detector's effective imaging range on the plane of the target exposure object's body surface. The detector imaging range projection identifier can therefore be displayed in the exposure scene image.
The purpose of displaying the region-of-interest identifier is to help the user accurately locate the region of interest in the image for better exposure. While the identifier is displayed, the user can check whether it matches the intended location and, if not, manually reselect the region of interest to redefine the identifier. It is therefore useful to display the region-of-interest identifier in the exposure scene image.
To guide the user further, displaying the irradiation field identifier during positioning helps the user place the target exposure object correctly within the beam. By displaying the irradiation field identifier, the user can clearly see the irradiation range of the rays and adjust the setup before shooting, avoiding selecting a region-of-interest identifier in an area not irradiated by X-rays, ensuring suitable irradiation of the subject or scene, and preventing underexposure or overexposure.
In some embodiments, the software interface or operation terminal can display the detector imaging range projection identifier, the region-of-interest identifier, and the irradiation field identifier as superimposed layers, marked with different colors, shapes, or solid and dashed lines so that the user can clearly distinguish them.
Illustratively, referring to FIG. 2a, the detector imaging range projection identifier includes the detector center point location 202, markers at the four detector corners, and a rectangular box 204 bounded in the up, down, left, and right directions. The region-of-interest identifier may be the rectangular box 206 in FIG. 2a, and the irradiation field identifier may be the rectangular box 208 in FIG. 2a.
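A sketch of overlaying these identifiers with distinct colors on the exposure scene image follows; the boxes, colors, and coordinates are illustrative, not taken from FIG. 2a.

```python
# Sketch of layer-style identifier overlays on the exposure scene image
# (boxes and colors are hypothetical examples).
import cv2
import numpy as np

def draw_identifiers(scene: np.ndarray, detector_box, roi_box, field_box,
                     detector_center) -> np.ndarray:
    out = scene.copy()
    cv2.rectangle(out, *detector_box, (0, 255, 0), 2)  # detector imaging range projection
    cv2.rectangle(out, *roi_box, (0, 0, 255), 2)       # region-of-interest identifier
    cv2.rectangle(out, *field_box, (255, 0, 0), 1)     # irradiation field identifier
    cv2.drawMarker(out, detector_center, (0, 255, 0),
                   markerType=cv2.MARKER_CROSS, markerSize=12)  # center point
    return out

scene = np.zeros((600, 800, 3), dtype=np.uint8)  # placeholder image
overlay = draw_identifiers(scene, ((50, 40), (750, 560)),
                           ((300, 200), (500, 420)),
                           ((100, 80), (700, 520)), (400, 300))
```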
In this embodiment, displaying at least one of the detector imaging range projection identifier, the region-of-interest identifier, and the irradiation field identifier in the exposure scene image helps the user understand the positional relationships among these identifiers and the exposure feedback identifier, and thus better control the shooting or exposure range.
In some embodiments, the exposure feedback area is determined by acquiring first position data corresponding to the region of interest, determining second position data corresponding to the projection of the detector onto the target plane, and determining the exposure feedback area corresponding to the region of interest on the detector according to the first position data and the second position data.
Specifically, the current exposure scene is captured by an image acquisition device to obtain an exposure scene image. The exposure scene image is then analyzed to obtain the position data of the region of interest within it, which serves as the first position data corresponding to the region of interest.
The plane of the body surface of the target exposure object is taken as the target plane, and the detector is projected onto it to obtain the detection projection area corresponding to the detector on the target plane. For example, feature points are determined on the edges of the detector and projected onto the target plane, and the detection projection area is derived from these projected edge points. Since the first position data is expressed in the coordinate system of the image acquisition device, the position data of the detection projection area must be expressed in the same coordinate system; it is therefore determined in the coordinate system of the image acquisition device and taken as the second position data corresponding to the projection of the detector onto the target plane.
The first position data and the second position data are then related proportionally using triangulation or another geometric calculation, and a coordinate conversion is performed to obtain the position and size of the region of interest in the detector coordinate system. The region corresponding to the region of interest on the detector, namely the exposure feedback area, is determined from this position and size. A sketch of such a mapping is given below.
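The sketch assumes a simple rectangle-to-rectangle correspondence between the detector's projection on the target plane (second position data) and the physical detector; the names and sizes are illustrative.

```python
# Sketch of mapping the region of interest (first position data, camera
# pixels on the target plane) into detector coordinates using the
# projected detector rectangle (second position data) as reference.
import numpy as np

def roi_to_detector(roi_box_cam: np.ndarray, detector_box_cam: np.ndarray,
                    detector_size_mm: tuple) -> np.ndarray:
    """Both boxes are [[x0, y0], [x1, y1]] in camera pixels; returns the
    ROI box in millimetres on the detector surface."""
    (dx0, dy0), (dx1, dy1) = detector_box_cam
    scale = np.array([detector_size_mm[0] / (dx1 - dx0),
                      detector_size_mm[1] / (dy1 - dy0)])
    return (roi_box_cam - np.array([dx0, dy0])) * scale

roi_mm = roi_to_detector(np.array([[300.0, 200.0], [500.0, 420.0]]),
                         np.array([[50.0, 40.0], [750.0, 560.0]]),
                         (430.0, 430.0))  # assumed 43 cm detector
```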
In some embodiments, the exposure feedback area is determined by: in response to a selection operation on the region of interest, determining the corresponding exposure feedback area of the region of interest on the detector.
Specifically, the region of interest is first determined according to the specific requirements and application scenario. Its position information is then determined in response to a selection operation on the region of interest. After correcting this position information for the cone beam effect and performing a coordinate conversion, the exposure feedback area corresponding to the region of interest on the detector can be determined.
In some embodiments, as the X-rays pass through the human body and are received by the detector, the detector records the position information of a plurality of human organs in its own coordinate system. This position information is then transformed into the camera coordinate system and geometrically transformed to project the organs onto the target plane, producing a plurality of projection areas, from which a region of interest can be selected through human-computer interaction.
In other embodiments, the operator may adjust the exposure feedback area automatically identified by the system through human-computer interaction. If the operator finds the current automatic identification unsatisfactory, the currently selected exposure feedback units can be added to or deselected by mouse click or touch screen. A plurality of projection areas on the target plane are then determined from the organ position information recorded by the detector in its coordinate system, and a region of interest is selected from them through human-computer interaction.
In still other embodiments, a segmentation model may segment the human body parts in the exposure scene image, after which the region of interest is selected from the segmentation result in combination with APR projection experience and through human-computer interaction.
It should be noted that, referring to FIG. 2b, the automatic exposure control system may consist of a high voltage generator, an exposure control signal acquisition module (such as an ionization chamber or flat panel detector), a control module, and a target region-of-interest setting module. After the target region-of-interest setting module determines the exposure feedback area of the region of interest on the detector from the exposure scene information, the high voltage generator supplies the high voltage signal required to drive the exposure. The exposure control signal acquisition module acquires the exposure feedback signal from the exposure feedback area in real time and feeds it back to the control module. Because the feedback signal read from the exposure feedback area allows a more accurate decision on cutting off the high voltage, the control module reads the feedback signal in real time and, when it reaches the target condition, cuts off the high voltage to stop the exposure. The control module may be deployed inside the detector or as a separate device. The target condition is a preset set of parameters or thresholds used to judge whether the exposure feedback signal has reached the desired state; it depends on the specific application and requirements. For example, the target condition may vary with the examination site, since different sites have different tissue densities and contrasts and may require different exposure doses, and the sizes of the exposure feedback areas for different sites also differ. The target condition is therefore adjusted according to the size, position, and area of the detector feedback area to ensure a proper exposure level over the whole region of interest. Flexibly setting and adjusting target conditions for different examination sites and differently sized feedback areas ensures that the system performs effective exposure control in all situations and obtains high-quality imaging results.
In the above embodiments, determining the exposure feedback area corresponding to the region of interest on the detector in several different ways increases the robustness of the system: even if one approach fails under particular conditions, the others can still guarantee a reliable exposure feedback area.
In some embodiments, referring to FIG. 3, there is a first distance between the body surface of the target exposure object and the radiation source, and the detector is preset with specified feature points, each corresponding to a distance transformation function. Determining the second position data corresponding to the projection of the detector onto the target plane may comprise the following steps:
s310, substituting the first distance into a distance transformation function to obtain projection position data of the detector on the target plane.
S320, performing geometric transformation based on the projection position data to obtain second position data.
The projection position data is the position of the detector's projection on the target plane, obtained by applying the distance transformation function to the first distance. The geometric transformation changes the shape, size, orientation, or position of the projection position data to generate new position data.
Specifically, the first distance is substituted into the required distance transformation function to calculate the projection position data of the detector on the target plane. Depending on the desired effect, a suitable geometric transformation, such as a translation, is selected and applied to the calculated projection position data to obtain the second position data.
In some embodiments, the first distance SOD between the body surface of the target exposure object and the radiation source, combined with device position information such as the second distance SID between the radiation source and the detector, yields the thickness of the imaged subject. For each specified feature point of the detector with sequence number i (1: upper left corner, 2: lower left corner, 3: upper right corner, 4: lower right corner, 5: center point), substituting the first distance SOD into the corresponding distance transformation function Fi gives Fi(SOD), i.e., the projection position data of that feature point on the target plane: [Xp1, Yp1], [Xp2, Yp2], [Xp3, Yp3], [Xp4, Yp4], and [Xpc, Ypc] respectively. The projection position data is then geometrically transformed by the following formulas to correct for the effect of cone beam projection, yielding the second position data.
Xi = SOD/SID * (Xpi - Xpc) + Xpc
Yi = SOD/SID * (Ypi - Ypc) + Ypc
where SID is the second distance between the radiation source and the detector, and SOD is the first distance between the body surface of the target exposure object and the radiation source. Xpi and Ypi are the abscissa and ordinate of a detector vertex in the camera coordinate system at the SOD distance, and Xpc and Ypc are the abscissa and ordinate of the detector center point in the camera coordinate system at the SOD distance. Xi and Yi are the abscissa and ordinate of the detector vertex in the camera coordinate system at the SOD distance after accounting for the influence of the cone beam. Here i is the coordinate point index, a positive integer from 1 to 4 denoting, in order, the upper left, lower left, upper right, and lower right corners.
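A sketch of this correction as code follows, directly implementing Xi = SOD/SID * (Xpi - Xpc) + Xpc (and likewise for Y); the coordinate values are hypothetical.

```python
# Sketch of the cone beam correction of the projected detector vertices.
import numpy as np

def cone_beam_correct(vertices: np.ndarray, center: np.ndarray,
                      sod: float, sid: float) -> np.ndarray:
    """Shrink the projected vertices toward the projected center point:
    Xi = SOD/SID * (Xpi - Xpc) + Xpc, Yi = SOD/SID * (Ypi - Ypc) + Ypc."""
    return sod / sid * (vertices - center) + center

verts = np.array([[120.0, 90.0], [118.0, 610.0],   # [Xp1, Yp1], [Xp2, Yp2]
                  [860.0, 92.0], [864.0, 612.0]])  # [Xp3, Yp3], [Xp4, Yp4]
center = np.array([490.0, 351.0])                  # [Xpc, Ypc]
second_position_data = cone_beam_correct(verts, center, sod=1500.0, sid=1800.0)
```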
In other embodiments, after the coordinates of one specified feature point are determined via its transformation function, the coordinates of the other specified feature points may be determined from their positional relationships to that point.
For example, the first distance SOD and the sequence number 1 of the upper left corner specified feature point may be substituted into the distance transformation function to obtain F1(SOD), giving the projection position data [Xp1, Yp1] of the upper left corner feature point on the target plane. From the positional relationships of the other specified feature points to the upper left corner feature point, the projection position data of the lower left corner [Xp2, Yp2], the upper right corner [Xp3, Yp3], the lower right corner [Xp4, Yp4], and the center point [Xpc, Ypc] on the target plane are then determined.
In the above embodiment, substituting the first distance into the distance transformation function yields the projection position data of the detector on the target plane, and performing a geometric transformation on that data yields the second position data; this position correction makes the second position data corresponding to the detector's projection onto the target plane more accurate.
In some embodiments, referring to FIG. 4, the distance transformation function is determined by:
S410, acquiring a plurality of scene sample images.
S420, determining third position data of the specified feature points in the scene sample images.
S430, performing fitting processing based on the third position data and the source image distance corresponding to each scene sample image to obtain the distance transformation function.
Each scene sample image is obtained by performing perspective transformation on a captured scene image, and may be an exposure scene image without a target exposure object. The source image distance may be the distance between the radiation source and the detector, or between the radiation source and the body surface of the target exposure object.
Specifically, it is first necessary to acquire, in a given scene, multiple images at different source image distances; these should cover all specified feature points of interest and include the corresponding distance information. Perspective transformation is then applied to these images to obtain the scene sample images. The specified feature points may be detected automatically using computer vision feature detectors (e.g., SIFT or SURF) or labeled manually, and their third position data is determined from their positions in the scene sample images. A distance transformation model is then built using machine learning or deep learning; regression models, neural networks, and the like may be used. The collected sets of third position data are divided into a training set and a test set: the training set is used for training and parameter optimization, and the test set for evaluating the model's performance and generalization ability. The model is trained on the training set, and its parameters are optimized by fitting the source image distance of each scene sample image against the corresponding third position data so that positions can be accurately predicted, yielding the distance transformation function.
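A minimal sketch of the fitting step follows, using a per-axis polynomial model for one feature point; the degree and sample values are assumptions, and the embodiment equally allows exponential or learned models.

```python
# Sketch of fitting a distance transformation function F_N(distance) for
# one specified feature point from (distance, position) training samples.
import numpy as np

def fit_distance_transform(distances: np.ndarray, xs: np.ndarray,
                           ys: np.ndarray, degree: int = 2):
    fx = np.polynomial.Polynomial.fit(distances, xs, degree)
    fy = np.polynomial.Polynomial.fit(distances, ys, degree)
    return lambda d: np.array([fx(d), fy(d)])

# Hypothetical samples for the upper left corner feature point (number 1):
d  = np.array([1000.0, 1200.0, 1500.0, 1800.0])  # source image distances
x1 = np.array([131.0, 126.0, 121.0, 118.0])      # third position data (x)
y1 = np.array([97.0, 94.0, 91.0, 89.0])          # third position data (y)
F1 = fit_distance_transform(d, x1, y1)
xp1, yp1 = F1(1500.0)  # projection position data at SOD = 1500
```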
In some embodiments, the specified feature points may be the four corners and the center point of the detector in the scene sample image. The distance transformation function F1(distance) can be determined by fitting the coordinates of the upper left corner feature point (sequence number 1) at different source image distances against those distances. Once the upper left corner's function is determined, the transformation function F2(distance) for the lower left corner feature point (sequence number 2) may be derived from its positional relationship to the upper left corner; F3(distance) for the upper right corner (sequence number 3), F4(distance) for the lower right corner (sequence number 4), and F5(distance) for the center point (sequence number 5) are derived likewise. Here N denotes the sequence number of the specified feature point and distance denotes the source image distance. The fitting may use a polynomial model or an exponential model.
For example, the coordinates (x11, y11) of the upper left corner feature point (sequence number 1) may be determined at source image distance d1, and its coordinates (x12, y12) at source image distance d2. Fitting these coordinates against the corresponding source image distances determines F1(distance); F2(distance) through F5(distance) are then determined from the positional relationships between the respective feature points (sequence numbers 2 to 5) and the upper left corner feature point.
In other embodiments, each specified feature point may be fitted independently: FN(distance) is determined by fitting the coordinates of the feature point with sequence number N (N = 1 to 5: upper left, lower left, upper right, lower right, center) at different source image distances against those distances.
For example, the coordinates (xN1, yN1) of the feature point with sequence number N may be determined at source image distance d1, and (xN2, yN2) at source image distance d2. Fitting each feature point's coordinates against the corresponding source image distances determines F1(distance) through F5(distance).
If the point to be coordinate-transformed is not one of the four corner feature points or the center point, then, taking as an example an arbitrary point P located between the upper-left corner feature point and the center point, the coordinate transformation can be performed by the following formulas, which determine the transformed coordinates corresponding to point P:
Xs' = (Xs - Xc) / (X1 - Xc) * (X1' - Xc') + Xc'
Ys' = (Ys - Yc) / (Y1 - Yc) * (Y1' - Yc') + Yc'
where Xs and Ys are the abscissa and ordinate of point P in the camera coordinate system at the SID distance, and Xs' and Ys' are the abscissa and ordinate of point P in the camera coordinate system at the SOD distance. X1 and Y1 are the abscissa and ordinate of the upper-left corner feature point in the camera coordinate system at the SID distance, and X1' and Y1' are its abscissa and ordinate at the SOD distance. Xc and Yc are the abscissa and ordinate of the center point in the camera coordinate system at the SID distance, and Xc' and Yc' are its abscissa and ordinate at the SOD distance.
Corresponding formulas for transforming coordinates in the other regions, between the other corner feature points and the center point, can be determined in the same way, as sketched below.
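A direct transcription of the above formulas into code might look as follows; this is a minimal sketch, and the helper name and the example coordinates are hypothetical.

```python
def transform_point(xs, ys, corner, corner_t, center, center_t):
    """Map point P = (xs, ys), lying between a corner feature point and the
    center point, from SID-distance coordinates to SOD-distance coordinates
    by linear interpolation along each axis.

    corner / center     : (X1, Y1), (Xc, Yc)   at the SID distance
    corner_t / center_t : (X1', Y1'), (Xc', Yc') at the SOD distance
    """
    x1, y1 = corner
    x1t, y1t = corner_t
    xc, yc = center
    xct, yct = center_t
    xs_t = (xs - xc) / (x1 - xc) * (x1t - xct) + xct
    ys_t = (ys - yc) / (y1 - yc) * (y1t - yct) + yct
    return xs_t, ys_t

# Example with hypothetical coordinates: the upper-left corner and center
# point at the SID distance, and their images at the SOD distance.
print(transform_point(100.0, 80.0,
                      corner=(50.0, 40.0), corner_t=(60.0, 48.0),
                      center=(320.0, 240.0), center_t=(320.0, 240.0)))
```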
In the above embodiment, a plurality of scene sample images are acquired, third position data of the designated feature points in the scene sample images are determined, and fitting is performed on the third position data and the source image distances corresponding to the scene sample images to obtain the distance transformation functions, providing a data basis for the subsequent determination of the projection identifier of the detector imaging range.
In some embodiments, there is a first distance between the body surface of the target exposure object and the radiation source, and a second distance between the radiation source and the detector. Determining the exposure feedback area corresponding to the region of interest on the detector based on the first position data and the second position data may include: determining the exposure feedback area corresponding to the region of interest on the detector according to the first distance, the second distance, the first position data and the second position data.
Specifically, a geometric correction model may be established based on the geometric relationship between the first position data and the second position data. According to the geometric correction model, geometric correction (such as cone beam effect correction) is then performed on the first position data using the second position data, and the corrected data of the first position data on the exposure scene image are determined. The corrected data can then be coordinate-transformed, using the proportional relationship between the first distance and the second distance, by triangulation or other geometric calculation methods, to determine the corresponding area of the corrected data on the detector, namely the exposure feedback area corresponding to the region of interest on the detector.
In some embodiments, where the image acquisition device can measure the distance between itself and the body surface of the target exposure object, the first distance between the radiation source and the body surface of the target exposure object and the second distance between the radiation source and the detector may be obtained from that distance information.
In other embodiments, a first distance between the source and the body surface of the target exposure object and a second distance between the source and the detector may be acquired by a ranging sensor.
In the above embodiment, the exposure feedback area corresponding to the region of interest on the detector is determined according to the first distance, the second distance, the first position data and the second position data, providing data support for the subsequent determination of the region from which the exposure feedback signal is read.
In some embodiments, referring to fig. 5, determining the exposure feedback area corresponding to the region of interest on the detector according to the first distance, the second distance, the first position data, and the second position data may include the steps of:
s510, cone beam effect correction is carried out on the first position data according to the second position data, and corrected position data of the region of interest are determined.
S520, performing coordinate transformation on the corrected position data according to the first distance and the second distance to obtain an exposure feedback area corresponding to the region of interest on the detector.
Here, the cone beam effect may refer to the positional deviation caused by the physical characteristics of the X-ray beam as it propagates in X-ray imaging. This phenomenon can lead to offset and distortion of the target position in the image, affecting the accuracy and precision of imaging.
Specifically, after the first position data and the second position data are determined, a correction formula for cone beam effect correction can be determined by mathematical modeling and correction calculation, using the geometric propagation relationship of the X-ray beam in combination with the first distance and the second distance. Cone beam effect correction is performed on the first position data according to the second position data: the first position data are adjusted through the correction formula or correction algorithm to reduce the positional deviation introduced as the X-ray beam propagates, yielding the corrected position data of the region of interest. A suitable coordinate transformation method is then chosen according to the specific medical imaging device and coordinate system, and the parameters required for the coordinate transformation are calculated from the first distance and the second distance; this may require consideration of the geometric relationship between the imaging plane, the object and the radiation source, as well as the characteristic parameters of the detector. Finally, the corrected position data are coordinate-transformed using the calculated parameters to obtain the coordinates of the exposure feedback area corresponding to the region of interest on the detector, from which the exposure feedback area can be determined.
For example, the coordinates (Xpc, Ypc) of the detector center point in the camera coordinate system at the SOD distance can be obtained. The first position data are substituted into the following formulas to carry out cone beam effect correction and determine the corrected position data of the region of interest:
Xpj = SID/SOD * (Xj - Xpc) + Xpc
Ypj = SID/SOD * (Yj - Ypc) + Ypc
where SID is the second distance between the radiation source and the detector, and SOD is the first distance between the body surface of the target exposure object and the radiation source. Xpc and Ypc are the abscissa and ordinate of the detector center point in the camera coordinate system at the SOD distance. Xj and Yj are the abscissa and ordinate, in the camera coordinate system, of a vertex of the region of interest at the SOD distance with the cone beam taken into account, and Xpj and Ypj are the abscissa and ordinate of that vertex in the camera coordinate system at the SOD distance after correction. Here j is the coordinate point index, a positive integer from 1 to 4 denoting, in order, the upper-left, lower-left, upper-right and lower-right corners.
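Applied to the four vertices of the region of interest, the correction can be sketched as follows. This is a minimal example assuming the vertex and center coordinates are already available in the camera coordinate system; all numeric values and the function name are hypothetical.

```python
def cone_beam_correct(vertices, center, sid, sod):
    """Scale each region-of-interest vertex about the detector center point
    by SID/SOD, implementing
        Xpj = SID/SOD * (Xj - Xpc) + Xpc   (and likewise for Y).

    vertices: list of (Xj, Yj) for j = 1..4 (UL, LL, UR, LR)
    center:   (Xpc, Ypc), detector center point at the SOD distance
    """
    xpc, ypc = center
    scale = sid / sod
    return [(scale * (xj - xpc) + xpc, scale * (yj - ypc) + ypc)
            for xj, yj in vertices]

# Hypothetical values: SID = 1800 mm, SOD = 1600 mm.
roi = [(200.0, 150.0), (200.0, 400.0), (500.0, 150.0), (500.0, 400.0)]
print(cone_beam_correct(roi, center=(350.0, 275.0), sid=1800.0, sod=1600.0))
```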
The corrected position data of the region of interest in the camera coordinate system at the SOD distance may then be converted to position data of the region of interest in the camera coordinate system at the SID distance. Finally, according to the relationship between the camera coordinate system and the detector coordinate system, the position data of the region of interest at the SID distance in the camera coordinate system are coordinate-transformed to obtain the exposure feedback area corresponding to the region of interest on the detector.
In the above embodiment, cone beam effect correction is performed on the first position data according to the second position data and the corrected position data of the region of interest are determined, which can improve imaging accuracy and optimize imaging quality. The corrected position data are then coordinate-transformed according to the first distance and the second distance to obtain the exposure feedback area corresponding to the region of interest on the detector, providing data support for the subsequent determination of the region from which the exposure feedback signal is read.
In some embodiments, referring to fig. 6a, acquiring the first position data of the region of interest may comprise the steps of:
s610, shooting the current exposure scene to obtain an initial scene image.
The current exposure scene may be understood as the scene, containing human tissue information, at which the camera is aimed. The initial scene image is the image obtained by photographing the current exposure scene, that is, the scene captured at a specific point in time without any processing or adjustment. Typically, the initial scene image serves as the basis for subsequent processing and analysis, such as view-angle transformation, calibration, or identification of the region of interest.
Specifically, since the image acquisition device must be installed so as to avoid the radiation field, it may be mounted on one side of the beam limiter, or at another position from which the target exposure object and the detector can be photographed (such as the ceiling). The image acquisition device is aimed at the scene to be photographed so that the target exposure object and the detector, that is, the current exposure scene, are fully visible in the viewfinder. The current exposure scene is then photographed by the image acquisition device to obtain the initial scene image.
In some implementations, the image capture device may be a depth camera that supports acquisition of depth information. The depth camera shoots the current exposure scene to obtain an initial scene image, and meanwhile, the distance information between the depth camera and the body surface of the target exposure object can be obtained, so that the first distance between the ray source and the body surface of the target exposure object is obtained.
In other embodiments, the image capture device may be a conventional two-dimensional camera. And shooting the current exposure scene through a two-dimensional camera to obtain an initial scene image. And a first distance between the source and the body surface of the target exposure object may be obtained by infrared ranging or other means.
The image capturing device may be at least one of a camera, a monitoring camera, an industrial camera, and a fisheye camera.
S620, performing view transformation on the initial scene image through a transformation matrix to obtain a corrected scene image which is used as an exposure scene image.
The transformation matrix may be a mathematical tool for performing a perspective transformation on the image. Perspective transformation may refer to the process of transforming an image from one viewing angle or coordinate system to another. The corrected scene image may be the image obtained after the view-angle transformation, allowing the scene to be observed from a different angle or in a different way.
In some cases, because the image acquisition device must be installed so as to avoid the radiation field, the initial scene image it captures may be taken at an oblique angle. Owing to this inclination, the position at which the exposure feedback identifier is displayed on the body surface differs for target exposure objects of different body thicknesses, which easily introduces deviation when the position of the region of interest is set. It is therefore necessary to apply a view-angle transformation to the initial scene image to correct the deviation caused by the oblique viewing angle.
Specifically, the first distance between the body surface of the target exposure object and the radiation source is determined. For an initial sample scene image taken at an oblique viewing angle with the first distance determined, the required view-angle transformation operation (for example, a translation) is calculated and the corresponding transformation matrix is determined; this matrix describes the view-angle transformation operation required for the initial sample scene image. The calculated transformation matrix is applied to the initial scene image, and the corrected scene image is obtained through matrix multiplication or other image processing techniques. The corrected scene image can be used as the exposure scene image, so as to reflect the actual exposure scene more accurately and provide a more reliable data basis for subsequent exposure scene image processing.
For example, the image acquisition device may be mounted on one side of the beam limiter. Referring to FIG. 6b, FIG. 6b may be an image captured by the image acquisition device when the second distance between the radiation source and the detector is one meter. When no object is placed on the detector surface, the detector center line in FIG. 6b coincides with the illumination cross line on the detector surface.
When an object is placed on the detector surface, referring to FIG. 6c, the center 602 of the illumination cross line on the detector surface deviates significantly because of the oblique viewing angle; the deviation changes with the thickness of the placed object and with the distance between the radiation source and the detector, and the smaller the second distance between the radiation source and the detector, the more pronounced the offset.
In some embodiments, the initial scene image may be view-transformed using a relatively common non-rigid transformation algorithm such as polynomial registration. Corresponding feature points are marked in the image captured by the image acquisition device at the oblique viewing angle and in the image at the central viewing angle, forming a number of matched feature point pairs, and the transformation matrix is calculated using an image registration algorithm. The feature points may be the corner points of a checkerboard fixed on the detector surface, or feature points of the detector itself may be selected. The first distance between the body surface of the target exposure object and the radiation source is then changed and the matching-point acquisition repeated, so that multiple groups of matched feature point pairs are determined.
Each pixel in the initial scene image is then transformed and adjusted through the transformation matrix to obtain the corrected scene image. The view-angle transformation can be expressed by the following formula:
I′(x,y)=TF(I(x,y),SOD)
where I(x, y) is the initial scene image, TF denotes a distance-dependent transformation model obtained by calibration on offline-acquired data, I'(x, y) is the corrected scene image, and SOD is the first distance between the body surface of the target exposure object and the radiation source.
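As one possible realization of TF, the sketch below estimates a projective transformation from matched feature point pairs with OpenCV and warps the initial scene image. The specification describes polynomial (non-rigid) registration, so the homography here is a simplified stand-in calibrated for one SOD value; the point coordinates and file names are hypothetical.

```python
import cv2
import numpy as np

# Matched feature point pairs: checkerboard corners seen at the oblique
# viewing angle (src) and at the central viewing angle (dst). Hypothetical.
src_pts = np.float32([[102, 88], [530, 95], [96, 410], [540, 402]])
dst_pts = np.float32([[80, 80], [560, 80], [80, 420], [560, 420]])

# Transformation matrix for this SOD; in practice one matrix (or model)
# would be calibrated per first-distance value, approximating TF(., SOD).
H, _ = cv2.findHomography(src_pts, dst_pts)

initial = cv2.imread("initial_scene.png")            # I(x, y)
corrected = cv2.warpPerspective(initial, H,          # I'(x, y)
                                (initial.shape[1], initial.shape[0]))
cv2.imwrite("corrected_scene.png", corrected)
```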
In other embodiments, a deep learning approach may be used to perform perspective transformation on the initial scene image.
S630, segmenting the region of interest based on the exposure scene image to obtain first position data.
Specifically, a suitable image segmentation algorithm is selected, for example a pixel-based method (such as threshold segmentation, region growing, or edge detection) or a region-based method (such as graph-theory-based or clustering-based segmentation), and the region of interest is segmented from the exposure scene image, yielding the detection frame corresponding to the region of interest. The first position data corresponding to the region of interest are then obtained from this detection frame.
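As an illustration of the pixel-based route, the sketch below applies Otsu thresholding and takes the bounding box of the largest contour as the detection frame; the file name and parameter choices are hypothetical.

```python
import cv2

scene = cv2.imread("exposure_scene.png", cv2.IMREAD_GRAYSCALE)

# Pixel-based segmentation: an Otsu threshold separates the region of
# interest from the background.
_, mask = cv2.threshold(scene, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# The bounding box of the largest contour serves as the detection frame;
# its four corners are taken as the first position data.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
first_position_data = [(x, y), (x, y + h), (x + w, y), (x + w, y + h)]
print(first_position_data)  # UL, LL, UR, LR corners
```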
In some embodiments, the region of interest may be segmented from the exposure scene image by a region-of-interest identification segmentation model, directly yielding the first position data corresponding to the region of interest. The model may be constructed as follows. Positioning images obtained through view-angle transformation are taken as training sample images. Taking into account the differences in the distribution density of human tissue, the regions of interest in the training sample images are annotated according to the tissue characteristics of interest for the examined part, and the annotated detection frames are used as labels. A training sample image is input into an initial region-of-interest identification segmentation model to segment the human body parts, for example dividing the training sample image into parts such as head, chest, abdomen, hands, feet and limbs, to obtain a segmentation result; the region of interest in the training sample image can then be determined on the basis of the segmentation result in combination with APR projection experience, giving the sample detection frame corresponding to the region of interest and the sample position data corresponding to that frame. For example, based on the segmentation result and APR projection experience, the training sample image may be determined to be a chest radiography position, with the bilateral lung fields in the segmentation result as the regions of interest. A loss value of the initial region-of-interest identification segmentation model is then determined from the region-of-interest sample detection frame and the label, and the model parameters are updated based on that loss value. The updated initial model is trained iteratively in the same manner, and when the training stop condition is reached the target region-of-interest identification segmentation model is obtained; the stop condition may be that the model loss value converges, or that a preset number of training rounds is reached. It should be noted that APR projection experience may be a set of program settings for the X-ray examination of a designated region, generally including factors such as exposure conditions, field size and image processing parameters. A minimal training-step sketch follows.
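The training procedure can be read as a standard supervised loop. The following PyTorch-style sketch assumes a model that outputs four detection-frame coordinates; the model architecture, loss choice and all names are hypothetical stand-ins, not the model described in this specification.

```python
import torch
from torch import nn

# Hypothetical stand-in for the region-of-interest identification
# segmentation model: maps an image to four detection-frame coordinates.
model = nn.Sequential(nn.Flatten(), nn.Linear(128 * 128, 64),
                      nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.SmoothL1Loss()  # loss between sample frame and label frame

def train_step(image: torch.Tensor, label_box: torch.Tensor) -> float:
    """One update: predict a sample detection frame, compare it with the
    labeled frame, and update the model parameters from the loss."""
    optimizer.zero_grad()
    pred_box = model(image)
    loss = loss_fn(pred_box, label_box)
    loss.backward()
    optimizer.step()
    return loss.item()

# Training stops when the loss converges or a preset number of rounds
# is reached, as described above. Dummy data for illustration only.
dummy_image = torch.rand(1, 1, 128, 128)
dummy_label = torch.tensor([[10.0, 20.0, 90.0, 110.0]])
for _ in range(3):
    print(train_step(dummy_image, dummy_label))
```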
In other embodiments, the segmentation result may be obtained by segmenting the human body parts of the training sample image with a region-of-interest segmentation model, for example dividing the image into parts such as head, chest, abdomen, hands, feet and limbs. Then, on the basis of the segmentation result and in combination with APR projection experience, the region of interest in the exposure scene image is determined from the segmentation result through human-computer interaction, and the corresponding first position data are obtained.
In the above embodiment, the current exposure scene is photographed to obtain the initial scene image, and the initial scene image undergoes a view-angle transformation through the transformation matrix to obtain the corrected scene image, which is used as the exposure scene image; the view-angle transformation reduces visual errors caused by the shooting angle, body thickness and the like, making the selection of the feedback signal more accurate. The region of interest is then segmented based on the exposure scene image to obtain the first position data, providing a data basis for the subsequent determination of the exposure feedback area.
The embodiments of this specification also provide an exposure feedback area display method, in which there is a first distance between the body surface of the target exposure object and the radiation source, the detector is preset with designated feature points, each designated feature point corresponding to a distance transformation function, and there is a second distance between the radiation source and the detector. Referring to FIG. 7, the exposure feedback area display method may include the following steps:
S702, shooting a current exposure scene to obtain an initial scene image.
S704, performing view transformation on the initial scene image through a transformation matrix to obtain a corrected scene image serving as an exposure scene image.
S706, segmenting the region of interest based on the exposure scene image to obtain first position data.
S708, substituting the first distance into the distance transformation function to obtain projection position data of the detector on the target plane.
Specifically, the distance transformation function may be determined by: acquiring a plurality of scene sample images; the scene sample image is obtained by performing view angle conversion on a scene image obtained by shooting; determining third position data of the specified feature point in the scene sample image; and fitting processing is carried out based on the third position data and the source image SID distance corresponding to the scene sample image, so as to obtain a distance transformation function.
S710, performing geometric transformation based on the projection position data to obtain second position data.
S712, cone beam effect correction is carried out on the first position data according to the second position data, and corrected position data of the region of interest are determined.
And S714, performing coordinate transformation on the corrected position data according to the first distance and the second distance to obtain an exposure feedback area corresponding to the region of interest on the detector.
S716, displaying the exposure scene image.
The exposure scene image contains the target exposure object; the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, and the feedback signal in the exposure feedback area is used to determine whether to stop exposure.
S718, displaying an exposure feedback identifier in the exposure scene image.
The exposure feedback identifier is used to indicate the projection of the exposure feedback area on a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
S720, displaying at least one of a detector imaging range projection identifier, a region-of-interest identifier and an irradiation field identifier in the exposure scene image.
The detector imaging range projection identifier is used to indicate the projection of the detector on the target plane; the region-of-interest identifier is used to indicate the distribution of the region of interest; and the irradiation field identifier is used to indicate the irradiation field range determined by the beam limiter.
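Steps S708 through S714 can be read as one pipeline. The following condensed sketch strings the earlier formulas together; the final mapping into the detector frame (a simple normalization against the projected detector corners) is an assumption for illustration, and all names and values are hypothetical.

```python
def exposure_feedback_area(roi_vertices, distance_fns, sod, sid):
    """Condensed sketch of S708 to S714 (hypothetical realization)."""
    # S708: substitute the first distance (SOD) into F1..F5 to get the
    # projected detector corners and center (second position data).
    projected = [f(sod) for f in distance_fns]  # UL, LL, UR, LR, center
    center = projected[4]
    # S712: cone beam effect correction about the detector center point.
    scale = sid / sod
    corrected = [(scale * (x - center[0]) + center[0],
                  scale * (y - center[1]) + center[1])
                 for x, y in roi_vertices]
    # S714: map corrected camera coordinates into the detector frame
    # (assumed here to be a normalization against the projected corners).
    (x_ul, y_ul), (x_lr, y_lr) = projected[0], projected[3]
    return [((x - x_ul) / (x_lr - x_ul), (y - y_ul) / (y_lr - y_ul))
            for x, y in corrected]

# Hypothetical inputs: constant distance functions and a region-of-interest
# detection frame in camera coordinates.
fns = [lambda d, p=p: p for p in
       [(80.0, 60.0), (80.0, 420.0), (560.0, 60.0), (560.0, 420.0),
        (320.0, 240.0)]]
roi = [(200.0, 150.0), (200.0, 380.0), (480.0, 150.0), (480.0, 380.0)]
print(exposure_feedback_area(roi, fns, sod=1600.0, sid=1800.0))
```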
Referring to FIG. 8, the exposure feedback area display device 800 according to the embodiments of this specification includes a scene image display module 810 and a feedback identifier display module 820.
The scene image display module 810 is used for displaying the exposure scene image; the exposure scene image contains the target exposure object, the region of interest of the target exposure object corresponds to an exposure feedback area on the detector, and the feedback signal in the exposure feedback area is used to determine whether to stop exposure.
The feedback identifier display module 820 is used for displaying an exposure feedback identifier in the exposure scene image; the exposure feedback identifier is used to indicate the projection of the exposure feedback area on a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
For a specific description of the exposure feedback area display device, reference may be made to the above description of the exposure feedback area display method, and the description is not repeated here.
In some embodiments, a medical imaging device is provided comprising a memory having a computer program stored therein and a processor, which when executing the computer program, implements the method steps of the above embodiments.
The present description embodiment provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor, implements the steps of the method of any of the above embodiments.
An embodiment of the present specification provides a computer program product comprising instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method of any one of the embodiments described above.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). Additionally, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
Claims (11)
1. An exposure feedback area display method, characterized in that the method comprises:
displaying an exposure scene image; wherein the exposure scene image contains a target exposure object, a region of interest of the target exposure object corresponds to an exposure feedback area on a detector, and a feedback signal in the exposure feedback area is used to determine whether to stop exposure;
displaying an exposure feedback identifier in the exposure scene image; wherein the exposure feedback identifier is used to indicate the projection of the exposure feedback area on a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
2. The method according to claim 1, wherein the method further comprises:
displaying at least one of a detector imaging range projection identifier, a region-of-interest identifier and an irradiation field identifier in the exposure scene image; wherein the detector imaging range projection identifier is used to indicate the projection of the detector on the target plane, the region-of-interest identifier is used to indicate the distribution of the region of interest, and the irradiation field identifier is used to indicate the irradiation field range determined by the beam limiter.
3. The method of claim 1, wherein the exposure feedback area is determined by:
acquiring first position data corresponding to the region of interest, determining second position data corresponding to the projection of the detector on the target plane, and determining the exposure feedback area corresponding to the region of interest on the detector according to the first position data and the second position data; or
in response to a selection operation on a region of interest, determining the exposure feedback area corresponding to the region of interest on the detector.
4. The method according to claim 3, wherein there is a first distance between the body surface of the target exposure object and the radiation source, the detector is preset with designated feature points, and each designated feature point corresponds to a distance transformation function; the determining second position data corresponding to the projection of the detector on the target plane comprises:
substituting the first distance into the distance transformation function to obtain projection position data of the detector on the target plane;
and performing geometric transformation based on the projection position data to obtain the second position data.
5. The method of claim 4, wherein the distance transform function is determined by:
Acquiring a plurality of scene sample images; the scene sample image is obtained by performing view angle transformation on a scene image obtained by shooting;
determining third position data of the specified feature point in the scene sample image;
and fitting processing is carried out based on the third position data and the source image distance corresponding to the scene sample image, so as to obtain the distance transformation function.
6. The method according to claim 3, wherein there is a first distance between the body surface of the target exposure object and the radiation source, and a second distance between the radiation source and the detector; the determining, according to the first position data and the second position data, the exposure feedback area corresponding to the region of interest on the detector comprises:
and determining an exposure feedback area corresponding to the region of interest on the detector according to the first distance, the second distance, the first position data and the second position data.
7. The method of claim 6, wherein the determining the corresponding exposure feedback area of the region of interest on the detector based on the first distance, the second distance, the first position data, and the second position data comprises:
Performing cone beam effect correction on the first position data according to the second position data, and determining corrected position data of the region of interest;
and carrying out coordinate transformation on the corrected position data according to the first distance and the second distance to obtain an exposure feedback area corresponding to the region of interest on the detector.
8. The method according to claim 3, wherein the acquiring first position data corresponding to the region of interest comprises:
shooting a current exposure scene to obtain an initial scene image;
performing view transformation on the initial scene image through a transformation matrix to obtain a corrected scene image serving as the exposure scene image;
and dividing the region of interest based on the exposure scene image to obtain the first position data.
9. An exposure feedback area display device, the device comprising:
a scene image display module, used for displaying an exposure scene image; wherein the exposure scene image contains a target exposure object, a region of interest of the target exposure object corresponds to an exposure feedback area on a detector, and a feedback signal in the exposure feedback area is used to determine whether to stop exposure;
a feedback identifier display module, used for displaying an exposure feedback identifier in the exposure scene image; wherein the exposure feedback identifier is used to indicate the projection of the exposure feedback area on a target plane, the target plane being the plane in which the body surface of the target exposure object lies.
10. A medical imaging device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 8 when the computer program is executed.
11. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311694778.2A CN117653156A (en) | 2023-12-11 | 2023-12-11 | Exposure feedback area display method and device, medical imaging equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311694778.2A CN117653156A (en) | 2023-12-11 | 2023-12-11 | Exposure feedback area display method and device, medical imaging equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117653156A true CN117653156A (en) | 2024-03-08 |
Family
ID=90078566
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311694778.2A Pending CN117653156A (en) | 2023-12-11 | 2023-12-11 | Exposure feedback area display method and device, medical imaging equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117653156A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||