CN111815514A - Image acquisition method and device, readable storage medium and image acquisition equipment - Google Patents


Info

Publication number: CN111815514A
Application number: CN202010555725.2A
Authority: CN (China)
Prior art keywords: image, image acquisition, human body, coordinate system, determining
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Inventors: 李海春, 李天华
Current and original assignee: Neusoft Medical Systems Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by Neusoft Medical Systems Co Ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme for image data processing or generation, in general, involving image mosaicing

Abstract

The disclosure relates to an image acquisition method, an image acquisition apparatus, a readable storage medium, and image acquisition equipment. The method comprises the following steps: acquiring stitching range information input by a user on a human body image; determining each image acquisition position of the image acquisition equipment in a world coordinate system according to the stitching range information; controlling the image acquisition equipment to acquire images of the human body part at each image acquisition position; and stitching the images of the human body part acquired at each image acquisition position. Each image acquisition position can be determined solely from the stitching range information input by the user on the human body image, which simplifies the preparation process for acquiring images, shortens the preparation time, and improves image acquisition efficiency. Moreover, because the user does not need to move the image acquisition equipment, image acquisition becomes more intelligent.

Description

Image acquisition method and device, readable storage medium and image acquisition equipment
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image acquisition method, an image acquisition apparatus, a readable storage medium, and image acquisition equipment.
Background
Because a large-field panoramic medical image helps doctors evaluate a lesion and its surrounding tissue more comprehensively and intuitively, acquiring panoramic medical images is widely used in medical imaging research, for example, evaluating spinal lesions from a magnetic resonance image. However, the acquisition range of the detector is limited (the detection range of a flat panel detector is usually 43 cm x 43 cm), so obtaining one frame of panoramic spine image may require acquiring two or three local images and stitching them together. Stitching multiple frames of medical images into the panoramic medical image required by the doctor is therefore a necessary step in medical imaging research.
However, the method for acquiring images provided in the related art has the disadvantages of long time consumption and low efficiency.
Disclosure of Invention
The disclosure aims to provide an image acquisition method, an image acquisition apparatus, a readable storage medium, and image acquisition equipment, so as to improve image acquisition efficiency.
In order to achieve the above object, the present disclosure provides an image acquisition method, including: acquiring splicing range information input by a user on a human body image, wherein the human body image comprises a human body part to be subjected to image acquisition, and the splicing range information is used for indicating the range of the human body part to be subjected to image acquisition on the human body image; determining each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information; controlling the image acquisition equipment to respectively acquire images of the human body part at each image acquisition position; and splicing the images of the human body part acquired at each image acquisition position.
Optionally, the image acquisition equipment comprises an image capturing device and a detector, and determining each image acquisition position of the image acquisition equipment in the world coordinate system according to the stitching range information includes: determining an image acquisition start position and an image acquisition end position according to the distance between the image capturing device and the detector, the current center position of the detector, the transformation relationship between the image coordinate system and the world coordinate system, and the stitching range information; and determining each image acquisition position of the image acquisition equipment in the world coordinate system according to a first preset overlapping area of two adjacent frames of images in the world coordinate system, the image acquisition start position, the image acquisition end position, and the detection area of the detector.
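A minimal sketch of how those acquisition positions could be enumerated from the start position, end position, overlap, and detector size. This is a hypothetical helper, not the patent's implementation; it assumes the start and end positions bound the stitching range along the movement axis and that all lengths are in millimeters:

```python
import math

def plan_capture_positions(y_start, y_end, detector_len, overlap):
    # Center coordinates (along the movement axis, world coordinate system)
    # at which frames of length detector_len cover [y_start, y_end] while
    # adjacent frames share `overlap` mm.
    step = detector_len - overlap
    span = y_end - y_start
    n = 1 if span <= detector_len else math.ceil((span - detector_len) / step) + 1
    first_center = y_start + detector_len / 2
    return [first_center + i * step for i in range(n)]
```

For a 1000 mm stitching range with a 430 mm detector and 50 mm overlap, this yields three acquisition centers, 380 mm apart.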
Optionally, the center of the shooting visual field of the image acquisition device is consistent with the center of the shooting visual field of the detector, and the transformation relation between the image coordinate system and the world coordinate system is determined by the following steps: controlling the image acquisition device to acquire a first image under the condition that the distance between the image acquisition device and the detector is a first distance, and controlling the image acquisition device to acquire a second image under the condition that the distance between the image acquisition device and the detector is a second distance, wherein the first distance and the second distance are different; determining a first projection area of a detection area of the detector in the first image and a first number of pixels comprised by the first projection area; determining a second projection area of the detection area of the detector in the second image and a second number of pixels comprised by the second projection area; determining a transformation parameter between an image coordinate system and a world coordinate system according to the detection area of the detector, the first pixel quantity, the second pixel quantity, the first distance and the second distance; and determining the transformation relation between the image coordinate system and the world coordinate system according to the transformation parameters between the image coordinate system and the world coordinate system.
Optionally, the image acquisition equipment comprises a detector, and the stitching range information comprises a stitching start position and a stitching end position; determining each image acquisition position of the image acquisition equipment in the world coordinate system according to the stitching range information then includes: determining a third projection area of the detection area of the detector in the human body image and a third pixel number included in the third projection area; determining each image acquisition position of the image acquisition equipment in the image coordinate system according to a second preset overlapping area of two adjacent frames of images in the image coordinate system, the stitching start position, the stitching end position, and the third pixel number; and determining each image acquisition position of the image acquisition equipment in the world coordinate system according to each image acquisition position of the image acquisition equipment in the image coordinate system.
Optionally, the method further comprises: determining image acquisition parameters corresponding to each image acquisition position. Controlling the image acquisition equipment to acquire images of the human body part at each image acquisition position then includes: for each image acquisition position, controlling the image acquisition equipment to move to that position and acquire images of the human body part according to the image acquisition parameters corresponding to that position.
Optionally, the determining the image acquisition parameters corresponding to each of the image acquisition positions includes: determining the projection area, in the human body image, of the detection area of the image acquisition equipment at each image acquisition position; identifying the human body part included in each projection area; determining the anatomically programmed radiography (APR) photographing part corresponding to each projection area according to the human body part included in that projection area; and determining the image acquisition parameters corresponding to each image acquisition position according to the correspondence between APR photographing parts and image acquisition parameters.
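The correspondence between APR photographing parts and acquisition parameters can be held in a simple lookup table. The part names and the kVp/mAs values below are illustrative assumptions, not values from the patent:

```python
# Illustrative APR table: photographing part -> acquisition parameters.
APR_PARAMS = {
    "cervical spine": {"kvp": 70, "mas": 20},
    "thoracic spine": {"kvp": 75, "mas": 32},
    "lumbar spine": {"kvp": 80, "mas": 40},
}

def acquisition_params(identified_parts):
    # One parameter set per image acquisition position, in order of the
    # body parts identified in the projection areas.
    return [APR_PARAMS[part] for part in identified_parts]
```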
Optionally, the identifying the human body part included in each of the projection regions includes: and identifying the human body part included in each projection area through a human body part identification model.
Optionally, the method further comprises: determining an image processing parameter corresponding to each image acquisition position, wherein the image processing parameter includes at least one of the following: a magnification parameter, a noise reduction parameter, a contrast parameter, and an image enhancement parameter; and, for each image acquisition position, processing the images acquired at that position according to its corresponding image processing parameters. The stitching of the images of the human body part acquired at each of the image acquisition positions then includes: stitching the processed images.
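As a minimal sketch of applying a per-position processing parameter before stitching, the toy function below scales 8-bit pixel values by a contrast parameter; the parameter name and the pixel-list representation are assumptions for illustration only:

```python
def apply_processing(frame, params):
    # Scale pixel values by the position's contrast parameter and clamp to
    # the 8-bit range; noise reduction, magnification, and enhancement
    # parameters would be applied analogously per position.
    gain = params.get("contrast", 1.0)
    return [min(255, max(0, round(p * gain))) for p in frame]
```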
The second aspect of the present disclosure further provides an image capturing apparatus, including: the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is configured to acquire splicing range information input by a user on a human body image, the human body image comprises a human body part to be subjected to image acquisition, and the splicing range information is used for indicating the range of the human body part to be subjected to image acquisition on the human body image; the first determining module is configured to determine each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information; the first control module is configured to control the image acquisition equipment to acquire images of the human body part at each image acquisition position respectively; a stitching module configured to stitch the images of the human body part acquired at each of the image acquisition positions.
The third aspect of the present disclosure also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the first aspect of the present disclosure.
The fourth aspect of the present disclosure also provides an image capturing apparatus, including: a memory having a computer program stored thereon; a processor for executing the computer program in the memory to implement the steps of the method provided by the first aspect of the present disclosure.
By the above technical solution, each image acquisition position of the image acquisition equipment in the world coordinate system can be determined using the stitching range information input by the user on the human body image, and the image acquisition equipment is controlled to acquire images of the human body part at each image acquisition position. Each image acquisition position can thus be determined solely from the stitching range information input by the user on the human body image; compared with schemes in the related art in which the user must move the image acquisition equipment to determine the stitching start position and the stitching end position, the image acquisition method provided by the disclosure simplifies the preparation process for acquiring images, reduces the preparation time, and improves image acquisition efficiency. Moreover, because the user does not need to move the image acquisition equipment, image acquisition becomes more intelligent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
fig. 1 is a diagram illustrating a method for automatically acquiring an image according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image acquisition method according to an exemplary embodiment.
Fig. 3 is a flowchart illustrating a method of determining each image capture location of an image capture device in a world coordinate system according to an exemplary embodiment.
Fig. 4 is a diagram illustrating image capture task partitioning according to an exemplary embodiment.
FIG. 5 is a flow chart illustrating another method for determining each image capture location of an image capture device in a world coordinate system in accordance with an exemplary embodiment.
FIG. 6 is a flow chart illustrating another image acquisition method according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a correspondence of a detection region of an image capturing device to a human body part according to an exemplary embodiment.
Fig. 8 is a block diagram illustrating an image capturing device according to an exemplary embodiment.
FIG. 9 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
Before describing the embodiments of the present disclosure, a description will be given of a process of acquiring an image in the related art. The method for acquiring the image in the related art mainly includes a method for manually acquiring the image and a method for automatically acquiring the image.
In the method for manually acquiring images, a doctor needs to move the bulb tube and the detector to acquire multiple frames of local images at different positions, and then stitch the local images into the one frame of panoramic image required. For example, if a doctor needs a thoracic-lumbar image of a patient, it is formed by acquiring and stitching two partial images. Specifically, the doctor first positions the patient properly, then adjusts the bulb tube and the detector according to the patient's thoracic vertebrae so that they move to a first suitable position (the best position for imaging the thoracic vertebrae), and acquires a first frame of image. The doctor then adjusts the bulb tube and the detector according to the patient's lumbar vertebrae so that they move to a second suitable position (the best position for imaging the lumbar vertebrae), acquires a second frame of image, and finally stitches the two frames into the thoracic-lumbar image. In this method the doctor must therefore estimate each frame's acquisition position. Since these positions are determined empirically, their accuracy is poor, and the quality of the resulting thoracic-lumbar image suffers. Moreover, the doctor has to move the bulb tube and the detector back and forth to find the proper positions, which is time consuming.
In the method for automatically acquiring images, an acquisition scheme is usually established in advance, for example, an Anatomical Programmed Radiography (APR) protocol for thoracic vertebrae-lumbar vertebrae, cervical vertebrae-thoracic vertebrae, and the like. And then, after selecting a proper APR protocol, performing motion positioning on the bulb tube and the detector according to the APR protocol to determine the splicing starting position and the splicing ending position. Determining the number of images to be acquired according to the splicing starting position, the splicing ending position and the preset overlapping area of two adjacent frames of images, calculating the acquisition position of each image, controlling the bulb tube and the detector to move to each position to acquire the images, and finally splicing the acquired multi-frame images. Illustratively, assuming the physician selects the thoracic-lumbar APR protocol, first, the physician manually adjusts the bulb and probe to determine a first suitable position, and again manually adjusts the bulb and probe to determine a second suitable position, and then, the first and second suitable positions are taken as the stitching start position and the stitching end position. Then, according to the stitching start position, the stitching end position, and the overlapping area, three frames of images to be acquired, such as the image shown on the left side in fig. 1, are determined. Finally, the three frames of images in the left image of fig. 1 are stitched to obtain the image shown in the right image of fig. 1.
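The number of images to be acquired follows directly from the stitching range and the preset overlap. The snippet below reproduces the three-frame example of Fig. 1 under an assumed 100 cm thoracic-lumbar range and an assumed 5 cm overlap for the 43 cm flat panel detector (both numbers are illustrative, not from the patent):

```python
import math

def frames_needed(range_cm, detector_cm, overlap_cm):
    # Number of local images needed to cover range_cm when each frame
    # spans detector_cm and adjacent frames share overlap_cm.
    if range_cm <= detector_cm:
        return 1
    return math.ceil((range_cm - detector_cm) / (detector_cm - overlap_cm)) + 1
```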
Therefore, the method for automatically acquiring the image saves the time for positioning the ball tube and the detector to a certain extent. However, the doctor is required to manually move the bulb tube and the detector to determine the splicing start position and the splicing end position, so that the preparation process for obtaining the image is complicated, the time consumption is relatively long, and the image obtaining efficiency is low.
In view of this, the present disclosure provides an image capturing method, an image capturing apparatus, a readable storage medium, and an image capturing device, so as to improve the image capturing efficiency.
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Fig. 2 is a flowchart illustrating an image acquisition method according to an exemplary embodiment. As shown in fig. 2, the method may include steps 201 to 204.
In step 201, stitching range information input by a user on a human body image is acquired.
The human body image comprises a human body part to be subjected to image acquisition, and the splicing range information is used for indicating the range of the human body part to be subjected to image acquisition on the human body image. For example, the human body part to be subjected to image acquisition is a thoracic vertebra-lumbar vertebra part, and the splicing range is a range of the thoracic vertebra-lumbar vertebra part on the human body image.
The embodiments of the user inputting the stitching range information on the human body image include, but are not limited to, the following two types:
in a first embodiment, the human body image is displayed in the human-computer interaction interface, and the user marks the range of the human body part to be subjected to image acquisition in the human body image through a mouse, or the human-computer interaction interface is a touch screen, and the user marks the range of the human body part to be subjected to image acquisition in the human body image through touching the touch screen. In this way, after the user inputs the stitching range information on the human body image in the human-computer interaction interface, correspondingly, the electronic device (e.g., the image acquisition device) executing the image acquisition method may acquire the stitching range information input on the human body image by the user based on a user behavior recognition technology in the related art.
In a second embodiment, the user directly marks the range of the human body part to be subjected to image acquisition in the human body image by using a preset mark, and the range is the splicing range information. Then, the human body image with the label is input into an electronic device (e.g., an image capturing device) executing the image obtaining method, and accordingly, the electronic device (e.g., the image capturing device) executing the image obtaining method can obtain the splicing range information input on the human body image by the user based on an image recognition technology.
In step 202, each image capturing position of the image capturing device in the world coordinate system is determined according to the stitching range information.
It should be noted that, in general, the acquisition range of the image capturing device is larger than the detection range of the detector; therefore, in the present disclosure, the human body image is captured by the image capturing device (e.g., a camera), while the medical images to be stitched are acquired by the image acquisition equipment. In the present disclosure, the image acquisition equipment may be medical imaging equipment, which may include, but is not limited to, computed tomography (CT) equipment, magnetic resonance (MR) imaging equipment, ultrasound imaging equipment, nuclear medicine (PET) imaging equipment, a medical X-ray machine, and the like. In addition, in view of the imaging principle, each image acquisition position of the image acquisition equipment determined in the world coordinate system may be understood as a position to which the bulb tube and the detector are moved.
The stitching range acquired in step 201 is the stitching range in the image coordinate system, and in the actual image acquisition process, the image acquisition device is controlled to move to different positions to acquire images of different human body parts, so that in the present disclosure, each image acquisition position of the image acquisition device in the world coordinate system needs to be determined according to the stitching range information. Determining each image capture position of the image capture device in the world coordinate system according to the stitching range information will be described in detail below.
In step 203, the image acquisition equipment is controlled to acquire images of the human body part at each image acquisition position.
The images acquired by the image acquisition device at different positions are different. For example, the patient lies on the scanning bed according to the requirement of the doctor, the image acquired when the image acquisition device is located at the head position of the patient is the head image, and the image acquired when the image acquisition device is located at the femur position is the femur image. Therefore, by this step, the image acquired by the image acquisition device at each image acquisition position, that is, the image acquired by the image acquisition device for different human body parts, can be obtained. And the obtained multi-frame image is an image to be spliced.
In step 204, the images of the human body part acquired at each image acquisition position are stitched.
It should be noted that the multi-frame image collected in step 203 may be stitched by using a stitching method in the related art to obtain a stitched panoramic image.
By the above technical solution, each image acquisition position of the image acquisition equipment in the world coordinate system can be determined using the stitching range information input by the user on the human body image, and the image acquisition equipment is controlled to acquire images of the human body part at each image acquisition position. Each image acquisition position can thus be determined solely from the stitching range information input by the user on the human body image; compared with schemes in the related art in which the user must move the image acquisition equipment to determine the stitching start position and the stitching end position, the image acquisition method provided by the disclosure simplifies the preparation process for acquiring images, reduces the preparation time, and improves image acquisition efficiency. Moreover, because the user does not need to move the image acquisition equipment, image acquisition becomes more intelligent.
In order to facilitate better understanding of the image acquisition method provided by the present disclosure, the following is a full description of the image acquisition method.
First, a method of acquiring a human body image will be described.
As described above, the human body image is captured by an image capturing device (e.g., a camera), and the human body image includes the human body part to be imaged. To ensure that the image capturing device captures a human body image satisfying these conditions, in a preferred embodiment the image capturing device may be disposed in the ray exit direction of the beam limiter of the image acquisition equipment, the direction in which it captures images may be the same as the detection direction of the detector, and its center position may coincide with the center position of the detector. In this way, the image capturing device can capture an image including the human body part.
In addition, the closer the image capturing device is to the patient, the more easily the captured image is distorted, which prevents the user from accurately inputting the stitching range information on the human body image and subsequently prevents accurate identification of the human body part in the image. Therefore, in the present disclosure, before the human body image is captured, the image acquisition equipment may be reset: the gantry provided with the image capturing device, the bulb tube, and the detector is controlled to reset and move to a preset position, and then the human body image is captured. When the gantry is at the preset position, the body of the patient on the scanning bed is as centered as possible relative to the image capturing device, so that the image capturing device can capture an image of the patient's whole body.
The human body image including the human body part to be subjected to image acquisition can be acquired according to the above manner, and then the splicing range information input by the user on the human body image is acquired according to the method in step 201 in fig. 2.
The following describes in detail the manner of determining each image capturing position of the image capturing device in the world coordinate system according to the stitching range information in fig. 2.
Fig. 3 is a flowchart illustrating a method of determining each image capture location of an image capture device in a world coordinate system according to an exemplary embodiment. As shown in fig. 3, step 202 in fig. 2 may include the following steps.
In step 2021, an image capturing start position and an image capturing end position are determined according to the distance between the image capturing device and the detector, the current center position of the detector, the transformation relationship between the image coordinate system and the world coordinate system, and the stitching range information.
It should be noted that the image acquisition equipment includes an image capturing device and a detector. Assume that, when the patient lies on the scanning bed, the length direction of the human body is the y axis, the width direction is the x axis, and the thickness direction is the z axis. As described above, the center position of the image capturing device coincides with the center position of the detector, that is, the image capturing device and the detector are aligned along the x and y axes, and the distance between them is the distance along the z axis. In addition, during image acquisition the image acquisition equipment moves along the y axis; therefore, in the present disclosure, determining each image acquisition position essentially means determining the coordinate value of the image acquisition equipment along the y axis in the world coordinate system.
Wherein the transformation relationship between the image coordinate system and the world coordinate system can be predetermined by the following steps.
It should be noted that the distance SID between the image capturing device and the detector may vary, and the real-space size represented by one pixel in the human body image varies with it. The inventors found that pixels in the image coordinate system have a non-linear relationship with real sizes in the world coordinate system; in the present disclosure, the transformation parameters between the image coordinate system and the world coordinate system may be obtained by calibration with a target of known size.
(1) The method comprises the steps of controlling an image acquisition device to acquire a first image when the distance between the image acquisition device and a detector is a first distance, and controlling the image acquisition device to acquire a second image when the distance between the image acquisition device and the detector is a second distance, wherein the first distance is different from the second distance.
(2) Determine a first projection area of the detection area of the detector in the first image, and a first number of pixels included in the first projection area.
(3) Determine a second projection area of the detection area of the detector in the second image, and a second number of pixels included in the second projection area.
For example, when the distance between the image acquisition device and the detector is the first distance, the vertex positions of the detection region of the detector in the world coordinate system are first transformed into vertex positions in the camera coordinate system through a rigid-body transformation, those vertex positions are then transformed into vertex positions in the image coordinate system through perspective projection, and the first projection region of the detection region in the first image is determined from the vertex positions in the image coordinate system. In a similar manner, when the distance between the image acquisition device and the detector is the second distance, the second projection region of the detection region in the second image can be determined. In addition, an image-marker straight-line measurement method can be used to determine the first number of pixels included in the first projection region and the second number of pixels included in the second projection region.
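The rigid-body-then-perspective projection described above can be sketched as follows. The intrinsic matrix K, the rotation R, the translation t, and the detector geometry are all illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def project_detector_corners(corners_world, R, t, K):
    """Project 3-D detector corners into the image.

    corners_world: (N, 3) corner positions in the world coordinate system.
    R (3x3), t (3,): rigid-body transform from world to camera coordinates.
    K (3x3): camera intrinsic matrix used for the perspective projection.
    Returns an (N, 2) array of pixel coordinates.
    """
    corners_cam = (R @ corners_world.T).T + t       # rigid-body transformation
    pixels_h = (K @ corners_cam.T).T                # perspective projection (homogeneous)
    return pixels_h[:, :2] / pixels_h[:, 2:3]       # normalize by depth

# Toy setup (all values hypothetical): camera axis along z,
# a 400 mm x 400 mm detector located 1000 mm in front of the camera.
K = np.array([[1000.0, 0.0, 512.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
corners = np.array([[-200.0, -200.0, 1000.0],
                    [ 200.0, -200.0, 1000.0],
                    [ 200.0,  200.0, 1000.0],
                    [-200.0,  200.0, 1000.0]])
proj = project_detector_corners(corners, R, t, K)
```

The projection region in the image is then the quadrilateral spanned by the four projected corners.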
(4) Determine transformation parameters between the image coordinate system and the world coordinate system according to the detection area of the detector, the first number of pixels, the second number of pixels, the first distance, and the second distance.
For example, the transformation parameters between the image coordinate system and the world coordinate system may be determined by the following formulas:

f_A = (DetectorW / W1 − DetectorW / W2) / (SID1 − SID2)

f_B = DetectorW / W1 − f_A × SID1

where DetectorW is the physical size of the detection area of the detector, W1 and W2 are the first number of pixels and the second number of pixels respectively, SID1 and SID2 are the first distance and the second distance respectively, and f_A and f_B are the transformation parameters between the image coordinate system and the world coordinate system.
(5) Determine the transformation relationship between the image coordinate system and the world coordinate system according to the transformation parameters between the two coordinate systems.
After the transformation parameters f_A and f_B between the image coordinate system and the world coordinate system are determined in the above manner, the transformation relationship between the image coordinate system and the world coordinate system can be determined as:

Res = f_A × SID + f_B

where Res represents the physical size corresponding to a unit pixel, and SID represents the distance between the image acquisition device and the detector.
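Under the linear pixel-size model Res = f_A × SID + f_B, the two-distance calibration can be sketched as follows. The function names and the numeric detector width, pixel counts, and distances are hypothetical.

```python
def fit_pixel_size_model(detector_w, w1, w2, sid1, sid2):
    """Fit Res = f_A * SID + f_B from two calibration acquisitions.

    detector_w: known physical width of the detection area (mm).
    w1, w2: pixel widths of the detector projection at SID1 and SID2.
    """
    res1 = detector_w / w1              # mm per pixel at SID1
    res2 = detector_w / w2              # mm per pixel at SID2
    f_a = (res1 - res2) / (sid1 - sid2)
    f_b = res1 - f_a * sid1
    return f_a, f_b

def pixel_size(f_a, f_b, sid):
    """Physical size (mm) of one pixel at the given SID."""
    return f_a * sid + f_b

# Hypothetical calibration: a 430 mm detector spans 1000 px at SID = 1000 mm
# and 800 px at SID = 1250 mm.
f_a, f_b = fit_pixel_size_model(430.0, 1000, 800, 1000.0, 1250.0)
res = pixel_size(f_a, f_b, 1100.0)      # mm per pixel at SID = 1100 mm
```

Once fitted, `pixel_size` gives Res at any working distance without re-measuring the target.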
After the transformation relationship between the image coordinate system and the world coordinate system is determined, the image acquisition start position and the image acquisition end position can be determined according to the distance between the image acquisition device and the detector, the current center position of the detector, the transformation relationship, and the stitching range information. The stitching range information includes stitching start position information and stitching end position information in image coordinates.
For example, determine the y-axis coordinate value y1 of the pixel at the center of the human body image in the image coordinate system, the y-axis coordinate value y2 of the stitching start position represented by the stitching start position information, the physical size Res corresponding to a unit pixel at the current distance between the image acquisition device and the detector, and the y-axis coordinate value y3 of the current center position of the detector. The stitching start position in the world coordinate system, i.e., the image acquisition start position, is then obtained as (y1 − y2) × Res + y3. The stitching end position in the world coordinate system, i.e., the image acquisition end position, can be obtained in a similar manner.
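The boundary mapping (y1 − y2) × Res + y3 can be expressed directly in code. The helper name and the example numbers are illustrative.

```python
def stitch_boundary_to_world(y1, y2, res, y3):
    """Map a stitching boundary marked on the body image into world coordinates.

    y1: y coordinate (pixels) of the pixel at the centre of the body image.
    y2: y coordinate (pixels) of the marked stitching boundary.
    res: physical size (mm) of one pixel at the current SID.
    y3: y coordinate (mm) of the current detector centre in the world frame.
    Implements the formula (y1 - y2) * Res + y3 from the text.
    """
    return (y1 - y2) * res + y3

# Hypothetical numbers: boundary 400 px above the image centre, 0.5 mm/px,
# detector centre currently at y = 700 mm.
start1 = stitch_boundary_to_world(512, 112, 0.5, 700.0)
```

The stitching end position is obtained the same way with the end boundary's pixel coordinate in place of y2.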
In step 2022, each image acquisition position of the image acquisition equipment in the world coordinate system is determined according to the first preset overlapping area of two adjacent frames of images in the world coordinate system, the image acquisition start position, the image acquisition end position, and the detection area of the detector.
The first preset overlapping area may be set in advance based on the experience of medical staff, or may be a default value of the image stitching technique; for example, it may be 8 cm. The smaller the first preset overlapping area, the fewer images need to be acquired; the larger it is, the more images are acquired, but the higher the stitching accuracy. Therefore, the present disclosure does not specifically limit the first preset overlapping area; in practical applications it may be determined based on actual requirements.
Illustratively, first, the number of image capturing tasks between the image capturing start position start1 and the image capturing end position end1 is determined. By way of example, the number of image acquisition tasks may be determined by the following formula:
N1 = ceil((Length1 − MaxOverlap1) / (DetectorSize1 − MaxOverlap1))

where N1 is the number of image acquisition tasks, ceil() denotes the round-up function, Length1 denotes the range between the image acquisition start position and the image acquisition end position, MaxOverlap1 denotes the first preset overlapping area of two adjacent frames of images in the world coordinate system, and DetectorSize1 denotes the detection area of the detector.
Then, for each image acquisition task, the acquisition position of the image acquisition equipment in the world coordinate system is determined according to the following formula, thereby determining each image acquisition position of the image acquisition equipment in the world coordinate system:

P1(i) = start1 + DetectorSize1 / 2 + (i − 1) × (DetectorSize1 − Gap1)

where P1(i) represents the i-th image acquisition position in the world coordinate system, Gap1 represents the actual overlapping range of two adjacent frames of images in the world coordinate system described below, and the value range of i is [1, N1].
Therefore, each image acquisition position of the image acquisition equipment in the world coordinate system can be determined.
In addition, after the number of image acquisition tasks is determined, the actual overlapping range Gap1 of two adjacent frames of images in the world coordinate system can be further determined, for example by the following formula (each parameter has been explained above):

Gap1 = (N1 × DetectorSize1 − Length1) / (N1 − 1)
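The task-count, overlap, and position computations can be sketched together as follows. The formulas used here are reconstructions consistent with the surrounding description, under the assumptions that the preset overlap acts as a minimum required overlap and that each acquisition position is the detector centre of its task; the numeric inputs are hypothetical.

```python
import math

def plan_acquisitions(start, end, detector_size, max_overlap):
    """Divide [start, end] into overlapping acquisition tasks.

    Returns the task count N, the actual overlap Gap of adjacent frames,
    and the centre position P(i) of each task.
    """
    length = end - start
    # Smallest N whose frames can cover the range with at least max_overlap overlap.
    n = math.ceil((length - max_overlap) / (detector_size - max_overlap))
    # Spread the slack evenly: the actual overlap of adjacent frames.
    gap = (n * detector_size - length) / (n - 1) if n > 1 else 0.0
    positions = [start + detector_size / 2 + (i - 1) * (detector_size - gap)
                 for i in range(1, n + 1)]
    return n, gap, positions

# Hypothetical numbers: a 1000 mm stitch range, 430 mm detector,
# at least 80 mm preset overlap between adjacent frames.
n1, gap1, p1 = plan_acquisitions(0.0, 1000.0, 430.0, 80.0)
# Three tasks; the actual overlap widens so the frames span the range exactly.
```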
Fig. 4 is a diagram illustrating the division of image acquisition tasks according to an exemplary embodiment. As shown in fig. 4, three image acquisition tasks are divided between the image acquisition start position start1 and the image acquisition end position end1. Each image acquisition task corresponds to an image acquisition position; for example, the image acquisition positions corresponding to the three tasks are P1(1), P1(2), and P1(3). The actual overlapping range of two adjacent frames of images is Gap1.
In the embodiment shown in fig. 3, the stitching range information in the image coordinate system is first transformed into the image capturing start position and the image capturing end position in the world coordinate system, and then each image capturing position of the image capturing device in the world coordinate system is determined.
In another embodiment, each image capturing position of the image capturing device in the image coordinate system may be determined first, and then each image capturing position of the image capturing device in the world coordinate system may be determined by using the transformation relationship between the image coordinate system and the world coordinate system. Specifically, referring to fig. 5, fig. 5 is a flowchart illustrating another method for determining each image capturing position of an image capturing device in a world coordinate system according to an exemplary embodiment. As shown in fig. 5, step 202 in fig. 2 may include the following steps.
In step 2023, a third projection region of the detection region of the detector in the human body image, and a third number of pixels included in the third projection region are determined.
As indicated above, the third projection area of the detection area of the detector in the human body image can be determined using related techniques, and the third number of pixels can be determined using the image-marker straight-line measurement method.
In step 2024, each image capturing position of the image capturing device in the image coordinate system is determined according to the second preset overlapping area, the stitching start position, the stitching end position, and the third pixel number of the two adjacent frames of images in the image coordinate system.
For example, the number of image acquisition tasks in the image coordinate system can be determined by the following formula:

N2 = ceil((Length2 − MaxOverlap2) / (DetectorSize2 − MaxOverlap2))

where N2 is the number of image acquisition tasks (N2 is equal to N1), Length2 represents the range between the stitching start position and the stitching end position, MaxOverlap2 represents the second preset overlapping area of two adjacent frames of images in the image coordinate system, and DetectorSize2 represents the third number of pixels.
Then, for each image acquisition task, the acquisition position of the image acquisition equipment in the image coordinate system is determined according to the following formula, thereby determining each image acquisition position of the image acquisition equipment in the image coordinate system:

P2(j) = start2 + DetectorSize2 / 2 + (j − 1) × (DetectorSize2 − Gap2)

where start2 represents the stitching start position, P2(j) represents the j-th image acquisition position in the image coordinate system, Gap2 represents the actual overlapping range of two adjacent frames of images in the image coordinate system described below, and the value range of j is [1, N2].
Similarly, after the number of image acquisition tasks is determined, the actual overlapping range Gap2 of two adjacent frames of images in the image coordinate system can be further determined, for example by the following formula (each parameter has been explained above):

Gap2 = (N2 × DetectorSize2 − Length2) / (N2 − 1)
In step 2025, each image capturing position of the image capturing device in the world coordinate system is determined according to each image capturing position of the image capturing device in the image coordinate system.
Related techniques can be used to convert each image acquisition position of the image acquisition equipment in the image coordinate system into the corresponding image acquisition position in the world coordinate system. The present disclosure does not specifically limit this.
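One minimal way to perform this conversion is to reuse the pixel-size mapping Res at the current SID. The sign convention (offsets measured from the image centre, then shifted by the detector centre) and all numbers are assumptions, since the disclosure leaves the exact conversion to related techniques.

```python
def image_to_world(p2_positions, y_center_px, res, y_detector_center):
    """Convert acquisition positions from image coordinates (px) to world (mm).

    p2_positions: acquisition positions P2(j) in image coordinates.
    y_center_px: y coordinate (pixels) of the body-image centre.
    res: physical size (mm) of one pixel at the current SID.
    y_detector_center: current detector centre (mm) in the world frame.
    """
    return [(y_center_px - p) * res + y_detector_center for p in p2_positions]

# Hypothetical numbers: three positions in image coordinates, 0.5 mm/px,
# detector centre currently at y = 700 mm.
world_positions = image_to_world([112, 512, 912], 512, 0.5, 700.0)
```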
By adopting this technical scheme, each image acquisition position can be determined solely from the stitching range information input by the user on the human body image, which simplifies the preparation process for image acquisition and improves image acquisition efficiency.
In the present disclosure, after each image acquisition position of the image acquisition equipment in the world coordinate system is determined, the image acquisition equipment can be controlled to move to each image acquisition position to acquire an image. Specifically, the X-ray tube and the detector in the image acquisition equipment can be controlled to move to each image acquisition position to acquire an image. The tube and the detector may be moved in one direction, for example sequentially along the direction from the head to the feet of the human body, so that they reach each image acquisition position in turn. It should be noted that the tube and the detector need to remain centered with respect to each other during the movement. In addition, in some cases the tube may be rotated by a certain angle instead of being translated; in this case, the tube and the detector are kept centered while the detector is moved.
In addition, because different human body parts have different tissue structures, the required exposure dose differs between images. For example, the exposure doses required for the cervical vertebrae and the lumbar vertebrae differ greatly, and an inappropriate dose may lead to overexposure or underexposure. By setting specific, part-dependent image acquisition parameters, a better image can be obtained at a lower radiation dose level. Therefore, in the present disclosure, when each frame of image is acquired, the image acquisition parameters corresponding to that frame, i.e., the image acquisition parameters corresponding to each image acquisition position, are determined. The image acquisition parameters may include a tube voltage parameter, a tube current parameter, an exposure time parameter, and the like.
Specifically, as shown in fig. 6, the image acquiring method provided by the present disclosure may further include step 205.
In step 205, image capturing parameters corresponding to each image capturing position are determined.
In one embodiment, the image acquisition parameters corresponding to each image acquisition position may be determined using the image acquisition parameters set for each frame of local image in the APR protocol. For example, as shown in fig. 1, the thoracic-lumbar APR protocol sets three frames of images to be acquired (denoted image A, image B, and image C, respectively), and the corresponding image acquisition parameters are preset for each frame of local image. According to the manner of determining the image acquisition positions described above, the acquisition positions of image A, image B, and image C can be determined. Thus, the image acquisition parameter corresponding to the acquisition position of image A is the image acquisition parameter corresponding to image A, and likewise for image B and image C.
However, because human body sizes differ and the designated stitching start and end positions vary, the human body parts corresponding to the images preset in the APR protocol are not fully consistent with the human body parts corresponding to the images actually acquired. As a result, the image acquisition parameters determined according to the APR protocol may not match the actually imaged body parts, leading to poor image quality in which the body part cannot be displayed clearly enough for the user to review.
Therefore, in another embodiment, determining the image acquisition parameters corresponding to each image acquisition position may further include the following steps. First, the projection area of the detection area of the image acquisition equipment at each image acquisition position in the human body image is determined.
It should be noted that the size of the detection area of the image acquisition equipment is constant, but the region it covers differs between positions. Therefore, in the present disclosure, the projection area of the detection area of the image acquisition equipment at each image acquisition position in the human body image is determined. For example, as shown in fig. 7, assuming that the determined image acquisition positions include position 1, position 2, and position 3, the projection area of the detection area at position 1 in the human body image is projection area 1 (the solid-line rectangular box in fig. 7), the projection area at position 2 is projection area 2 (a dotted-line rectangular box in fig. 7), and the projection area at position 3 is projection area 3 (a dotted-line rectangular box in fig. 7).
Then, the human body part included in each projection area is identified. For example, the body parts may be identified by a human body part recognition model, which is obtained by training a deep residual convolutional neural network.
For example, the human body image may be input into the human body part recognition model to obtain an output image, which is a grayscale image with human body part markers. From such a grayscale image, the approximate locations of the head, neck, shoulders, upper torso center, pelvis, lower extremities, and so on can be determined. As shown in fig. 7, different human body parts can be characterized by different gray levels; Table 1 shows the markers corresponding to different body parts. In this way, it can be identified that projection region 1 includes the head and the trunk, projection region 2 includes the trunk, and projection region 3 includes the trunk and the femur.
TABLE 1

Human body part     Marking
Head                1
Trunk               2
Upper limb          3
Femur               4
Tibia/fibula        5
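Given a label map of the kind Table 1 describes, the body parts inside a projection region can be read off directly. The region bounds and the toy label map below are illustrative.

```python
import numpy as np

# Markers from Table 1.
BODY_PART_LABELS = {1: "head", 2: "trunk", 3: "upper limb", 4: "femur", 5: "tibia/fibula"}

def parts_in_region(label_map, y0, y1, x0, x1):
    """Return the body parts present inside a projection region.

    label_map: 2-D array produced by the recognition model, one Table 1
    marker per pixel (0 = background).
    (y0, y1, x0, x1): bounds of the projection region in image coordinates.
    """
    region = label_map[y0:y1, x0:x1]
    return sorted(BODY_PART_LABELS[v] for v in np.unique(region) if v in BODY_PART_LABELS)

# Toy label map (hypothetical): head occupies the top rows, trunk the rest.
label_map = np.zeros((100, 40), dtype=int)
label_map[0:30, :] = 1   # head
label_map[30:, :] = 2    # trunk
parts = parts_in_region(label_map, 0, 60, 0, 40)   # region spanning head and trunk
```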
Then, according to the human body parts included in each projection region, the anatomically programmed radiography (APR) site corresponding to each projection region is determined. The APR site corresponding to each projection region can be understood as the APR site corresponding to the respective image acquisition position.
For example, Table 2 shows the correspondence between APR sites and human body parts, from which the APR site corresponding to each projection region is determined. For example, the APR site corresponding to projection region 1 is the cervical spine in the AP and lateral views, the APR site corresponding to projection region 2 is the chest in the AP and lateral views, and the APR site corresponding to projection region 3 is the lumbar spine in the AP and lateral views.
TABLE 2

APR site                          Marking
Skull, AP and lateral             1
Cervical spine, AP and lateral    1, 2
Chest, AP and lateral             2
Lumbar spine, AP and lateral      2, 4
Femur, AP and lateral             4
Knee joint, AP and lateral        4, 5
Tibia/fibula, AP and lateral      5
Finally, the image acquisition parameters corresponding to each image acquisition position are determined according to the correspondence between APR sites and image acquisition parameters, which may be established in advance using related techniques.
For example, Table 3 shows the correspondence between APR sites and image acquisition parameters. Through this correspondence, the image acquisition parameters corresponding to each APR site, and hence the image acquisition parameters corresponding to each image acquisition position, can be determined.
TABLE 3
[Table 3 is rendered as an image in the original publication; it lists, for each APR site, the corresponding image acquisition parameters such as tube voltage, tube current, exposure time, and AEC feedback parameters.]
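The two table lookups (body-part markers → APR site → acquisition parameters) can be chained as plain dictionaries. The Table 2 entries follow the text, while the Table 3 parameter names and values below are purely hypothetical, because the original table is an image.

```python
# Table 2: marker sets -> APR site (as listed in the text).
APR_SITES = {
    frozenset({1}): "skull AP/lateral",
    frozenset({1, 2}): "cervical spine AP/lateral",
    frozenset({2}): "chest AP/lateral",
    frozenset({2, 4}): "lumbar spine AP/lateral",
    frozenset({4}): "femur AP/lateral",
    frozenset({4, 5}): "knee joint AP/lateral",
    frozenset({5}): "tibia/fibula AP/lateral",
}

# Stand-in for Table 3 (hypothetical values; the real table is not recoverable).
ACQUISITION_PARAMS = {
    "chest AP/lateral": {"tube_kv": 110, "tube_ma": 200, "exposure_ms": 10},
    "lumbar spine AP/lateral": {"tube_kv": 85, "tube_ma": 320, "exposure_ms": 50},
}

def params_for_region(markers):
    """Map the markers found in a projection region to an APR site and its parameters."""
    site = APR_SITES[frozenset(markers)]
    return site, ACQUISITION_PARAMS.get(site)

# Projection region 3 contains trunk (2) and femur (4) -> lumbar spine.
site, params = params_for_region({2, 4})
```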
Accordingly, as shown in fig. 6, step 203 in fig. 2 may specifically include step 2031.
In step 2031, for each image capturing position, the image capturing device is controlled to capture an image of the human body at the image capturing position according to the image capturing parameters corresponding to the image capturing position.
By adopting this technical scheme, the APR site corresponding to each projection region, i.e., to each image acquisition position, is determined according to the recognition result of the human body parts in the human body image. The actually acquired body part is therefore consistent with the APR site, which ensures the accuracy of the image acquisition parameters determined for each image acquisition position and improves image quality.
In addition, to ensure the accuracy of image acquisition, the present disclosure may also employ the Automatic Exposure Control (AEC) technique of the image acquisition equipment. The AEC ionization chamber field is selected at a position in the human body image according to the region of interest (e.g., the lungs). As shown in Table 3, the image acquisition parameters may also include AEC feedback parameters. AEC implementations rely on physical or virtual ionization chambers; an ionization chamber generally consists of three field sensing areas: L at the upper left, C in the middle, and R at the upper right.
Compared with the prior art, a virtual ionization chamber allows more reasonable control. Because body lengths differ when the acquisition tasks are divided, the acquired tasks cannot always match the APR protocol exactly, so a selected feedback area may sit too high or too low and yield inaccurate acquisition. The feedback area can therefore be displayed on the human body image in the human-computer interaction interface, so that the user can clearly see whether the feedback area in use is appropriate. Moreover, the software approach is not limited to the three physical feedback areas L, R, and C, so more reasonable positions can be selected more flexibly. In addition, the user can freely change the position and size of the ionization chamber by a simple drag operation in the human-computer interaction interface.
Similarly, because different human body parts have different tissue structures, the contrast, noise characteristics, enhancement, and so on of their X-ray images also differ. In current X-ray image processing, pyramids based on multi-frequency decomposition are a commonly used technique, and the processing is affected by noise: the frequencies to be emphasized or suppressed differ between body parts. For example, fine structures must be observed in the knee joint, so higher coefficients need to be maintained at frequency bands 1 and 2 (the coefficients for bands 1 and 2 shown in Table 3 are 0.5 and 0.25), whereas the lumbar spine is easily degraded by noise, so mid-frequency enhancement and balance are emphasized to avoid amplifying the noise excessively. Therefore, in the present disclosure, the image processing parameters corresponding to each image acquisition position also need to be determined. The image processing parameters may include at least one of the following: amplification parameters, noise reduction parameters, contrast parameters, and image enhancement parameters.
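A minimal sketch of per-band frequency weighting in the spirit of multi-frequency decomposition: this uses simple difference-of-blur bands rather than a true resampled pyramid, and the kernel and gain convention are assumptions, not the coefficient convention of Table 3.

```python
import numpy as np

def _blur(img):
    """Separable 5-tap binomial low-pass filter (a common pyramid kernel)."""
    k = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)

def band_enhance(img, gains):
    """Split the image into difference-of-blur frequency bands, weight each
    band, and recombine. gains[0] applies to the finest band; g > 0 boosts,
    g < 0 suppresses, g = 0 leaves the band unchanged.
    """
    bands, base = [], img.astype(float)
    for _ in gains:
        blurred = _blur(base)
        bands.append(base - blurred)   # one frequency band
        base = blurred                 # residual low-pass image
    out = base.copy()
    for band, g in zip(bands, gains):
        out += (1.0 + g) * band
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32))
enhanced = band_enhance(img, [0.5, 0.25])   # emphasize the two finest bands
```

With all gains zero the bands recombine to the original image, which makes the decomposition easy to sanity-check.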
Specifically, as shown in fig. 6, the image acquiring method provided by the present disclosure may further include step 206.
In step 206, image processing parameters corresponding to each image capturing location are determined.
For example, the image processing parameters corresponding to each image acquisition position may be determined by referring to the manner of determining the image acquisition parameters in step 205, together with the correspondence between APR sites and image processing parameters shown in Table 3. The details are not repeated here.
In step 207, for each image capturing position, the image captured at the image capturing position is processed according to the image processing parameters corresponding to the image capturing position.
It should be noted that, in the present disclosure, an image may be processed according to the image processing parameters corresponding to its acquisition position immediately after it is acquired at that position; alternatively, all images to be acquired may be acquired first, and each image then processed according to the image processing parameters corresponding to its acquisition position. The present disclosure does not specifically limit this.
Accordingly, step 204 in fig. 2 may specifically include step 2041.
In step 2041, the processed images are stitched.
By adopting this technical scheme, the accuracy of the image processing parameters determined for each image acquisition position is ensured. Processing the images with accurate image processing parameters improves image quality, so the stitched image displays the human body parts more clearly and is easier for the user to review.
Based on the same inventive concept, the present disclosure also provides an image acquisition apparatus. Fig. 8 is a block diagram illustrating an image acquisition apparatus according to an exemplary embodiment. As shown in fig. 8, the image acquisition apparatus 800 may include:
an obtaining module 801 configured to obtain stitching range information input by a user on a human body image, where the human body image includes a human body part to be subjected to image acquisition, and the stitching range information is used to indicate a range of the human body part to be subjected to image acquisition on the human body image;
a first determining module 802, configured to determine each image capturing position of the image capturing device in the world coordinate system according to the stitching range information;
a first control module 803 configured to control the image capturing device to capture an image of the human body part at each of the image capturing positions;
a stitching module 804 configured to stitch the images of the human body part acquired at each of the image acquisition positions.
Optionally, the image acquisition equipment includes an image acquisition device and a detector; the first determining module 802 may include:
the first determining submodule is configured to determine an image acquisition starting position and an image acquisition ending position according to the distance between the image acquisition device and the detector, the current central position of the detector, the transformation relation between an image coordinate system and a world coordinate system and the splicing range information;
the second determining submodule is configured to determine each image acquisition position of the image acquisition equipment in the world coordinate system according to the first preset overlapping area of the two adjacent frames of images in the world coordinate system, the image acquisition starting position and the image acquisition ending position, and the detection area of the detector.
Optionally, the center of the photographing field of view of the image capturing apparatus and the center of the photographing field of view of the detector coincide, and the apparatus may further include:
a second control module configured to control the image acquisition device to acquire a first image if a distance between the image acquisition device and the detector is a first distance, and to control the image acquisition device to acquire a second image if the distance between the image acquisition device and the detector is a second distance, wherein the first distance and the second distance are different;
a second determination module configured to determine a first projection area of a detection area of the detector in the first image and a first number of pixels comprised by the first projection area;
a third determination module configured to determine a second projection area of the detection area of the detector in the second image and a second number of pixels comprised by the second projection area;
a fourth determination module configured to determine a transformation parameter between an image coordinate system and a world coordinate system according to the detection area of the detector, the first number of pixels, the second number of pixels, and the first distance and the second distance;
a fifth determining module configured to determine a transformation relation between the image coordinate system and a world coordinate system according to the transformation parameters between the image coordinate system and the world coordinate system.
Optionally, the image acquisition device comprises a detector; the splicing range information comprises a splicing starting position and a splicing ending position; the first determining module 802 may include:
a third determination submodule configured to determine a third projection area of the detection area of the detector in the human body image, and a third number of pixels included in the third projection area;
a fourth determining submodule configured to determine each image capturing position of the image capturing device in the image coordinate system according to a second preset overlapping area of two adjacent frames of images in the image coordinate system, the stitching start position and the stitching end position, and the third pixel number;
and the fifth determining submodule is configured to determine each image acquisition position of the image acquisition equipment in a world coordinate system according to each image acquisition position of the image acquisition equipment in the image coordinate system.
Optionally, the apparatus may further include:
a sixth determining module configured to determine an image capturing parameter corresponding to each of the image capturing positions;
the first control module 803 is configured to control the image capturing device to capture an image of the human body at each of the image capturing positions according to the image capturing parameters corresponding to the image capturing positions.
Optionally, the sixth determining module may include:
a sixth determining submodule configured to determine a projection area of a detection area of the image capturing device at each image capturing position in the human body image;
the identification submodule is configured to identify the human body part included in each projection area;
a seventh determining submodule configured to determine an anatomical procedural radiography portion corresponding to each of the projection regions according to a human body portion included in each of the projection regions;
and the eighth determining submodule is configured to determine the image acquisition parameters corresponding to each image acquisition position according to the corresponding relation between the anatomical procedure type X-ray photographing part and the image acquisition parameters.
Optionally, the identification submodule is configured to identify, through a human body part identification model, a human body part included in each of the projection regions.
Optionally, the apparatus may further include:
a seventh determining module configured to determine an image processing parameter corresponding to each of the image capturing positions, wherein the image processing parameter includes at least one of: a magnification parameter, a noise reduction parameter, a contrast parameter, and an image enhancement parameter;
an image processing module configured to, for each image capturing position, process the image captured at that position according to the image processing parameters corresponding to that position;
and a stitching module configured to stitch the processed images.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 9 is a block diagram illustrating an electronic device in accordance with an example embodiment. As shown in fig. 9, the electronic device 900 may be an image capturing device, which may include: a processor 901 and a memory 902. The electronic device 900 may also include one or more of a multimedia component 903, an input/output (I/O) interface 904, and a communications component 905.
The processor 901 is configured to control the overall operation of the electronic device 900 so as to complete all or part of the steps of the image acquisition method. The memory 902 is used to store various types of data to support the operation of the electronic device 900, such as instructions for any application or method operating on the electronic device 900 and application-related data, for example contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 902 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, or a magnetic or optical disk.
The multimedia component 903 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 902 or transmitted through the communication component 905. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 904 provides an interface between the processor 901 and other interface modules such as a keyboard, a mouse, or buttons, which may be virtual buttons or physical buttons. The communication component 905 is used for wired or wireless communication between the electronic device 900 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, 4G, NB-IoT, eMTC, 5G, or a combination of one or more of them, which is not limited herein. The corresponding communication component 905 may accordingly include a Wi-Fi module, a Bluetooth module, an NFC module, and so on.
In an exemplary embodiment, the electronic Device 900 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components for executing the image capturing method.
In another exemplary embodiment, a computer readable storage medium including program instructions for implementing the steps of the image acquisition method described above when executed by a processor is also provided. For example, the computer readable storage medium may be the memory 902 including the program instructions, which can be executed by the processor 901 of the electronic device 900 to implement the image capturing method.
In another exemplary embodiment, a computer program product is also provided, which comprises a computer program executable by a programmable apparatus, the computer program having code portions for performing the image acquisition method described above when executed by the programmable apparatus.
The preferred embodiments of the present disclosure are described above in detail with reference to the accompanying drawings; however, the present disclosure is not limited to the specific details of the above embodiments. Various simple modifications may be made to the technical solution of the present disclosure within the scope of its technical concept, and all such simple modifications fall within the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner without departing from the scope of the invention. In order to avoid unnecessary repetition, various possible combinations will not be separately described in this disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (11)

1. An image acquisition method, comprising:
acquiring splicing range information input by a user on a human body image, wherein the human body image comprises a human body part to be subjected to image acquisition, and the splicing range information is used for indicating the range of the human body part to be subjected to image acquisition on the human body image;
determining each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information;
controlling the image acquisition equipment to respectively acquire images of the human body part at each image acquisition position;
and splicing the images of the human body part acquired at each image acquisition position.
2. The method of claim 1, wherein the image acquisition equipment comprises an image acquisition device and a detector, and the determining each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information comprises:
determining an image acquisition starting position and an image acquisition ending position according to the distance between the image acquisition device and the detector, the current central position of the detector, the transformation relation between an image coordinate system and a world coordinate system and the splicing range information;
and determining each image acquisition position of the image acquisition equipment in the world coordinate system according to a first preset overlapping area of two adjacent frames of images in the world coordinate system, the image acquisition starting position, the image acquisition ending position and the detection area of the detector.
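The stepping logic of claim 2 can be sketched as follows. This is a hypothetical illustration only: the patent defines no API, and the function name, parameters, and numeric values in the test are all assumptions. Each exposure advances by the detection-area length minus the preset overlap, and enough frames are taken to cover the range from the starting to the ending position.

```python
import math

def acquisition_positions(start_mm, end_mm, detector_len_mm, overlap_mm):
    """Centers of each exposure along the stitching direction (world mm)."""
    step = detector_len_mm - overlap_mm              # advance per frame
    span = abs(end_mm - start_mm)
    if span <= detector_len_mm:
        n = 1                                        # one frame covers the whole range
    else:
        n = math.ceil((span - detector_len_mm) / step) + 1
    direction = 1.0 if end_mm >= start_mm else -1.0
    first = start_mm + direction * detector_len_mm / 2
    return [first + direction * i * step for i in range(n)]
```

For example, a 1000 mm range imaged with a 430 mm detector and a 50 mm overlap needs three exposures centered at 215, 595, and 975 mm.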
3. The method according to claim 2, wherein the center of the photographing field of view of the image acquisition device coincides with the center of the detector, and the transformation relationship between the image coordinate system and the world coordinate system is determined by:
controlling the image acquisition device to acquire a first image under the condition that the distance between the image acquisition device and the detector is a first distance, and controlling the image acquisition device to acquire a second image under the condition that the distance between the image acquisition device and the detector is a second distance, wherein the first distance and the second distance are different;
determining a first projection area of a detection area of the detector in the first image and a first number of pixels comprised by the first projection area;
determining a second projection area of the detection area of the detector in the second image and a second number of pixels comprised by the second projection area;
determining a transformation parameter between an image coordinate system and a world coordinate system according to the detection area of the detector, the first pixel quantity, the second pixel quantity, the first distance and the second distance;
and determining the transformation relation between the image coordinate system and the world coordinate system according to the transformation parameters between the image coordinate system and the world coordinate system.
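The two-distance calibration of claim 3 can be sketched as below, under an assumed linear model: the detector's detection-area width of W millimetres projects to n1 pixels at distance d1 and n2 pixels at distance d2, so the millimetre-per-pixel scale varies linearly with the camera-detector distance. All names, and the linear model itself, are illustrative assumptions rather than the patent's disclosed formula.

```python
def calibrate(width_mm, n1_px, d1_mm, n2_px, d2_mm):
    """Fit scale(d) = k*d + b, the mm-per-pixel scale as a function of distance d."""
    s1 = width_mm / n1_px              # mm per pixel at distance d1
    s2 = width_mm / n2_px              # mm per pixel at distance d2
    k = (s2 - s1) / (d2_mm - d1_mm)    # change of scale per mm of distance
    b = s1 - k * d1_mm
    return k, b                        # the transformation parameters

def image_to_world_mm(pixels, distance_mm, k, b):
    """Map an image-coordinate displacement (pixels) to world millimetres."""
    return pixels * (k * distance_mm + b)
```

With the parameters (k, b) fixed, any displacement measured in the camera image can be converted to a world-coordinate displacement at the current camera-detector distance.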
4. The method of claim 1, wherein the image acquisition equipment comprises a detector, and the splicing range information comprises a splicing starting position and a splicing ending position;
the determining each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information comprises:
determining a third projection area of the detection area of the detector in the human body image and a third pixel number included in the third projection area;
determining each image acquisition position of the image acquisition equipment in an image coordinate system according to a second preset overlapping area, the splicing starting position, the splicing ending position and the third pixel number of two adjacent frames of images in the image coordinate system;
and determining each image acquisition position of the image acquisition equipment in a world coordinate system according to each image acquisition position of the image acquisition equipment in the image coordinate system.
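Claim 4's image-coordinate variant can be sketched as follows (hypothetical names and values; the conversion to world coordinates assumes a known mm-per-pixel scale such as the one calibrated in claim 3): step through the body image from the splicing start toward the splicing end, advancing by the third pixel number minus the preset overlap.

```python
def positions_in_image(start_px, end_px, proj_px, overlap_px):
    """Frame centers in the body image, stepping by (projection - overlap) pixels."""
    step = proj_px - overlap_px
    centers, c = [], start_px + proj_px // 2
    while c - proj_px // 2 < end_px:   # continue until a frame's span reaches the end
        centers.append(c)
        c += step
    return centers

def to_world_mm(centers_px, mm_per_px, origin_mm=0.0):
    """Convert image-coordinate centers to world coordinates with a known scale."""
    return [origin_mm + c * mm_per_px for c in centers_px]
```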
5. The method of claim 1, further comprising:
determining image acquisition parameters corresponding to each image acquisition position;
the controlling the image acquisition equipment to respectively acquire images of the human body part at each image acquisition position comprises:
for each image acquisition position, controlling the image acquisition equipment, at that image acquisition position, to acquire images of the human body part according to the image acquisition parameters corresponding to that image acquisition position.
6. The method of claim 5, wherein said determining respective image acquisition parameters for each of said image acquisition locations comprises:
determining a projection area of a detection area of the image acquisition equipment at each image acquisition position in the human body image;
identifying the human body part included in each projection area;
determining an anatomically programmed radiography (APR) part corresponding to each projection area according to the human body part included in that projection area;
and determining the image acquisition parameters corresponding to each image acquisition position according to the correspondence between APR parts and image acquisition parameters.
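The lookup in claim 6 amounts to a table from recognized body part to exposure parameters. The sketch below is illustrative only: the table contents, the parameter names, and the stand-in `identify_part` callable (which takes the place of the body-part recognition model of claim 7) are all assumptions, not values disclosed by the patent.

```python
# Hypothetical APR table: recognized part -> acquisition parameters.
APR_TABLE = {
    "chest": {"kv": 110, "mas": 2.5},
    "lumbar_spine": {"kv": 85, "mas": 25.0},
}

def params_for_positions(projection_regions, identify_part):
    """Look up acquisition parameters for the part identified in each region."""
    return [APR_TABLE[identify_part(region)] for region in projection_regions]
```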
7. The method of claim 6, wherein the identifying the respective body part included in each of the projection regions comprises:
and identifying the human body part included in each projection area through a human body part identification model.
8. The method of claim 1, further comprising:
determining an image processing parameter corresponding to each image acquisition position, wherein the image processing parameter includes at least one of the following: a magnification parameter, a noise reduction parameter, a contrast parameter, and an image enhancement parameter;
for each image acquisition position, processing the image acquired at that image acquisition position according to the image processing parameters corresponding to that image acquisition position;
wherein the stitching the images of the human body part acquired at each of the image acquisition positions comprises:
stitching the processed images.
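The process-then-stitch flow of claim 8 can be sketched as follows, assuming images are 2-D NumPy arrays stitched along axis 0 with a fixed pixel overlap that is averaged between neighbours. The parameter names and the specific processing steps (a contrast stretch and a gain stand-in for image enhancement) are illustrative assumptions, not the patent's algorithms.

```python
import numpy as np

def process(img, params):
    """Apply per-position processing; only a contrast and a gain step are sketched."""
    out = img.astype(np.float64)
    if "contrast" in params:
        mean = out.mean()
        out = (out - mean) * params["contrast"] + mean
    if "gain" in params:                        # stand-in for image enhancement
        out = out * params["gain"]
    return out

def stitch(images, overlap_px):
    """Stitch along axis 0, averaging the preset overlap between neighbours."""
    result = images[0]
    for nxt in images[1:]:
        blended = (result[-overlap_px:] + nxt[:overlap_px]) / 2.0
        result = np.vstack([result[:-overlap_px], blended, nxt[overlap_px:]])
    return result
```

A real system would register the overlap (e.g. by correlation) before blending; a plain average is used here only to keep the sketch short.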
9. An image capturing apparatus, comprising:
an acquisition module configured to acquire splicing range information input by a user on a human body image, wherein the human body image comprises a human body part to be subjected to image acquisition, and the splicing range information is used for indicating the range of the human body part to be subjected to image acquisition on the human body image;
the first determining module is configured to determine each image acquisition position of the image acquisition equipment in a world coordinate system according to the splicing range information;
the first control module is configured to control the image acquisition equipment to acquire images of the human body part at each image acquisition position respectively;
a stitching module configured to stitch the images of the human body part acquired at each of the image acquisition positions.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
11. An image acquisition apparatus, comprising:
a memory having a computer program stored thereon;
a processor for executing the computer program in the memory to carry out the steps of the method of any one of claims 1 to 8.
CN202010555725.2A 2020-06-17 2020-06-17 Image acquisition method and device, readable storage medium and image acquisition equipment Pending CN111815514A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010555725.2A CN111815514A (en) 2020-06-17 2020-06-17 Image acquisition method and device, readable storage medium and image acquisition equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010555725.2A CN111815514A (en) 2020-06-17 2020-06-17 Image acquisition method and device, readable storage medium and image acquisition equipment

Publications (1)

Publication Number Publication Date
CN111815514A 2020-10-23

Family

ID=72845947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010555725.2A Pending CN111815514A (en) 2020-06-17 2020-06-17 Image acquisition method and device, readable storage medium and image acquisition equipment

Country Status (1)

Country Link
CN (1) CN111815514A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101112315A * 2007-08-24 2008-01-30 Zhuhai Youtong Technology Co., Ltd. Automatic matching and stitching method for X-ray human body fluoroscopic images
CN104414660A * 2013-08-29 2015-03-18 Shenzhen Lanyun Industrial Co., Ltd. DR image obtaining and splicing method and system
CN106023078A * 2016-05-18 2016-10-12 Nanjing Perlove Medical Equipment Co., Ltd. DR image splicing method
US20190197713A1 * 2017-12-27 2019-06-27 Interdigital Ce Patent Holdings Method and apparatus for depth-map estimation
CN110507338A * 2019-08-30 2019-11-29 Neusoft Medical Systems Co., Ltd. Positioning method, device, equipment and digital X-ray radiography system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU Guohua: "HALCON Digital Image Processing", Xidian University Press, p. 288 *

Similar Documents

Publication Publication Date Title
CN107789001B (en) Positioning method and system for imaging scanning
US20200268339A1 (en) System and method for patient positioning
US7522701B2 (en) System and method for image composition using position sensors
EP3453330B1 (en) Virtual positioning image for use in imaging
US20200037977A1 (en) Automated apparatus to improve image quality in x-ray and associated method of use
JP4484462B2 (en) Method and apparatus for positioning a patient in a medical diagnostic or therapeutic device
US10918346B2 (en) Virtual positioning image for use in imaging
KR20150112830A (en) Positioning unit for positioning a patient, imaging device and method for the optical generation of a positioning aid
CN110742631B (en) Imaging method and device for medical image
JP2012050515A (en) Image processing apparatus and method
US11051778B2 (en) X-ray fluoroscopic imaging apparatus
JP6970203B2 (en) Computed tomography and positioning of anatomical structures to be imaged
US11564651B2 (en) Method and systems for anatomy/view classification in x-ray imaging
CN112450956A (en) Automatic positioning method, device, readable storage medium, electronic equipment and system
CN107049346B (en) Medical imaging control method, medical imaging control device and medical imaging equipment
US10102638B2 (en) Device and method for image registration, and a nontransitory recording medium
JP2004363850A (en) Inspection device
EP3370616B1 (en) Device for imaging an object
KR101577564B1 (en) X-ray Systems and Methods with Medical Diagnostic Ruler.
CN111815514A (en) Image acquisition method and device, readable storage medium and image acquisition equipment
CN115474951A (en) Method for controlling a medical imaging examination of an object, medical imaging system and computer-readable data storage medium
CN114067994A (en) Target part orientation marking method and system
JP2022035719A (en) Photographing error determination support device and program
CN113017852B (en) Interactive experience method, equipment and electronic device for medical imaging scanning process
JP2020509890A5 (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination