WO2016095798A1 - Method for locating a target in a three-dimensional CT image and security check system - Google Patents

Method for locating a target in a three-dimensional CT image and security check system Download PDF

Info

Publication number
WO2016095798A1
WO2016095798A1 · PCT/CN2015/097378 · CN2015097378W
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional
depth map
viewing angle
image
data
Prior art date
Application number
PCT/CN2015/097378
Other languages
English (en)
French (fr)
Inventor
陈志强
张丽
王朔
孙运达
黄清萍
唐智
Original Assignee
同方威视技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201410795840.1A external-priority patent/CN105784731B/zh
Application filed by 同方威视技术股份有限公司
Priority to US15/300,668 priority Critical patent/US10145977B2/en
Priority to EP15869305.1A priority patent/EP3112852A4/en
Publication of WO2016095798A1 publication Critical patent/WO2016095798A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • G01V5/226Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays using tomography
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/04Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
    • G01N23/046Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • G01V5/228Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays using stereoscopic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/003Reconstruction from projections, e.g. tomography
    • G06T11/008Specific post-processing after tomographic reconstruction, e.g. voxelisation, metal artifact correction
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/40Imaging
    • G01N2223/401Imaging image processing
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/40Imaging
    • G01N2223/419Imaging computed tomograph
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N2223/00Investigating materials by wave or particle radiation
    • G01N2223/60Specific applications or type of materials
    • G01N2223/643Specific applications or type of materials object on conveyor
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V5/00Prospecting or detecting by the use of ionising radiation, e.g. of natural or induced radioactivity
    • G01V5/20Detecting prohibited goods, e.g. weapons, explosives, hazardous substances, contraband or smuggled objects
    • G01V5/22Active interrogation, i.e. by irradiating objects or goods using external radiation sources, e.g. using gamma rays or cosmic rays
    • G01V5/224Multiple energy techniques using one type of radiation, e.g. X-rays of different energies

Definitions

  • The present application relates to security inspection, and in particular to a method and a security check system for locating a target in a three-dimensional computed tomographic (CT) image.
  • The multi-energy X-ray security inspection system is a new type of security inspection system developed on the basis of the single-energy X-ray security inspection system. It not only provides the shape and content of the inspected object, but also provides information reflecting the effective atomic number of the inspected object, thereby distinguishing whether the object is organic or inorganic and displaying it in different colors on a color monitor to help operators make judgments.
  • In the process of image judgment, if a suspect object is found, the image judge is required to mark it with an input device such as a mouse.
  • For DR (Digital Radiography) inspection systems, the principle of marking the suspect object directly on the two-dimensional DR image is simple, and mature solutions exist.
  • For CT security inspection systems, how to quickly and accurately mark the suspect object on the three-dimensional CT image is an urgent problem to be solved.
  • Considering one or more technical problems in the prior art, the present disclosure proposes a method and a security check system for locating a target in a three-dimensional CT image, which can help a user quickly mark a suspect object in a CT image.
  • A method of locating a target in a three-dimensional CT image comprises the steps of: displaying a three-dimensional CT image; receiving a user's selection of at least one region of the three-dimensional CT image at a first viewing angle to generate a first three-dimensional description; receiving the user's selection of at least one region of the three-dimensional CT image at a second viewing angle to generate a second three-dimensional description, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range and the first three-dimensional description and the second three-dimensional description relate to at least one of the size, location, or physical attributes of the target at the corresponding viewing angle; and determining the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description.
  • The step of generating the first three-dimensional description comprises: obtaining a front-face depth map and a back-face depth map at the first viewing angle, and retrieving the region selected by the user at the first viewing angle in the two depth maps respectively, to generate a first bounding box/data subset as the first three-dimensional description.
  • The step of generating the second three-dimensional description comprises: using the generated first bounding box/data subset as the rendering range to obtain a three-dimensional rendering result at the second viewing angle, obtaining front-face and back-face depth maps at the second viewing angle, and retrieving the region selected by the user at the second viewing angle in the two depth maps respectively, to generate a second bounding box/data subset as the second three-dimensional description.
  • The step of determining the target in the three-dimensional CT image comprises: intersecting the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
  • The steps of obtaining the front-face depth map and the back-face depth map include: performing a depth test while rendering the scene and recording the minimum depth value to obtain the front-face depth map, and performing a depth test while rendering the scene and recording the maximum depth value to obtain the back-face depth map.
  • The first bounding box/data subset and the second bounding box/data subset are both arbitrarily oriented bounding boxes/data subsets.
  • The marked region of three-dimensional space is fused into the displayed CT data.
  • The predetermined range is specifically 45 to 135 degrees.
  • A security check CT system comprises: a CT scanning device that obtains inspection data of the inspected object; a memory that stores the inspection data; a display device that displays a three-dimensional CT image of the inspected object; an input device that inputs a user's selection of at least one region of the three-dimensional CT image at a first viewing angle and the user's selection of at least one region of the three-dimensional CT image at a second viewing angle, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range; and a data processor that generates a first three-dimensional description from the selection at the first viewing angle, generates a second three-dimensional description from the selection at the second viewing angle, and determines the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description, wherein the first and second three-dimensional descriptions relate to at least one of the size, position, or physical attributes of the target at the corresponding viewing angle.
  • The data processor is configured to: obtain front-face and back-face depth maps at the first viewing angle and retrieve the user-selected region in them to generate a first bounding box/data subset; obtain a rendering result and depth maps at the second viewing angle and generate a second bounding box/data subset; and intersect the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
  • The data processor is further configured to: perform a depth test while rendering the scene, recording the minimum depth value to obtain the front-face depth map and the maximum depth value to obtain the back-face depth map.
  • With the above scheme, the user can quickly mark a suspect object in the CT image.
  • FIG. 1 is a block diagram showing the structure of a security CT system according to an embodiment of the present disclosure
  • Figure 2 is a block diagram showing the structure of a computer data processor as shown in Figure 1;
  • FIG. 3 is a block diagram showing the structure of a controller according to an embodiment of the present disclosure
  • FIG. 4 is a schematic flow chart describing a method of marking a target according to an embodiment of the present disclosure.
  • References throughout this specification to "one embodiment", "an embodiment", "one example", or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure.
  • Thus, appearances of the phrases "in one embodiment", "in an embodiment", "one example", or "an example" in various places throughout this specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples.
  • the term “and/or” as used herein includes any and all combinations of one or more of the associated listed items.
  • embodiments of the present disclosure provide a method of locating a target in a three-dimensional CT image.
  • a three-dimensional CT image of the object to be inspected is displayed on the display.
  • A user's selection of at least one region of the three-dimensional CT image at a first viewing angle is received, for example via an input device 65 such as a mouse, to generate a first three-dimensional description.
  • The user's selection of at least one region of the three-dimensional CT image at a second viewing angle is received to generate a second three-dimensional description.
  • the angle between the first perspective and the second perspective is within a predetermined range and the first three-dimensional description and the second three-dimensional description are related to at least one of a size, a location, or a physical property of the target at the corresponding perspective.
  • the target in the three-dimensional CT image is determined based on the first three-dimensional description and the second three-dimensional description.
  • FIG. 1 is a schematic structural view of a CT system according to an embodiment of the present disclosure.
  • the CT apparatus includes a chassis 20, a carrier mechanism 40, a controller 50, a computer data processor 60, and the like.
  • the gantry 20 includes a source 10 that emits X-rays for inspection, such as an X-ray machine, and a detection and acquisition device 30.
  • The carrier mechanism 40 carries the inspected baggage 70 through the scanning area between the radiation source 10 of the gantry 20 and the detection and acquisition device 30, while the gantry 20 rotates about the direction of advance of the inspected baggage 70, so that the rays emitted by the radiation source 10 can pass through the inspected baggage 70 and perform a CT scan of it.
  • The detection and acquisition device 30 is, for example, a detector and data collector with an integrated module structure, such as a flat-panel detector, used to detect the rays transmitted through the inspected object, obtain analog signals, and convert the analog signals into digital signals, thereby outputting the X-ray projection data of the inspected baggage 70.
  • the controller 50 is used to control the various parts of the entire system to work synchronously.
  • The computer data processor 60 is used to process and reconstruct the data collected by the data collector and to output the results.
  • The radiation source 10 is placed on the side where the inspected object can be placed, and the detection and acquisition device 30 is placed on the other side of the inspected baggage 70; it includes a detector and a data collector for acquiring multi-angle projection data of the inspected baggage 70.
  • the data collector includes a data amplification forming circuit that can operate in a (current) integration mode or a pulse (count) mode.
  • the data output cable of the detection and acquisition device 30 is coupled to the controller 50 and computer data processor 60 for storing the acquired data in computer data processor 60 in accordance with a trigger command.
  • FIG. 2 shows a block diagram of the computer data processor 60 shown in FIG. 1.
  • the data collected by the data collector is stored in the memory 61 via the interface unit 68 and the bus 64.
  • Configuration information and a program of the computer data processor are stored in a read only memory (ROM) 62.
  • a random access memory (RAM) 63 is used to temporarily store various data during the operation of the processor 66.
  • a computer program for performing data processing is also stored in the memory 61.
  • the internal bus 64 is connected to the above-described memory 61, read only memory 62, random access memory 63, input device 65, processor 66, display device 67, and interface unit 68.
  • After the user inputs an operation command through an input device 65 such as a keyboard and a mouse, the instruction code of the computer program directs the processor 66 to execute a predetermined data-processing algorithm.
  • After the data-processing result is obtained, it is displayed on a display device 67 such as an LCD display, or output directly as a hard copy, for example by printing.
  • FIG. 3 shows a structural block diagram of a controller in accordance with an embodiment of the present disclosure.
  • The controller 50 includes: a control unit 51 that controls the radiation source 10, the carrier mechanism 40, and the detection and acquisition device 30 according to instructions from the computer 60; and a trigger signal generating unit 52 that, under the control of the control unit, generates trigger commands for triggering the actions of the radiation source 10, the detection and acquisition device 30, and the carrier mechanism 40.
  • A first driving device 53 drives the carrier mechanism 40 to convey the inspected baggage 70 according to a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51.
  • A second driving device 54 rotates the gantry 20 according to a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51.
  • the projection data obtained by the detecting and collecting device 30 is stored in the computer 60 for CT tomographic image reconstruction, thereby obtaining tomographic image data of the checked baggage 70.
  • The computer 60 then obtains, for example by executing software, a DR image of the inspected baggage 70 at at least one viewing angle from the tomographic image data, and displays it together with the reconstructed three-dimensional image to facilitate security inspection by the image judge.
  • The CT imaging system described above may also be a dual-energy CT system; that is, the X-ray source 10 of the gantry 20 can emit both high-energy and low-energy rays.
  • After the detection and acquisition device 30 detects projection data at the two energy levels, the computer data processor 60 performs dual-energy CT reconstruction to obtain the equivalent atomic number and electron density data of each slice of the inspected baggage 70.
  • FIG. 4 is a schematic flow chart describing a method of marking a target object according to an embodiment of the present disclosure.
  • In step S401, the inspection data of the inspected object are read, and a three-dimensional CT image is displayed on the display screen.
  • In step S402, a user's selection of at least one region of the three-dimensional CT image at a first viewing angle is received, for example via an input device 65 such as a mouse, to generate a first three-dimensional description.
  • For example, the user operates the input device 65 to check or circle a region in the image displayed on the screen at the current viewing angle.
  • In step S403, the user's selection of at least one region of the three-dimensional CT image at a second viewing angle is received to generate a second three-dimensional description.
  • For example, the user operates the input device 65 to check or circle a region in the image displayed on the screen at another viewing angle.
  • the angle between the first perspective and the second perspective is within a predetermined range and the first three-dimensional description and the second three-dimensional description are related to at least one of a size, a location, or a physical property of the target at the corresponding perspective.
  • the angle between the two viewing angles is between 45 degrees and 135 degrees.
  • a target in the three-dimensional CT image is determined based on the first three-dimensional description and the second three-dimensional description.
  • Obtaining a front-face depth map and a back-face depth map at the first viewing angle, and retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, generates a first bounding box/data subset as the first three-dimensional description.
  • Using the generated first bounding box/data subset as the rendering range, a three-dimensional rendering result at the second viewing angle is obtained together with a front-face depth map and a back-face depth map at the second viewing angle; the region selected by the user at the second viewing angle is retrieved in the front-face depth map and the back-face depth map respectively to generate a second bounding box/data subset as the second three-dimensional description.
  • the first bounding box/data subset and the second bounding box/data subset are intersected in the image space to determine the target.
  • The steps of obtaining the front-face depth map and the back-face depth map include: performing a depth test while rendering the scene and recording the minimum depth value to obtain the front-face depth map; and performing a depth test while rendering the scene and recording the maximum depth value to obtain the back-face depth map.
  • The first bounding box/data subset and the second bounding box/data subset are both arbitrarily oriented bounding boxes/data subsets.
  • the marked regions of the three-dimensional space may be fused to be displayed in the CT data.
  • Transparent region culling is first performed to quickly obtain a tight hierarchical bounding box/data subset of the non-transparent regions in the data; the generated hierarchical bounding box/data subset is then rendered to obtain the front-face and back-face depth maps, which give the adjusted entry and exit positions of the cast rays.
  • The first pick is performed along the current line of sight, and the marked points are retrieved in the front-face and back-face depth maps respectively to generate a bounding box, such as an OBB.
  • The projection range of the rays is then updated, and the user performs a second pick at the orthogonal viewing angle to which the view is automatically rotated, generating a new OBB.
  • The OBBs obtained in the first two steps are subjected to a Boolean intersection operation in image space to obtain the final marked region. Finally, the suspect region is fused into the original data for display using a space-constrained transfer function.
  • Aspects of the embodiments disclosed herein may be implemented, in whole or in part, in an integrated circuit; as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems); as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors); as firmware; or as substantially any combination of the above. Those skilled in the art will, in light of the present disclosure, be capable of designing the circuitry and/or writing the software and/or firmware code.
  • Signal-bearing media include, but are not limited to: recordable media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tapes, computer memories, and the like; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Geophysics (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • High Energy & Nuclear Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Pulmonology (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Biochemistry (AREA)
  • Analytical Chemistry (AREA)
  • Chemical & Material Sciences (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A method for locating a target in a three-dimensional CT image, and a security check system. The method includes: displaying a three-dimensional CT image; receiving a user's selection of at least one region of the three-dimensional CT image at a first viewing angle to generate a first three-dimensional description; receiving the user's selection of at least one region of the three-dimensional CT image at a second viewing angle to generate a second three-dimensional description, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range and the first and second three-dimensional descriptions relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle; and determining the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description. With the above scheme, a user can quickly mark a suspect object in a CT image.

Description

Method for locating a target in a three-dimensional CT image and security check system
Technical Field
The present application relates to security inspection, and in particular to a method and a security check system for locating a target in a three-dimensional CT (computed tomography) image.
Background Art
The multi-energy X-ray security inspection system is a new type of security inspection system developed on the basis of the single-energy X-ray security inspection system. It not only provides the shape and content of the inspected object, but also provides information reflecting the effective atomic number of the inspected object, thereby distinguishing whether the object is organic or inorganic and displaying it in different colors on a color monitor to help operators make judgments.
During image judgment, if a suspect object is found, the image judge must mark it with an input device such as a mouse. For DR (digital radiography) inspection systems, marking the suspect object directly on the two-dimensional DR image is simple in principle and has mature solutions. For CT security inspection systems, how to mark the suspect object quickly and accurately on the three-dimensional CT image is an urgent problem to be solved.
Summary of the Invention
In view of one or more technical problems in the prior art, the present disclosure proposes a method and a security check system for locating a target in a three-dimensional CT image, which enable a user to quickly mark a suspect object in a CT image.
In one aspect of the present disclosure, a method for locating a target in a three-dimensional CT image is proposed, comprising the steps of: displaying a three-dimensional CT image; receiving a user's selection of at least one region of the three-dimensional CT image at a first viewing angle to generate a first three-dimensional description; receiving the user's selection of at least one region of the three-dimensional CT image at a second viewing angle to generate a second three-dimensional description, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range and the first and second three-dimensional descriptions relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle; and determining the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description.
According to some embodiments, the step of generating the first three-dimensional description comprises:
obtaining a front-face depth map and a back-face depth map at the first viewing angle, and retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, to generate a first bounding box/data subset as the first three-dimensional description;
wherein the step of generating the second three-dimensional description comprises:
using the generated first bounding box/data subset as the rendering range to obtain a three-dimensional rendering result at the second viewing angle, obtaining a front-face depth map and a back-face depth map at the second viewing angle, and retrieving the region selected by the user at the second viewing angle in the front-face depth map and the back-face depth map respectively, to generate a second bounding box/data subset as the second three-dimensional description;
wherein the step of determining the target in the three-dimensional CT image comprises:
performing an intersection operation on the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
According to some embodiments, the step of obtaining the front-face depth map and the back-face depth map comprises:
performing a depth test while rendering the scene and recording the minimum depth value to obtain the front-face depth map;
performing a depth test while rendering the scene and recording the maximum depth value to obtain the back-face depth map.
According to some embodiments, the first bounding box/data subset and the second bounding box/data subset are both arbitrarily oriented bounding boxes/data subsets.
According to some embodiments, the marked region of three-dimensional space is fused into the displayed CT data.
According to some embodiments, the predetermined range is specifically 45 to 135 degrees.
In another aspect of the present disclosure, a security check CT system is proposed, comprising: a CT scanning device that obtains inspection data of the inspected object; a memory that stores the inspection data; a display device that displays a three-dimensional CT image of the inspected object; an input apparatus that inputs a user's selection of at least one region of the three-dimensional CT image at a first viewing angle and the user's selection of at least one region of the three-dimensional CT image at a second viewing angle, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range; and a data processor that generates a first three-dimensional description from the selection at the first viewing angle, generates a second three-dimensional description from the selection at the second viewing angle, and determines the target in the three-dimensional CT image based on the first and second three-dimensional descriptions, wherein the first and second three-dimensional descriptions relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle.
According to some embodiments, the data processor is configured to:
obtain a front-face depth map and a back-face depth map at the first viewing angle, and retrieve the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, to generate a first bounding box/data subset as the first three-dimensional description;
use the generated first bounding box/data subset as the rendering range to obtain a three-dimensional rendering result at the second viewing angle, obtain a front-face depth map and a back-face depth map at the second viewing angle, and retrieve the region selected by the user at the second viewing angle in the front-face depth map and the back-face depth map respectively, to generate a second bounding box/data subset as the second three-dimensional description; and
perform an intersection operation on the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
According to some embodiments, the data processor is configured to:
perform a depth test while rendering the scene and record the minimum depth value to obtain the front-face depth map;
perform a depth test while rendering the scene and record the maximum depth value to obtain the back-face depth map.
With the above technical solution, a user can quickly mark a suspect object in a CT image.
Brief Description of the Drawings
For a better understanding of the present disclosure, the present disclosure will be described in detail with reference to the following drawings:
FIG. 1 is a schematic structural diagram of a security check CT system according to an embodiment of the present disclosure;
FIG. 2 is a structural block diagram of the computer data processor shown in FIG. 1;
FIG. 3 is a structural block diagram of a controller according to an embodiment of the present disclosure;
FIG. 4 is a schematic flowchart describing a method of marking a target according to an embodiment of the present disclosure.
Detailed Description of the Embodiments
Specific embodiments of the present disclosure will be described in detail below. It should be noted that the embodiments described here are for illustration only and are not intended to limit the present disclosure. In the following description, numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it will be apparent to those of ordinary skill in the art that these specific details need not be employed to practice the present disclosure. In other instances, well-known structures, materials, or methods are not described in detail in order to avoid obscuring the present disclosure.
Throughout the specification, references to "one embodiment", "an embodiment", "one example", or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Thus, appearances of the phrases "in one embodiment", "in an embodiment", "one example", or "an example" in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in any suitable combination and/or sub-combination in one or more embodiments or examples. In addition, those of ordinary skill in the art will understand that the term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In view of the inability of the prior art to mark a target object quickly, embodiments of the present disclosure provide a method for locating a target in a three-dimensional CT image. First, a three-dimensional CT image of the inspected object is displayed on a display. Next, a user's selection of at least one region of the three-dimensional CT image at a first viewing angle is received, for example via an input apparatus 65 such as a mouse, to generate a first three-dimensional description. Then, the user's selection of at least one region of the three-dimensional CT image at a second viewing angle is received to generate a second three-dimensional description. The angle between the first and second viewing angles is within a predetermined range, and the first and second three-dimensional descriptions relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle. Finally, the target in the three-dimensional CT image is determined based on the first and second three-dimensional descriptions. With this scheme, a user can quickly mark a suspect object in a CT image.
FIG. 1 is a schematic structural diagram of a CT system according to an embodiment of the present disclosure. As shown in FIG. 1, the CT apparatus according to this embodiment includes a gantry 20, a carrier mechanism 40, a controller 50, a computer data processor 60, and the like. The gantry 20 includes a radiation source 10 that emits X-rays for inspection, such as an X-ray machine, and a detection and acquisition device 30. The carrier mechanism 40 carries the inspected baggage 70 through the scanning area between the radiation source 10 of the gantry 20 and the detection and acquisition device 30, while the gantry 20 rotates about the direction of advance of the inspected baggage 70, so that the rays emitted by the radiation source 10 can pass through the inspected baggage 70 and perform a CT scan of it.
The detection and acquisition device 30 is, for example, a detector and data collector with an integrated module structure, such as a flat-panel detector, used to detect the rays transmitted through the inspected object, obtain analog signals, and convert the analog signals into digital signals, thereby outputting the X-ray projection data of the inspected baggage 70. The controller 50 is used to control all parts of the system to work synchronously. The computer data processor 60 is used to process and reconstruct the data collected by the data collector and to output the results.
As shown in FIG. 1, the radiation source 10 is placed on the side where the inspected object can be placed, and the detection and acquisition device 30 is placed on the other side of the inspected baggage 70; it includes a detector and a data collector for acquiring multi-angle projection data of the inspected baggage 70. The data collector includes a data amplification and shaping circuit, which can operate in (current) integration mode or pulse (counting) mode. The data output cable of the detection and acquisition device 30 is connected to the controller 50 and the computer data processor 60, and the acquired data are stored in the computer data processor 60 according to a trigger command.
FIG. 2 shows a structural block diagram of the computer data processor 60 shown in FIG. 1. As shown in FIG. 2, the data collected by the data collector are stored in the memory 61 through the interface unit 68 and the bus 64. Configuration information and programs of the computer data processor are stored in a read-only memory (ROM) 62. A random-access memory (RAM) 63 is used to temporarily store various data during the operation of the processor 66. In addition, a computer program for performing data processing is also stored in the memory 61. The internal bus 64 connects the memory 61, the read-only memory 62, the random-access memory 63, the input apparatus 65, the processor 66, the display device 67, and the interface unit 68.
After the user inputs an operation command through an input apparatus 65 such as a keyboard and a mouse, the instruction code of the computer program directs the processor 66 to execute a predetermined data-processing algorithm. After the data-processing result is obtained, it is displayed on a display device 67 such as an LCD display, or output directly as a hard copy, for example by printing.
FIG. 3 shows a structural block diagram of a controller according to an embodiment of the present disclosure. As shown in FIG. 3, the controller 50 includes: a control unit 51 that controls the radiation source 10, the carrier mechanism 40, and the detection and acquisition device 30 according to instructions from the computer 60; a trigger signal generating unit 52 that, under the control of the control unit, generates trigger commands for triggering the actions of the radiation source 10, the detection and acquisition device 30, and the carrier mechanism 40; a first driving device 53 that drives the carrier mechanism 40 to convey the inspected baggage 70 according to a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51; and a second driving device 54 that rotates the gantry 20 according to a trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51. The projection data obtained by the detection and acquisition device 30 are stored in the computer 60 for CT tomographic image reconstruction, thereby obtaining tomographic image data of the inspected baggage 70. The computer 60 then obtains, for example by executing software, a DR image of the inspected baggage 70 at at least one viewing angle from the tomographic image data, and displays it together with the reconstructed three-dimensional image to facilitate security inspection by the image judge. According to other embodiments, the above CT imaging system may also be a dual-energy CT system; that is, the X-ray source 10 of the gantry 20 can emit both high-energy and low-energy rays. After the detection and acquisition device 30 detects the projection data at the two energy levels, the computer data processor 60 performs dual-energy CT reconstruction to obtain the equivalent atomic number and electron density data of each slice of the inspected baggage 70.
FIG. 4 is a schematic flowchart describing a method of marking a target object according to an embodiment of the present disclosure.
As shown in FIG. 4, in step S401, the inspection data of the inspected object are read, and a three-dimensional CT image is displayed on the display screen.
In step S402, a user's selection of at least one region of the three-dimensional CT image at a first viewing angle is received, for example via an input apparatus 65 such as a mouse, to generate a first three-dimensional description. For example, the user operates the input apparatus 65 to check or circle a region in the image displayed on the screen at the current viewing angle.
In step S403, the user's selection of at least one region of the three-dimensional CT image at a second viewing angle is received to generate a second three-dimensional description. For example, the user operates the input apparatus 65 to check or circle a region in the image displayed on the screen at another viewing angle. The angle between the first and second viewing angles is within a predetermined range, and the first and second three-dimensional descriptions relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle. For example, the angle between the two viewing angles is between 45 and 135 degrees.
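The 45-to-135-degree constraint between the two viewing angles can be expressed as a simple dot-product test on the two viewing directions. The sketch below is illustrative only; the function and parameter names are invented, not from the patent:

```python
import math

def views_angle_ok(dir_a, dir_b, lo=45.0, hi=135.0):
    """Return True when the angle between two viewing directions lies
    within [lo, hi] degrees (the predetermined range in the text)."""
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    norm_a = math.sqrt(sum(a * a for a in dir_a))
    norm_b = math.sqrt(sum(b * b for b in dir_b))
    cos_t = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    angle = math.degrees(math.acos(cos_t))
    return lo <= angle <= hi

# Orthogonal views (90 degrees) satisfy the constraint;
# parallel views (0 degrees) do not.
print(views_angle_ok((1, 0, 0), (0, 0, 1)))  # True
print(views_angle_ok((1, 0, 0), (1, 0, 0)))  # False
```

An orthogonal second view sits in the middle of the allowed range, which is consistent with the automatic rotation to an orthogonal viewing angle described later in this section.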
In step S404, the target in the three-dimensional CT image is determined based on the first three-dimensional description and the second three-dimensional description.
Furthermore, to address the problems in the prior art, after the transparent regions of the data are quickly culled, the new entry and exit positions of the cast rays are obtained and recorded as depth maps. On this basis, the two-dimensional mark is restored to its depth information in voxel space. The geometries obtained from the two picks are subjected to a Boolean intersection operation in image space, finally yielding the marked region in three-dimensional space.
For example, a front-face depth map and a back-face depth map at the first viewing angle are obtained, and the region selected by the user at the first viewing angle is retrieved in the front-face and back-face depth maps respectively, generating a first bounding box/data subset as the first three-dimensional description. Using the generated first bounding box/data subset as the rendering range, a three-dimensional rendering result at the second viewing angle is obtained, together with a front-face depth map and a back-face depth map at the second viewing angle; the region selected by the user at the second viewing angle is retrieved in the front-face and back-face depth maps respectively, generating a second bounding box/data subset as the second three-dimensional description. In this way, an intersection operation is performed on the first and second bounding boxes/data subsets in image space to determine the target.
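The retrieval of a selected region in the front-face and back-face depth maps can be sketched as follows: each selected pixel contributes its image coordinates plus the depth interval between the two maps, and the union of these contributions gives a box. This is a simplified sketch under stated assumptions (an axis-aligned box instead of the arbitrarily oriented one the text allows; depth maps as plain nested lists; all names are invented for illustration):

```python
def region_to_box(front, back, sel):
    """Lift a 2D selection into a 3D axis-aligned box using the
    front-face/back-face depth maps of the current view.
    front/back: 2D lists of per-pixel depths, -1 meaning "empty";
    sel: list of selected (x, y) pixels.
    Returns ((xmin, ymin, zmin), (xmax, ymax, zmax)), or None if the
    selection covers only empty background."""
    xs, ys, zmins, zmaxs = [], [], [], []
    for x, y in sel:
        zf, zb = front[x][y], back[x][y]
        if zf < 0:           # selection over empty background: skip
            continue
        xs.append(x); ys.append(y); zmins.append(zf); zmaxs.append(zb)
    if not xs:
        return None
    return ((min(xs), min(ys), min(zmins)), (max(xs), max(ys), max(zmaxs)))

front = [[2, 2], [3, -1]]
back = [[5, 6], [7, -1]]
box = region_to_box(front, back, [(0, 0), (0, 1), (1, 0), (1, 1)])
print(box)  # ((0, 0, 2), (1, 1, 7))
```

The depth interval between the two maps is what restores the two-dimensional mark to voxel space, as the preceding paragraph describes.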
In other embodiments, the step of obtaining the front-face and back-face depth maps includes: performing a depth test while rendering the scene and recording the minimum depth value to obtain the front-face depth map; and performing a depth test while rendering the scene and recording the maximum depth value to obtain the back-face depth map. For example, the first bounding box/data subset and the second bounding box/data subset are both arbitrarily oriented bounding boxes/data subsets.
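The min/max depth tests can be imitated offline for an orthographic view along the z axis of a voxel volume. A hedged sketch (NumPy arrays stand in for a GPU depth buffer; -1 marks pixels that see only transparent voxels; all names are illustrative):

```python
import numpy as np

def front_back_depth_maps(volume, threshold=0.0):
    """Compute front-face and back-face depth maps for an orthographic
    view along the z axis of an (nx, ny, nz) volume.  The front map keeps
    the minimum depth of a non-transparent voxel per pixel, the back map
    the maximum, mirroring min/max depth tests during rendering."""
    opaque = volume > threshold              # (nx, ny, nz) opacity mask
    any_hit = opaque.any(axis=2)
    z = np.arange(volume.shape[2])           # broadcasts along last axis
    front = np.where(any_hit,
                     np.where(opaque, z, volume.shape[2]).min(axis=2), -1)
    back = np.where(any_hit,
                    np.where(opaque, z, -1).max(axis=2), -1)
    return front, back

# A 4x4x8 volume with an opaque slab spanning z = 2..5
vol = np.zeros((4, 4, 8))
vol[:, :, 2:6] = 1.0
f, b = front_back_depth_maps(vol)
print(f[0, 0], b[0, 0])  # 2 5
```

In the patent's GPU setting the same result would come from rendering with the depth test set to keep the smallest and then the largest depth value; the array version above only illustrates the idea.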
In some embodiments, the marked region of three-dimensional space may be fused into the displayed CT data.
For example, in some embodiments, transparent region culling is first performed to quickly obtain a tight hierarchical bounding box/data subset of the non-transparent regions in the data; the generated hierarchical bounding box/data subset is then rendered to obtain the front-face and back-face depth maps, which give the adjusted entry and exit positions of the cast rays. Next, a first pick is performed along the current line of sight: the list of marked points is retrieved in the front-face and back-face depth maps respectively, generating a bounding box such as an OBB. Then, based on the generated OBB, the projection range of the rays is updated, and the user performs a second pick at the orthogonal viewing angle to which the view is automatically rotated, generating a new OBB. The OBBs obtained in the previous two steps are subjected to a Boolean intersection operation in image space to obtain the final marked region. Finally, the suspect region is fused into the original data for display using a space-constrained transfer function. With the marking method of the present disclosure, the transparent regions in the CT data can be culled quickly and accurately, allowing the user to complete the task of marking a suspect region rapidly in a user-friendly manner.
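The Boolean intersection of the two picks can be illustrated in its simplest special case: axis-aligned boxes obtained from two orthogonal views (the patent itself intersects OBBs in image space; this sketch is only indicative, and all names are invented):

```python
def intersect_boxes(a, b):
    """Boolean intersection of two axis-aligned boxes, each given as
    ((min corner), (max corner)); returns None if they do not overlap."""
    lo = tuple(max(x, y) for x, y in zip(a[0], b[0]))
    hi = tuple(min(x, y) for x, y in zip(a[1], b[1]))
    if any(l > h for l, h in zip(lo, hi)):
        return None
    return (lo, hi)

# Pick 1 (front view) bounds x/y tightly but z only loosely;
# pick 2 (side view, rotated ~90 degrees) bounds z tightly instead.
box1 = ((10, 12, 0), (40, 30, 100))
box2 = ((0, 12, 20), (100, 30, 35))
print(intersect_boxes(box1, box2))  # ((10, 12, 20), (40, 30, 35))
```

Each pick constrains the two axes of its image plane tightly and the depth axis only coarsely, so intersecting boxes from two roughly orthogonal views tightens all three axes at once; this is why the predetermined angle range between the two viewing angles matters.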
The above detailed description has set forth, by means of schematic diagrams, flowcharts, and/or examples, numerous embodiments of the method and apparatus for marking suspect objects in a security check CT system. Where such diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such diagrams, flowcharts, or examples can be implemented individually and/or jointly by a wide variety of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter of the embodiments of the present disclosure may be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein may be equivalently implemented, in whole or in part, in integrated circuits; as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems); as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors); as firmware; or as virtually any combination thereof; and that, in light of the present disclosure, those skilled in the art will be capable of designing the circuitry and/or writing the software and/or firmware code. Moreover, those skilled in the art will recognize that the mechanisms of the subject matter of the present disclosure can be distributed as program products in various forms, and that exemplary embodiments of the subject matter of the present disclosure apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of signal-bearing media include, but are not limited to: recordable media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tapes, computer memories, and the like; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, wireless communication links, etc.).
Although the present disclosure has been described with reference to several exemplary embodiments, it should be understood that the terminology used is illustrative and exemplary rather than limiting. Since the present disclosure can be embodied in many forms without departing from its spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details but should be construed broadly within the spirit and scope defined by the appended claims; therefore, all changes and modifications falling within the claims or their equivalent scope should be covered by the appended claims.

Claims (9)

  1. A method for locating a target in a three-dimensional CT image, comprising the steps of:
    displaying a three-dimensional CT image;
    receiving a user's selection of at least one region of the three-dimensional CT image at a first viewing angle to generate a first three-dimensional description;
    receiving the user's selection of at least one region of the three-dimensional CT image at a second viewing angle to generate a second three-dimensional description, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range and the first three-dimensional description and the second three-dimensional description relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle; and
    determining the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description.
  2. The method according to claim 1, wherein the step of generating the first three-dimensional description comprises:
    obtaining a front-face depth map and a back-face depth map at the first viewing angle, and retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, to generate a first bounding box/data subset as the first three-dimensional description;
    wherein the step of generating the second three-dimensional description comprises:
    using the generated first bounding box/data subset as the rendering range to obtain a three-dimensional rendering result at the second viewing angle, obtaining a front-face depth map and a back-face depth map at the second viewing angle, and retrieving the region selected by the user at the second viewing angle in the front-face depth map and the back-face depth map respectively, to generate a second bounding box/data subset as the second three-dimensional description;
    wherein the step of determining the target in the three-dimensional CT image comprises:
    performing an intersection operation on the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
  3. The method according to claim 2, wherein the step of obtaining the front-face depth map and the back-face depth map comprises:
    performing a depth test while rendering the scene and recording the minimum depth value to obtain the front-face depth map;
    performing a depth test while rendering the scene and recording the maximum depth value to obtain the back-face depth map.
  4. The method according to claim 2, wherein the first bounding box/data subset and the second bounding box/data subset are both arbitrarily oriented bounding boxes/data subsets.
  5. The method according to claim 2, wherein the marked region of three-dimensional space is fused into the displayed CT data.
  6. The method according to claim 1, wherein the predetermined range is specifically 45 to 135 degrees.
  7. A security check CT system, comprising:
    a CT scanning device that obtains inspection data of the inspected object;
    a memory that stores the inspection data;
    a display device that displays a three-dimensional CT image of the inspected object;
    an input apparatus that inputs a user's selection of at least one region of the three-dimensional CT image at a first viewing angle and the user's selection of at least one region of the three-dimensional CT image at a second viewing angle, wherein the angle between the first viewing angle and the second viewing angle is within a predetermined range; and
    a data processor that generates a first three-dimensional description from the selection at the first viewing angle, generates a second three-dimensional description from the selection at the second viewing angle, and determines the target in the three-dimensional CT image based on the first three-dimensional description and the second three-dimensional description, wherein the first three-dimensional description and the second three-dimensional description relate to at least one of the size, position, or physical properties of the target at the corresponding viewing angle.
  8. The security check system according to claim 7, wherein the data processor is configured to:
    obtain a front-face depth map and a back-face depth map at the first viewing angle, and retrieve the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, to generate a first bounding box/data subset as the first three-dimensional description;
    use the generated first bounding box/data subset as the rendering range to obtain a three-dimensional rendering result at the second viewing angle, obtain a front-face depth map and a back-face depth map at the second viewing angle, and retrieve the region selected by the user at the second viewing angle in the front-face depth map and the back-face depth map respectively, to generate a second bounding box/data subset as the second three-dimensional description; and
    perform an intersection operation on the first bounding box/data subset and the second bounding box/data subset in image space to determine the target.
  9. The security check system according to claim 8, wherein the data processor is configured to:
    perform a depth test while rendering the scene and record the minimum depth value to obtain the front-face depth map;
    perform a depth test while rendering the scene and record the maximum depth value to obtain the back-face depth map.
PCT/CN2015/097378 2014-12-18 2015-12-15 Method for locating a target in a three-dimensional CT image and security check system WO2016095798A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/300,668 US10145977B2 (en) 2014-12-18 2015-12-15 Method for positioning target in three-dimensional CT image and security check system
EP15869305.1A EP3112852A4 (en) 2014-12-18 2015-12-15 Method for positioning target in three-dimensional ct image and security check system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410795840.1A CN105784731B (zh) 2014-06-25 2014-12-18 Method for locating a target in a three-dimensional CT image and security check system
CN201410795840.1 2014-12-18

Publications (1)

Publication Number Publication Date
WO2016095798A1 true WO2016095798A1 (zh) 2016-06-23

Family

ID=56125933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/097378 WO2016095798A1 (zh) 2014-12-18 2015-12-15 一种定位三维ct图像中的目标的方法和安检系统

Country Status (3)

Country Link
US (1) US10145977B2 (zh)
EP (1) EP3112852A4 (zh)
WO (1) WO2016095798A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108508042A (zh) * 2018-04-02 2018-09-07 合肥工业大学 (Hefei University of Technology) Multi-view X-ray transmission image detection method and device for coal and gangue
CN113830510B (zh) * 2020-06-23 2023-05-23 同方威视技术股份有限公司 (Nuctech Company Limited) Conveying device and inspection system
CN112598682B (zh) * 2020-12-25 2024-03-29 公安部第一研究所 (First Research Institute of the Ministry of Public Security) Arbitrary-angle-based three-dimensional CT image sectioning method and device
CN113112560B (zh) * 2021-04-14 2023-10-03 杭州柳叶刀机器人有限公司 (Hangzhou Lancet Robotics Co., Ltd.) Physiological point region marking method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090079738A1 (en) * 2007-09-24 2009-03-26 Swanwa Liao System and method for locating anatomies of interest in a 3d volume
CN101592579A (zh) * 2009-07-03 2009-12-02 公安部第一研究所 (First Research Institute of the Ministry of Public Security) Method and device for automatic detection of explosives in baggage using multi-view X-rays
WO2010119690A1 (ja) * 2009-04-16 2010-10-21 富士フイルム株式会社 (Fujifilm Corporation) Diagnosis support device, diagnosis support method, and storage medium storing diagnosis support program
WO2011046511A1 (en) * 2009-10-13 2011-04-21 Agency For Science, Technology And Research A method and system for segmenting a liver object in an image
CN102222352A (zh) * 2010-04-16 2011-10-19 株式会社日立医疗器械 (Hitachi Medical Corporation) Image processing method and image processing device
EP2713340A2 (en) * 2012-09-29 2014-04-02 Tsinghua University Methods and devices for locating an object in CT imaging

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6721387B1 (en) 2001-06-13 2004-04-13 Analogic Corporation Method of and system for reducing metal artifacts in images generated by x-ray scanning devices
US7606349B2 (en) * 2006-02-09 2009-10-20 L-3 Communications Security and Detection Systems Inc. Selective generation of radiation at multiple energy levels
US20080123895A1 (en) * 2006-11-27 2008-05-29 Todd Gable Method and system for fast volume cropping of three-dimensional image data
CN105784731B (zh) * 2014-06-25 2019-02-22 同方威视技术股份有限公司 (Nuctech Company Limited) Method for positioning target in three-dimensional CT image and security check system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3112852A4 *

Also Published As

Publication number Publication date
US10145977B2 (en) 2018-12-04
US20170176631A1 (en) 2017-06-22
EP3112852A4 (en) 2017-10-11
EP3112852A1 (en) 2017-01-04

Similar Documents

Publication Publication Date Title
CN105784731B (zh) Method for positioning target in three-dimensional CT image and security check system
WO2015172726A1 (zh) Image display method
JP6820884B2 (ja) Radiation transmission/fluorescence CT imaging system and imaging method
CN105849772B (zh) Inspection system and method
JP6415023B2 (ja) Handheld X-ray system for 3D scatter imaging
WO2014048080A1 (zh) Method and device for locating an object in CT imaging
WO2016095798A1 (zh) Method for positioning target in three-dimensional CT image and security check system
WO2016095776A1 (zh) Method for positioning target in three-dimensional CT image and security check CT system
US20130279645A1 (en) Methods and systems for volumetric reconstruction using radiography
JP6298451B2 (ja) Image processing system and image processing method
WO2017012562A1 (zh) Method and device for estimating weight of inspected object in security check system
US20080285715A1 (en) Method and apparatus for shadow aperture backscatter radiography (SABR) system and protocol
JP2015187567A (ja) Radiation measurement device
US20240044812A1 (en) Rotational X-ray Inspection System and Method
CN104254786B (zh) Computed tomography imaging method and system
CN105612433B (zh) X-ray tomosynthesis imaging
CN103892852B (zh) Inspection system and inspection method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15869305

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015869305

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15300668

Country of ref document: US

Ref document number: 2015869305

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE