WO2016095799A1 - Security inspection CT system and method therefor - Google Patents

Security inspection CT system and method therefor

Info

Publication number
WO2016095799A1
WO2016095799A1 (PCT/CN2015/097379, CN2015097379W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
inspection
virtual
contraband
inspection image
Prior art date
Application number
PCT/CN2015/097379
Other languages
English (en)
French (fr)
Inventor
陈志强
张丽
王朔
孙运达
黄清萍
唐智
Original Assignee
同方威视技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 同方威视技术股份有限公司 filed Critical 同方威视技术股份有限公司
Publication of WO2016095799A1

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G01V5/20
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N23/00Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
    • G01N23/02Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
    • G01N23/04Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and forming images of the material
    • G01V5/226
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/20Drawing from basic elements, e.g. lines or circles
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/005General purpose rendering architectures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/10Geometric effects
    • G06T15/20Perspective computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/42Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]

Definitions

  • the present application relates to security inspection, and in particular to a security inspection CT system and method thereof.
  • the multi-energy X-ray security inspection system is a new type of inspection system developed on the basis of the single-energy X-ray security inspection system. It provides not only the shape and contents of the inspected object but also information reflecting the effective atomic number of the inspected object, thereby distinguishing whether the object is organic or inorganic and displaying it in different colors on a color monitor to help operating personnel make judgments.
  • In the field of security inspection, TIP is an important requirement.
  • the so-called TIP refers to inserting pre-acquired dangerous-goods images into baggage images, that is, inserting a virtual dangerous-goods image (Fictional Threat Image). It plays an important role in training security inspectors and assessing their working efficiency.
  • For the two-dimensional TIP of X-ray security inspection systems there are already mature solutions and wide applications.
  • For the three-dimensional TIP of security CT, no manufacturer currently provides such a function.
  • the present disclosure proposes a security CT system and method thereof that can facilitate a user to quickly mark a suspect in a CT image and give feedback on whether or not to include a virtual dangerous goods image.
  • a method in a security inspection CT system comprising the steps of: reading inspection data of an inspected object; inserting at least one 3D virtual contraband image (Fictional Threat Image) into the 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data; receiving a selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data; and responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image.
  • receiving the selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or in the 2D inspection image including the corresponding 2D virtual contraband image, includes receiving the coordinate position of the portion of the 3D inspection image or the 2D inspection image associated with the selection.
  • the step of responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image comprises at least one of: determining whether the at least one 3D virtual contraband image is present in the selected at least one region, popping up a dialog box confirming that the 3D inspection image contains at least one 3D virtual contraband image, confirming with a text prompt on the interface that the 3D inspection image contains at least one 3D virtual contraband image, highlighting the portion of the 3D inspection image or 2D inspection image associated with the selection, marking the portion of the 3D inspection image or 2D inspection image associated with the selection, and filling the portion of the 3D inspection image or 2D inspection image associated with the selection with a specific color or graphic.
  • At least one spatial feature parameter of the object under inspection is calculated from the inspection data, and at least one 3D virtual contraband image is inserted into the 3D inspection image of the object under inspection based on the spatial feature parameter.
  • the spatial feature parameter relates to at least one of a position, a size, and a direction of a 3D virtual contraband image to be inserted.
  • the selection of the at least one region comprises a selection of a portion of the displayed 3D inspection image at an angle of view.
  • during the 3D rendering of the 3D inspection image, point cloud information characterizing the inspected object is recorded, and the step of responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image includes: obtaining sequences of point cloud clusters of the different objects within the inspected object by segmentation; determining at least one selected region from the point cloud cluster sequences of the different objects based on a predetermined criterion; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.
  • the selection of the at least one region comprises a selection of a portion of the displayed 3D inspection image over a plurality of different perspectives.
  • the selection of the at least one region comprises selections of a portion of the displayed 3D inspection image at two different viewing angles, the two different viewing angles being substantially orthogonal to each other, wherein transparent-region culling is performed on the inspection data to obtain a hierarchical bounding box of the non-transparent regions in the inspection data, and the scene depth is then rendered for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; the step of responding to the selection by giving feedback includes: retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the region selected by the user at a second viewing angle substantially orthogonal to the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a second bounding box; performing a Boolean intersection of the first and second bounding boxes in image space to obtain a marked region in three-dimensional space as the at least one selected region; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: segmenting the 3D inspection image to obtain a plurality of 3D sub-images of the inspected object; calculating the distances and positions between the plurality of 3D sub-images; and inserting the 3D virtual contraband image based on the calculated distances and positions.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object includes: determining the transparent and non-transparent portions in the volume data of the inspected object based on the opacity values of the voxels; determining the position and size of the luggage case of the inspected object from the opaque portion of the volume data; determining candidate insertion positions in the transparent regions within the extent of the case; and selecting at least one position from the candidate insertion positions according to a predetermined criterion to insert at least one 3D contraband image.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: culling the background from the 2D inspection image to obtain a 2D foreground image; determining the 2D insertion position of the 2D virtual contraband image in the 2D foreground image; determining the position of the 3D virtual contraband image in the 3D inspection image along the depth direction of the 2D insertion position; and inserting at least one 3D virtual contraband image at the determined position.
  • the method further comprises inserting a 2D virtual contraband image corresponding to the at least one 3D virtual contraband image into the 2D inspection image of the object under inspection.
  • a security inspection CT system comprising: a CT scanning device that obtains inspection data of the inspected object; a memory that stores the inspection data; a display device that displays a 3D inspection image and/or a 2D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data and the 2D inspection image being obtained from the 3D inspection image or from the inspection data; a data processor that inserts at least one 3D virtual contraband image into the 3D inspection image of the inspected object; and an input device that receives a selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in the 2D inspection image including the 2D virtual contraband image corresponding to the 3D virtual contraband image; wherein the data processor responds to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image.
  • the data processor calculates at least one spatial feature parameter of the object under inspection according to the inspection data, and inserts at least one 3D into a 3D inspection image of the object to be inspected based on the spatial feature parameter Virtual contraband image.
  • the spatial feature parameter is related to at least one of a position, a size, and a direction of a 3D virtual contraband image to be inserted.
  • a method for marking a suspect object in a security inspection CT system comprising the steps of: performing transparent-region culling on the CT data obtained by the security inspection CT system to obtain a hierarchical bounding box of the non-transparent regions in the CT data; rendering the scene depth for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; retrieving the mark made by the user in the line-of-sight direction in the front-face depth map and the back-face depth map respectively, generating a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the mark made by the user in a direction orthogonal to the line of sight in the front-face depth map and the back-face depth map respectively, generating a second bounding box; performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in three-dimensional space; and fusing and displaying the marked region of three-dimensional space in the CT data.
  • the step of transparent-region culling comprises: sampling the CT data along the line-of-sight direction; performing volume-rendering integration of the line segment between each two sampling points using an opacity-based pre-integration lookup table to obtain the opacity of the corresponding segment; and subdividing and culling the transparent regions using an octree coding algorithm to obtain the hierarchical bounding box corresponding to the opaque data regions.
  • the step of rendering the scene depth comprises: culling the fragments with the larger depth value in the depth comparison to obtain the front-face depth map; and culling the fragments with the smaller depth value in the depth comparison to obtain the back-face depth map.
  • the first bounding box and the second bounding box are both oriented bounding boxes (bounding boxes in arbitrary directions).
  • a space-constraint-based transfer function fuses and displays the marked region of three-dimensional space in the CT data.
  • an apparatus for marking a suspect object in a security inspection CT system comprising: means for performing transparent-region culling on the CT data obtained by the security inspection CT system to obtain a hierarchical bounding box of the non-transparent regions in the CT data; means for rendering the scene depth for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; means for retrieving the mark made by the user in the line-of-sight direction in the front-face depth map and the back-face depth map respectively, generating a first bounding box; means for performing ray casting with the generated first bounding box as the texture carrier; means for retrieving the mark made by the user in a direction orthogonal to the line of sight in the front-face depth map and the back-face depth map respectively, generating a second bounding box; means for performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in three-dimensional space; and means for fusing and displaying the marked region of three-dimensional space in the CT data.
  • the apparatus for transparent-region culling includes: means for sampling the CT data along the line-of-sight direction; means for performing volume-rendering integration of the line segment between each two sampling points using a lookup-table method to obtain the opacity of the corresponding segment; and means for subdividing and culling the transparent regions using an octree coding algorithm to obtain the hierarchical bounding box.
  • the means for rendering the scene depth comprises: means for culling the fragments with the larger depth value in the depth comparison to obtain the front-face depth map; and means for culling the fragments with the smaller depth value in the depth comparison to obtain the back-face depth map.
  • FIG. 1 is a block diagram showing the structure of a security CT system according to an embodiment of the present disclosure
  • Figure 2 is a block diagram showing the structure of a computer data processor as shown in Figure 1;
  • FIG. 3 is a block diagram showing the structure of a controller according to an embodiment of the present disclosure
  • FIG. 4A is a schematic flow chart depicting a method in a security system in accordance with an embodiment of the present disclosure
  • FIG. 4B is a flowchart depicting a method of marking a suspect object in a CT system, in accordance with one embodiment of the present disclosure
  • Figure 5 is a schematic diagram depicting an octree decomposition algorithm
  • FIG. 6 is a schematic diagram of a hierarchical bounding box obtained by using an octree splitting algorithm in an embodiment of the present disclosure
  • FIG. 7 is a schematic diagram of a front-face depth map obtained in an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of a back-face depth map obtained in an embodiment of the present disclosure;
  • FIG. 9 is a schematic view depicting a radiation transmission process used in an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram showing a mark drawn by a user in an embodiment of the present disclosure.
  • FIG. 11 is a schematic diagram of the process of front-face retrieval and back-face retrieval using the user's mark;
  • FIG. 12 is a schematic diagram of the results obtained by front-face retrieval and back-face retrieval in an embodiment of the present disclosure;
  • Figure 13 is a diagram showing an OBB bounding box of a marker point column obtained in an embodiment of the present disclosure
  • Figure 14 is a diagram showing the update of the result of the previous marking to obtain a new ray casting range
  • Figure 15 is a diagram showing the result of performing the second marking in the orthogonal direction in the embodiment of the present disclosure.
  • FIG. 16 shows the results obtained by front-face retrieval and back-face retrieval using the second mark in an embodiment of the present disclosure;
  • FIG. 17 is a schematic diagram of the OBB bounding box of a marked point column obtained in an embodiment of the present disclosure;
  • Figure 18 is a diagram showing a process of performing a Boolean operation on two objects in an image space used in an embodiment of the present disclosure
  • FIG. 19 is a schematic diagram of the final three-dimensional marked region of the suspect object obtained in an embodiment of the present disclosure; and
  • FIG. 20 is a schematic diagram of the marked suspect object fused and displayed in the original data in an embodiment of the present disclosure.
  • references to "one embodiment", "an embodiment", "one example" or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure.
  • the appearances of the phrases "in one embodiment", "in an embodiment", "one example" or "an example" in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics may be combined in one or more embodiments or examples in any suitable combination and/or sub-combination.
  • the term “and/or” as used herein includes any and all combinations of one or more of the associated listed items.
  • embodiments of the present disclosure provide reading inspection data of an inspected object.
  • At least one 3D virtual contraband image (Fictional Threat Image) is inserted into the 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data.
  • a selection is received of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image; the 2D inspection image is obtained from the 3D inspection image or from the inspection data.
  • feedback is given in response to the selection as to whether the 3D inspection image contains at least one 3D virtual contraband image.
  • FIG. 1 is a schematic structural view of a CT system according to an embodiment of the present disclosure.
  • the CT apparatus includes a chassis 20, a carrier mechanism 40, a controller 50, a computer data processor 60, and the like.
  • the gantry 20 includes a source 10 that emits X-rays for inspection, such as an X-ray machine, and a detection and acquisition device 30.
  • the carrying mechanism 40 carries the inspected baggage 70 through the scanning area between the radiation source 10 and the detection and acquisition device 30 of the gantry 20, while the gantry 20 rotates about the direction of advance of the inspected baggage 70, so that the rays emitted by the radiation source 10 can pass through the inspected baggage 70 and perform a CT scan of it.
  • the detection and acquisition device 30 is, for example, a detector and data acquirer with an integrated module structure, such as a flat-panel detector, for detecting the rays transmitted through the inspected object, obtaining analog signals, and converting the analog signals into digital signals, thereby outputting the X-ray projection data of the inspected baggage 70.
  • the controller 50 is used to control the various parts of the entire system to work synchronously.
  • the computer data processor 60 is used to process the data collected by the data collector, process and reconstruct the data, and output the results.
  • the radiation source 10 is placed on the side where the inspected object can be placed, and the detection and acquisition device 30 is placed on the other side of the inspected baggage 70; it includes a detector and a data acquirer for acquiring multi-angle projection data of the inspected baggage 70.
  • the data acquirer includes a data amplification and shaping circuit, which can operate in a (current) integration mode or a pulse (counting) mode.
  • the data output cable of the detection and acquisition device 30 is coupled to the controller 50 and computer data processor 60 for storing the acquired data in computer data processor 60 in accordance with a trigger command.
  • FIG. 2 shows a block diagram of the computer data processor 60 shown in FIG. 1.
  • the data collected by the data collector is stored in the memory 61 via the interface unit 68 and the bus 64.
  • Configuration information and a program of the computer data processor are stored in a read only memory (ROM) 62.
  • a random access memory (RAM) 63 is used to temporarily store various data during the operation of the processor 66.
  • a computer program for performing data processing is also stored in the memory 61.
  • the internal bus 64 is connected to the above-described memory 61, read only memory 62, random access memory 63, input device 65, processor 66, display device 67, and interface unit 68.
  • after the user inputs an operation command through an input device 65 such as a keyboard and mouse, the instruction code of the computer program directs the processor 66 to execute a predetermined data-processing algorithm; after the data-processing result is obtained, it is displayed on a display device 67 such as an LCD display, or output directly as hard copy, for example by printing.
  • FIG. 3 shows a structural block diagram of a controller in accordance with an embodiment of the present disclosure.
  • the controller 50 includes: a control unit 51 that controls the radiation source 10, the carrying mechanism 40, and the detection and acquisition device 30 according to instructions from the computer 60; a trigger signal generating unit 52 that, under the control of the control unit, generates trigger commands for triggering the actions of the radiation source 10, the detection and acquisition device 30, and the carrying mechanism 40; a first driving device 53 that drives the carrying mechanism 40 to convey the inspected baggage 70 according to the trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51; and a second driving device 54 that rotates the gantry 20 according to the trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51.
  • the obtained projection data is stored in the computer 60 for CT tomographic image reconstruction, thereby obtaining tomographic image data of the checked baggage 70.
  • the computer 60 then obtains a DR image of the inspected baggage 70 from at least one viewing angle from the tomographic image data, for example by executing software, and displays it together with the reconstructed three-dimensional image to facilitate security inspection by the image interpreter.
  • the CT imaging system described above may also be a dual energy CT system, that is, the X-ray source 10 of the gantry 20 is capable of emitting both high and low energy rays, and the detection and acquisition device 30 detects projections at different energy levels.
  • dual-energy CT reconstruction is performed by computer data processor 60 to obtain equivalent atomic number and electron density data for each slice of baggage inspected 70.
  • FIG. 4A is a schematic flow chart depicting a method in a security system in accordance with an embodiment of the present disclosure.
  • in step S401, the inspection data of the inspected object is read.
  • in step S402, at least one 3D virtual contraband image (Fictional Threat Image) is inserted into the 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data.
  • the data processor selects one or more 3D images from the virtual dangerous goods image library to be inserted into the 3D inspection image of the object to be inspected.
  • in step S403, a selection is received of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data. For example, the user operates the input device to tick or circle a region in the image displayed on the screen.
  • in step S404, feedback is given in response to the selection as to whether the 3D inspection image contains at least one 3D virtual contraband image.
  • receiving the selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or in the 2D inspection image including the corresponding 2D virtual contraband image, includes receiving the coordinate position of the portion of the 3D inspection image or the 2D inspection image associated with the selection.
  • the step of responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image comprises at least one of the following: determining whether the at least one 3D virtual contraband image is present in the selected at least one region; popping up a dialog box confirming that the 3D inspection image contains at least one 3D virtual contraband image; confirming with a text prompt on the interface; highlighting the portion of the 3D inspection image or 2D inspection image associated with the selection; marking that portion; or filling that portion with a particular color or graphic.
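As a concrete illustration of this feedback step, here is a minimal Python sketch that checks whether an operator's selection hits one of the inserted virtual contraband images. It is a sketch under stated assumptions, not the patented implementation: the function names (`overlap_ratio`, `give_feedback`), the use of axis-aligned boxes in volume coordinates, and the 50% overlap threshold are all illustrative.

```python
import numpy as np

def overlap_ratio(sel_min, sel_max, fti_min, fti_max):
    """Fraction of the FTI bounding box covered by the selected region."""
    inter_min = np.maximum(sel_min, fti_min)
    inter_max = np.minimum(sel_max, fti_max)
    inter = np.prod(np.clip(inter_max - inter_min, 0.0, None))
    fti_vol = np.prod(fti_max - fti_min)
    return inter / fti_vol if fti_vol > 0 else 0.0

def give_feedback(selection, inserted_ftis, threshold=0.5):
    """Feedback for a selection: does the selected region (a (min, max)
    corner pair) contain one of the inserted 3D virtual contraband images,
    each also given as a (min, max) corner pair in volume coordinates?"""
    for fti_min, fti_max in inserted_ftis:
        if overlap_ratio(*selection, fti_min, fti_max) >= threshold:
            return "Confirmed: the selected region contains a virtual contraband image."
    return "No virtual contraband image in the selected region."
```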
  • At least one spatial feature parameter of the object under inspection is calculated based on the inspection data, and at least one 3D virtual contraband image is inserted into the 3D inspection image of the object to be inspected based on the spatial feature parameter.
  • the spatial feature parameter is related to at least one of a location, a size, and a direction of a 3D virtual contraband image to be inserted.
  • the selecting of the at least one region comprises selecting a portion of the displayed 3D inspection image at a viewing angle.
  • point cloud information characterizing the inspected object is recorded during the 3D rendering, and the step of responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image includes: obtaining sequences of point cloud clusters of the different objects within the inspected object by segmentation; determining at least one selected region from the cluster sequences based on a predetermined criterion; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.
  • the selection of the at least one region in other embodiments includes the selection of a portion of the displayed 3D inspection image over a plurality of different perspectives.
  • the selection of the at least one region includes selections of a portion of the displayed 3D inspection image at two different viewing angles, the two different viewing angles being substantially orthogonal to each other, wherein transparent-region culling is performed on the inspection data to obtain a hierarchical bounding box of the non-transparent regions, and the scene depth is then rendered for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; the step of responding to the selection by giving feedback includes: retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the region selected by the user at a second viewing angle substantially orthogonal to the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a second bounding box; performing a Boolean intersection of the first and second bounding boxes in image space to obtain a marked region in three-dimensional space as the at least one selected region; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: segmenting the 3D inspection image to obtain a plurality of 3D sub-images of the inspected object; calculating the distances and positions between the plurality of 3D sub-images; and inserting the 3D virtual contraband image based on the calculated distances and positions.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: determining the transparent and non-transparent portions in the volume data of the inspected object based on the opacity values of the voxels; determining the position and size of the luggage case of the inspected object from the opaque portion of the volume data; determining candidate insertion positions in the transparent regions within the extent of the case; selecting at least one selected region from the candidate insertion positions according to a predetermined criterion; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.
  • the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: culling the background from the 2D inspection image to obtain a 2D foreground image; determining the 2D insertion position of the 2D virtual contraband image in the 2D foreground image; determining the position of the 3D virtual contraband image in the 3D inspection image along the depth direction of the 2D insertion position; and inserting at least one 3D virtual contraband image at the determined position.
  • in some embodiments, a 2D virtual contraband image corresponding to the at least one 3D virtual contraband image may also be inserted into the 2D inspection image of the inspected object.
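To make the foreground-plus-depth insertion strategy above concrete, here is a minimal sketch: it picks a 2D spot inside the foreground footprint and then scans along the depth direction for a transparent run large enough to hold the contraband sub-volume. The names (`insert_fti`, `opacity`, `alpha_eps`), the centroid placement, and the non-empty-foreground assumption are illustrative, not the patent's algorithm.

```python
import numpy as np

def insert_fti(volume, fti, opacity, alpha_eps=0.05):
    """Insert a 3D virtual contraband sub-volume (FTI): choose a 2D position
    in the foreground footprint, then scan along depth for a transparent run.
    volume: (Z, Y, X) CT array; fti: smaller (z, y, x) array;
    opacity: vectorized map from CT values to [0, 1]."""
    fz, fy, fx = fti.shape
    alpha = opacity(volume)
    footprint = alpha.max(axis=0) > alpha_eps      # 2D foreground over (Y, X)
    ys, xs = np.nonzero(footprint)                 # assumes a non-empty bag
    # A simple 2D insertion spot: the footprint centroid, clamped to bounds.
    y = min(int(ys.mean()), volume.shape[1] - fy)
    x = min(int(xs.mean()), volume.shape[2] - fx)
    # Along depth, a slab is free when no voxel in it exceeds alpha_eps.
    free = alpha[:, y:y + fy, x:x + fx].max(axis=(1, 2)) <= alpha_eps
    for z in range(volume.shape[0] - fz + 1):
        if free[z:z + fz].all():
            volume[z:z + fz, y:y + fy, x:x + fx] = fti
            return (z, y, x)
    return None  # no transparent slot at this 2D position
```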
  • some embodiments of the present disclosure propose a technique for quickly marking suspect objects. After the transparent regions of the data are rapidly culled, the new entry and exit positions of the cast rays are obtained and recorded as depth maps. On this basis, the two-dimensional mark is restored to its depth information in voxel space. The geometries obtained from the two picks are subjected to a Boolean intersection in image space, finally yielding the marked region in three-dimensional space.
  • transparent-region culling is first performed to quickly obtain a tight hierarchical bounding box of the non-transparent regions in the data; the generated hierarchical bounding box is then rendered to obtain the front-face and back-face depth maps, which are the adjusted entry and exit positions of the cast rays.
  • the first pick is performed in the current line-of-sight direction, and the marked points are retrieved in the front-face and back-face depth maps respectively to generate a bounding box, such as an OBB bounding box.
  • the casting range of the rays is then updated, and the user performs the second pick at the orthogonal viewing angle to which the view is automatically rotated, generating a new OBB bounding box.
  • the OBB bounding boxes obtained in the first two steps are subjected to a Boolean intersection in image space to obtain the final marked region. Finally, the suspect region is fused and displayed in the original data using a space-constrained transfer function. With this marking method, transparent regions in CT data can be culled quickly and accurately, and the user can rapidly complete the suspect-region marking task in a friendly manner.
  • FIG. 4B is a flow chart describing a method of marking a suspect in a CT system, in accordance with one embodiment of the present disclosure.
  • after the CT device obtains the CT data, the transparent regions in the CT data are first culled.
  • the new entry and exit positions of the rays are recorded as depth maps.
  • during picking, the two-dimensional mark is looked up in the depth maps to restore its depth information in voxel space.
  • the geometries obtained from the two picks are subjected to a Boolean intersection in image space, finally yielding the marked region in three-dimensional space.
  • in step S411, the CT data obtained by the security inspection CT system is subjected to pre-integration-based transparent-region culling, and the hierarchical bounding box of the non-transparent regions in the CT data is obtained.
  • the three-dimensional data field processed by volume rendering is discrete data defined in three dimensions, and the entire data field is represented by a discrete three-dimensional matrix.
  • Each small square in the three-dimensional space represents a scalar value called a voxel.
  • the voxel can be used as a sampling point of the three-dimensional data field, and the scalar value obtained by sampling is s.
  • the volume data is first classified to specify the color and attenuation coefficient.
  • the volume data intensity s is mapped to a color I(s) and an attenuation coefficient τ(s) by introducing a transfer function.
  • the transfer function is determined by the grayscale data of the dual-energy CT and the material data, and is also referred to as a two-dimensional color table.
  • the Nyquist sampling frequency of the opacity function τ(s(x)) equals the product of the maximum Nyquist sampling frequency of τ(s) and the Nyquist sampling frequency of the scalar field s(x). Because the attenuation coefficient is nonlinear, the Nyquist sampling frequency increases sharply.
  • to solve this problem, a pre-integration method is employed. Moreover, with pre-integration it is possible to determine quickly whether a block of CT data is transparent.
  • the pre-integration is mainly divided into two steps.
  • the first step is to sample the continuous scalar field s(x) along the line of sight; the sampling frequency is then unaffected by the transfer function. The second step is to perform the volume-rendering integration of the line segment between each two sampling points by means of a lookup table.
  • the lookup table has three parameters: the start point of the segment, the end point of the segment, and the length of the segment. If the segment length is set to a constant, only two parameters, the start point and the end point of the segment, need be considered in the lookup-table computation.
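A minimal sketch of such a pre-integration table follows, assuming a 1D array `tau` of attenuation coefficients indexed by scalar bin and a simple absorption model; the function name and the numeric integration with `n_steps` sub-samples are illustrative assumptions.

```python
import numpy as np

def build_preintegration_table(tau, seg_len=1.0, n_steps=16):
    """Pre-integrated opacity table T[sf, sb]: the opacity of a ray segment
    of fixed length whose scalar value varies linearly from sf to sb.
    tau: 1D array mapping a scalar bin to an attenuation coefficient."""
    n_bins = len(tau)
    table = np.empty((n_bins, n_bins))
    dt = seg_len / n_steps
    for sf in range(n_bins):
        for sb in range(n_bins):
            # Sample the scalar linearly between the segment endpoints.
            s = np.linspace(sf, sb, n_steps).round().astype(int)
            # Absorption model: alpha = 1 - exp(-integral of tau ds).
            table[sf, sb] = 1.0 - np.exp(-tau[s].sum() * dt)
    return table

# A segment between two sampling points is then treated as transparent
# when table[sf, sb] falls below a small threshold alpha_eps.
```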
  • An octree is a tree-like data structure used to describe three-dimensional space.
  • Figure 5 is a schematic diagram depicting an octree decomposition algorithm. Each node of the octree represents the volume element of a cube. Each node has eight child nodes, and the volume elements represented by the eight child nodes are added together to be equal to the volume of the parent node. As shown in Figure 5, the octree includes eight nodes ulf, urf, ulb, urb, llf, lrf, llb, and lrb.
  • when spatial data is subdivided with the octree coding algorithm, assume the solid V to be represented can be placed inside a sufficiently large cube C; the octree of V with respect to C can then be defined by the following recursion: each node of the octree corresponds to a sub-cube of C, and the root node corresponds to C itself.
  • if V = C, the octree has only the root node; if V ≠ C, C is divided into eight equal sub-cubes, each corresponding to a child node of the root. As long as a sub-cube is neither completely blank nor completely occupied by V, it is divided into eight again, so the corresponding node obtains eight child nodes. This recursive judgment and division continues until the cube corresponding to a node is either completely blank, or completely occupied by V, or already of the predefined minimum sub-cube size.
  • according to the set leaf-node size, the volume data is subdivided layer by layer.
  • while traversing the data field, the maximum value smax and minimum value smin of all voxels in the sub-block corresponding to each leaf node are recorded, together with the axis-aligned bounding box and the volume of the sub-block.
  • the nodes are then merged upward layer by layer to construct the octree; the octree subdivision is illustrated in FIG. 5.
  • the octree is traversed, and the visibility state of the nodes at each level is set recursively.
  • for a non-leaf node, the states are transparent, partially transparent, and opaque; its state is determined by the states of the child nodes it contains. If all child nodes are transparent, the current node is transparent; if all child nodes are opaque, the current node is opaque; if some child nodes are transparent, the current node is partially transparent.
  • for a leaf node, the state is only transparent or opaque. The leaf-node visibility state is obtained from the opacity query.
  • the minimum and maximum gray values (smin, smax) of each sub-block have already been stored, and the opacity query function α(sf, sb) is used to quickly obtain the opacity α of the current sub-block; if α ≥ αε, the current leaf node is opaque, where αε is the set opacity threshold. As shown in FIG. 6, after the transparent blocks are removed, the opaque portions remain, where the large cuboid wireframe represents the original data extent.
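The sketch below ties these pieces together: an octree over the volume whose leaves are classified as transparent or opaque via the pre-integrated opacity query on their (smin, smax) range, with the hierarchical bounding boxes of non-transparent regions collected at the end. It assumes a cubic volume with power-of-two side length; class and function names are illustrative.

```python
import numpy as np

class OctreeNode:
    def __init__(self, origin, size):
        self.origin, self.size = origin, size   # cube sub-block of the volume
        self.children = []
        self.opaque = False

def build_octree(volume, origin, size, leaf, opacity_table, alpha_eps=0.05):
    """Subdivide recursively; classify a leaf as opaque when the pre-integrated
    opacity of its (smin, smax) range reaches alpha_eps."""
    node = OctreeNode(origin, size)
    z, y, x = origin
    if size <= leaf:
        block = volume[z:z + size, y:y + size, x:x + size]
        smin, smax = int(block.min()), int(block.max())
        node.opaque = opacity_table[smin, smax] >= alpha_eps
        return node
    half = size // 2
    for dz in (0, half):
        for dy in (0, half):
            for dx in (0, half):
                node.children.append(build_octree(
                    volume, (z + dz, y + dy, x + dx), half, leaf,
                    opacity_table, alpha_eps))
    node.opaque = any(c.opaque for c in node.children)
    return node

def opaque_boxes(node, out):
    """Collect the hierarchical bounding boxes of the non-transparent regions."""
    if not node.opaque:
        return out
    if not node.children or all(c.opaque for c in node.children):
        out.append((node.origin, node.size))
    else:
        for c in node.children:
            opaque_boxes(c, out)
    return out
```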
  • in step S412, the scene depth is rendered for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map.
  • in step S413, the marks made by the user in the line-of-sight direction are retrieved in the front-face depth map and the back-face depth map respectively, generating a first bounding box.
  • FIG. 7 is a schematic diagram of a front-face depth map obtained in an embodiment of the present disclosure.
  • FIG. 8 is a schematic diagram of a back-face depth map obtained in an embodiment of the present disclosure.
  • a three-dimensional model is needed as the carrier of the volume texture.
  • the volume texture is mapped onto the model by texture coordinates; a ray from the viewpoint to a point on the model that crosses the model space is then equivalent to a ray crossing the volume texture. Determining the entry and exit positions of a cast ray is thus converted into intersecting the ray with the volume-texture carrier.
  • the scene depth is rendered for the hierarchical bounding box obtained above, and the fragments with the larger depth value are culled to obtain the front-face depth map; the color value of each pixel on the front-face depth map represents the distance from the viewpoint to the closest point in that direction.
  • likewise, the fragments with the smaller depth value are culled and the scene depth is rendered to obtain the back-face depth map; the color value of each pixel on the back-face depth map represents the distance from the viewpoint to the farthest point in that direction.
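As a CPU-side stand-in for this depth-map rendering pass, the sketch below computes per-ray entry and exit depths over the set of opaque boxes with the classic slab method; on the GPU the same result comes from rasterizing the boxes with opposite depth tests. The function name and the one-ray-per-pixel layout are assumptions for illustration.

```python
import numpy as np

def depth_maps(boxes, ray_origins, ray_dir):
    """Front-face and back-face depth maps over a set of axis-aligned
    boxes (origin, size): per pixel/ray, the nearest entry depth and the
    farthest exit depth (slab-method ray/box intersection).
    ray_origins: (n, 3) array, one ray per pixel; ray_dir: (3,) array."""
    n = ray_origins.shape[0]
    front = np.full(n, np.inf)
    back = np.full(n, -np.inf)
    inv = 1.0 / ray_dir                            # assumes no zero component
    for origin, size in boxes:
        lo = (np.asarray(origin) - ray_origins) * inv
        hi = (np.asarray(origin) + size - ray_origins) * inv
        tmin = np.minimum(lo, hi).max(axis=1)      # entry depth per ray
        tmax = np.maximum(lo, hi).min(axis=1)      # exit depth per ray
        hit = tmax >= np.maximum(tmin, 0.0)
        front[hit] = np.minimum(front[hit], tmin[hit])
        back[hit] = np.maximum(back[hit], tmax[hit])
    return front, back
```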
  • FIG. 9 is a schematic diagram depicting a ray transmission process used in an embodiment of the present disclosure.
  • the basic flow of ray casting is to emit a ray from each pixel of the image plane along a fixed direction; the ray then traverses the entire image sequence (the volume).
  • along the ray, the image sequence is sampled and classified to obtain color values, and the color values are accumulated according to the ray absorption model until the ray has traversed the whole sequence; the accumulated color value is the color of the rendered image at that pixel.
  • the projection plane shown in Fig. 9 is the aforementioned "image".
  • ray casting ultimately yields a two-dimensional image, which by itself does not restore the depth information of the voxels projected along each pixel.
  • the marking event is discretized into a point column and retrieved in the front-face and back-face depth maps respectively, giving the projection of the marked area on the depth maps.
  • Figure 11 shows a schematic diagram of the process of front-face retrieval and back-face retrieval using the user's mark. In this way, a two-dimensional marking operation on the screen image is restored to a three-dimensional mark in voxel space.
  • after one mark is completed, the suspect area is still relatively large in scope. To continue cropping this suspect area, the OBB hierarchical bounding box corresponding to the marked point column in voxel space must be computed.
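Below is a sketch of that restoration step: each marked pixel is lifted to the two 3D points where its viewing ray enters and leaves the opaque data, read from the front- and back-face depth maps. The names and the flat pixel indexing are illustrative assumptions.

```python
import numpy as np

def lift_mark_to_3d(mark_pixels, front, back, ray_origins, ray_dir):
    """Restore the depth of a 2D mark: each marked pixel becomes the pair of
    3D points where its viewing ray enters and leaves the opaque data,
    looked up in the front-face and back-face depth maps."""
    points = []
    for p in mark_pixels:                  # p is a flat pixel / ray index
        t_in, t_out = front[p], back[p]
        if not np.isfinite(t_in) or t_out < t_in:
            continue                       # the ray misses the opaque data
        points.append(ray_origins[p] + t_in * ray_dir)
        points.append(ray_origins[p] + t_out * ray_dir)
    return np.asarray(points)              # marked point column in voxel space
```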
  • the basic idea of the bounding-box method is to substitute simple geometry for complex, irregular geometry. The bounding boxes of objects are first tested coarsely: when the bounding boxes intersect, the enclosed geometry is likely to intersect; when the bounding boxes do not intersect, the enclosed geometry cannot intersect. This eliminates large numbers of geometric parts that cannot possibly intersect, so intersecting parts are found quickly.
  • there are several common types of bounding boxes: the axis-aligned bounding box (AABB), the bounding sphere, the oriented bounding box (OBB) in an arbitrary direction, and the more general k-dop. Weighing the envelope tightness of the various bounding boxes against their computation cost, the OBB bounding box is used here for the marked point column.
  • the key to OBB bounding box calculation is to find the best direction and determine the minimum size of the bounding box that surrounds the object in that direction.
  • the position and orientation of the bounding box are computed using the first-moment (mean) and second-moment (covariance matrix) statistics.
  • the eigenvectors of the covariance matrix can be solved and normalized with numerical methods. Since C is a real symmetric matrix, the eigenvectors of C are mutually perpendicular and can serve as the direction axes of the bounding box. The vertices of the geometry to be enclosed are projected onto the direction axes to find the projection interval on each axis; the length of each projection interval is the corresponding size of the bounding box.
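This moment-based OBB construction can be written compactly; the sketch below is a direct transcription using the covariance eigenvectors as direction axes and the projection intervals as extents (the function name is illustrative).

```python
import numpy as np

def obb_from_points(pts):
    """Oriented bounding box of a point column: axes from the eigenvectors
    of the covariance matrix (second moment), extents from the projection
    intervals of the points on those axes."""
    mean = pts.mean(axis=0)                 # first moment
    cov = np.cov((pts - mean).T)            # covariance matrix C
    _, axes = np.linalg.eigh(cov)           # C is real symmetric, so the
                                            # eigenvector columns are orthogonal
    proj = (pts - mean) @ axes              # points expressed in the OBB frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    center = mean + axes @ ((lo + hi) / 2.0)
    half_extents = (hi - lo) / 2.0
    return center, axes, half_extents
```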
  • FIG. 13 is a diagram showing an OBB bounding box of a marker point column obtained in the embodiment of the present disclosure.
  • in step S414, the generated first bounding box is used as the texture carrier for ray casting; in step S415, the marks made by the user in the direction orthogonal to the line of sight are retrieved in the front-face depth map and the back-face depth map respectively, generating a second bounding box.
  • the portion outside the area is culled, and the generated OBB bounding box is used as a new volume texture carrier for ray casting.
  • FIG. 15 is a diagram showing the result of performing the second marking in the orthogonal direction in the embodiment of the present disclosure.
  • Figure 16 shows the results obtained by performing front-face retrieval and back-face retrieval using the second mark in an embodiment of the present disclosure.
  • Fig. 17 is a view showing an OBB bounding box of a marker point column obtained in the embodiment of the present disclosure.
  • in step S416, a Boolean intersection operation is performed on the first bounding box and the second bounding box in image space to obtain the marked region in three-dimensional space.
  • FIG. 18 is a diagram showing a process of performing a Boolean operation on two objects in an image space used in the embodiment of the present disclosure.
  • the computation is performed using CSG (constructive solid geometry) methods, of which there are two classes.
  • the first is based on object space: the CSG model is converted directly into a set of polygons, that is, into a B-rep model, and then rendered with OpenGL. This is the typical method, but the model conversion is inefficient and inconvenient to modify dynamically; the second class is based on image space, which is the method used here.
  • parity is used here to determine whether a point is inside a given solid.
  • parity can determine whether any point in space is inside a given volume, but since the OpenGL depth buffer can save only one depth value per pixel, the parity procedure for rendering the intersection of entities A and B is: first find the part of A inside B and draw it, then find the part of B inside A and draw it. After the first pass, the front faces of A inside B have been rendered; to obtain the front faces of B inside A, the depth-buffer pixels covered by the front faces of B are first re-rendered. This is because, after the previous operation, all parts of A are in the depth buffer, and the parts of A outside B may occlude the visible parts of B.
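The patent performs this Boolean intersection in image space with the depth buffer; as a simpler object-space analogue for illustration, the sketch below intersects the two pick results directly in voxel space with half-space tests against both OBBs. This is a stand-in, not the depth-buffer parity algorithm itself.

```python
import numpy as np

def in_obb(pts, center, axes, half_extents):
    """Half-space test: a point is inside an OBB when its coordinates in
    the box frame lie within the half-extents on every axis."""
    local = (pts - center) @ axes
    return np.all(np.abs(local) <= half_extents, axis=1)

def marked_region(voxel_coords, obb1, obb2):
    """Boolean intersection of the two pick results: a voxel belongs to the
    marked 3D region only if it lies inside both oriented bounding boxes
    (each given as (center, axes, half_extents), e.g. from obb_from_points)."""
    return in_obb(voxel_coords, *obb1) & in_obb(voxel_coords, *obb2)
```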
  • Figure 19 shows a schematic diagram of the final three-dimensional marked region of the suspect object obtained in an embodiment of the present disclosure.
  • in step S417, the marked region of three-dimensional space is fused and displayed in the CT data.
  • the suspect area needs to be displayed in the original data with a higher visual priority.
  • the final suspect region may not be a regular cuboid, so a transfer function based on spatial constraints is used here.
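One plausible reading of such a space-constrained transfer function is sketched below: the ordinary color-table lookup is overridden inside the marked voxel set, so the suspect region is rendered with boosted opacity and a distinct tint. The blend weights, tint, and names are assumptions for illustration, not the patented formulation.

```python
import numpy as np

def fused_rgba(s, inside_mark, transfer,
               tint=np.array([1.0, 0.2, 0.2]), boost=2.0):
    """Space-constrained transfer function: voxels inside the marked region
    are tinted and given higher opacity, so the suspect region is displayed
    with higher visual priority within the original data."""
    rgba = transfer(s).copy()                  # (n, 4) base color-table lookup
    rgba[inside_mark, :3] = 0.5 * rgba[inside_mark, :3] + 0.5 * tint
    rgba[inside_mark, 3] = np.clip(rgba[inside_mark, 3] * boost, 0.0, 1.0)
    return rgba
```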
  • the transparent and opaque regions of the volume data are determined according to the opacity, and portions of sufficient volume in the blank areas of the luggage are selected as candidate insertion positions; an insertion position matching the specified degree of concealment is then determined according to the distance from each position to the viewing plane and the number of objects around it.
  • an opacity-based pre-integration lookup table is first generated for rapid determination of the transparent and non-transparent regions of the volume data.
  • the opaque octree of the volume data is then constructed to determine the position and size of the luggage case in the CT data.
  • the volume data transparent octree is constructed.
  • the transparent octree only counts the transparent part in the data area, completely culling the opaque part, thereby obtaining the area in the box for insertion.
  • the portion of the transparent area where the volume meets the insertion requirement is selected as the candidate insertion position.
  • the final insertion position is determined based on the specified degree of insertion concealment.
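A minimal sketch of that last selection step, under assumed inputs: each candidate transparent block carries its distance to the viewing plane and a count of neighbouring objects, and the candidate whose normalized concealment score best matches the requested degree is chosen. The scoring formula is an illustrative assumption; the patent states only the two criteria.

```python
import numpy as np

def choose_insertion(candidates, concealment):
    """candidates: list of (position, depth_to_view_plane, n_neighbours).
    A candidate's concealment score grows with its depth and with the number
    of objects around it; the candidate whose normalized score is closest to
    the requested degree of concealment in [0, 1] is selected."""
    depth = np.array([c[1] for c in candidates], dtype=float)
    neigh = np.array([c[2] for c in candidates], dtype=float)
    score = 0.5 * depth / (depth.max() + 1e-9) \
          + 0.5 * neigh / (neigh.max() + 1e-9)
    return candidates[int(np.argmin(np.abs(score - concealment)))]
```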
  • the solution of the above embodiment can quickly insert a dangerous-goods image into CT data while ensuring that the insertion position lies inside the luggage and that the inserted image does not cover the original items in the luggage; the degree of concealment of the insertion can be set by a parameter, and the process runs in real time.
  • aspects of the embodiments disclosed herein may be implemented, in whole or in part, in an integrated circuit, as one or more computer programs running on one or more computers (e.g., implemented as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., implemented as one or more programs running on one or more microprocessors), as firmware, or as substantially any combination of the above.
  • those skilled in the art will, in light of the present disclosure, be capable of designing the circuits and/or writing the software and/or firmware code.
  • signal bearing media include, but are not limited to, recordable media such as floppy disks, hard drives, compact disks (CDs), digital versatile disks (DVDs), digital tapes, computer memories, and the like; and transmission-type media such as digital and / or analog communication media (eg, fiber optic cable, waveguide, wired communication link, wireless communication link, etc.).

Abstract

A security inspection CT system and a method therefor. Inspection data of an inspected object is read (S401). At least one 3D virtual contraband image (Fictional Threat Image) is inserted into a 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data (S402). A selection is received of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data (S403). Feedback is given in response to the selection as to whether the 3D inspection image contains at least one 3D virtual contraband image (S404). With the above solution, a user can conveniently and quickly mark suspect objects in a CT image and receive feedback on whether a virtual dangerous-goods image is included.

Description

Security inspection CT system and method therefor

Technical Field

The present application relates to security inspection, and in particular to a security inspection CT system and a method therefor.

Background Art

The multi-energy X-ray security inspection system is a new type of inspection system developed on the basis of the single-energy X-ray security inspection system. It provides not only the shape and contents of the inspected object but also information reflecting the effective atomic number of the inspected object, thereby distinguishing whether the object is organic or inorganic and displaying it in different colors on a color monitor to help operating personnel make judgments.

In the field of security inspection, TIP is an important requirement. TIP refers to inserting pre-acquired dangerous-goods images into baggage images, that is, inserting virtual dangerous-goods images (Fictional Threat Images). It plays an important role in training security inspectors and assessing their working efficiency. For the two-dimensional TIP of X-ray security inspection systems there are already mature solutions and wide applications, but for the three-dimensional TIP of security CT, no manufacturer currently provides such a function.
Summary of the Invention

In view of one or more technical problems in the prior art, the present disclosure proposes a security inspection CT system and a method therefor, which enable a user to quickly mark suspect objects in a CT image and to receive feedback on whether a virtual dangerous-goods image is included.

In one aspect of the present disclosure, a method in a security inspection CT system is proposed, comprising the steps of: reading inspection data of an inspected object; inserting at least one 3D virtual contraband image (Fictional Threat Image) into a 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data; receiving a selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data; and responding to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image.

According to some embodiments, the step of receiving the selection comprises: receiving the coordinate position of the portion of the 3D inspection image or 2D inspection image associated with the selection.

According to some embodiments, the step of responding to the selection by giving feedback comprises at least one of: determining whether the at least one 3D virtual contraband image is present in the selected at least one region; popping up a dialog box confirming that the 3D inspection image contains at least one 3D virtual contraband image; confirming with a text prompt on the interface that the 3D inspection image contains at least one 3D virtual contraband image; highlighting the portion of the 3D inspection image or 2D inspection image associated with the selection; marking the portion of the 3D inspection image or 2D inspection image associated with the selection; and filling the portion of the 3D inspection image or 2D inspection image associated with the selection with a specific color or graphic.

According to some embodiments, at least one spatial feature parameter of the inspected object is calculated from the inspection data, and at least one 3D virtual contraband image is inserted into the 3D inspection image of the inspected object based on the spatial feature parameter.

According to some embodiments, the spatial feature parameter relates to at least one of the position, size, and orientation of the 3D virtual contraband image to be inserted.

According to some embodiments, the selection of the at least one region comprises a selection of a portion of the displayed 3D inspection image at one viewing angle.

According to some embodiments, during the 3D rendering of the 3D inspection image, point cloud information characterizing the inspected object is recorded, and the step of responding to the selection by giving feedback comprises: obtaining sequences of point cloud clusters of the different objects within the inspected object by segmentation; determining at least one selected region from the point cloud cluster sequences of the different objects based on a predetermined criterion; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.

According to some embodiments, the selection of the at least one region comprises selections of a portion of the displayed 3D inspection image at a plurality of different viewing angles.

According to some embodiments, the selection of the at least one region comprises selections of a portion of the displayed 3D inspection image at two different viewing angles that are substantially orthogonal to each other, wherein transparent-region culling is performed on the inspection data to obtain a hierarchical bounding box of the non-transparent regions in the inspection data, and the scene depth is then rendered for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; the step of responding to the selection by giving feedback comprises: retrieving the region selected by the user at the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the region selected by the user at a second viewing angle substantially orthogonal to the first viewing angle in the front-face depth map and the back-face depth map respectively, generating a second bounding box; performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in three-dimensional space as the at least one selected region; and determining whether the at least one 3D virtual contraband image is present in the at least one selected region.

According to some embodiments, the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: segmenting the 3D inspection image to obtain a plurality of 3D sub-images of the inspected object; calculating the distances and positions between the plurality of 3D sub-images; and inserting the 3D virtual contraband image based on the calculated distances and positions.

According to some embodiments, the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: determining the transparent and non-transparent portions in the volume data of the inspected object based on the opacity values of the voxels; determining the position and size of the luggage case of the inspected object from the opaque portion of the volume data; determining candidate insertion positions in the transparent regions within the extent of the case; and selecting at least one position from the candidate insertion positions according to a predetermined criterion to insert at least one 3D contraband image.

According to some embodiments, the step of inserting at least one 3D virtual contraband image into the 3D inspection image of the inspected object comprises: culling the background from the 2D inspection image to obtain a 2D foreground image; determining the 2D insertion position of the 2D virtual contraband image in the 2D foreground image; determining the position of the 3D virtual contraband image in the 3D inspection image along the depth direction of the 2D insertion position; and inserting at least one 3D virtual contraband image at the determined position.

According to some embodiments, the method further comprises inserting a 2D virtual contraband image corresponding to the at least one 3D virtual contraband image into the 2D inspection image of the inspected object.
In another aspect of the present disclosure, a security inspection CT system is proposed, comprising: a CT scanning device that obtains inspection data of the inspected object; a memory that stores the inspection data; a display device that displays a 3D inspection image and/or a 2D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data and the 2D inspection image being obtained from the 3D inspection image or from the inspection data; a data processor that inserts at least one 3D virtual contraband image (Fictional Threat Image) into the 3D inspection image of the inspected object; and an input device that receives a selection of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in the 2D inspection image including the 2D virtual contraband image corresponding to the 3D virtual contraband image; wherein the data processor responds to the selection by giving feedback as to whether the 3D inspection image contains at least one 3D virtual contraband image.

According to some embodiments, the data processor calculates at least one spatial feature parameter of the inspected object from the inspection data, and inserts at least one 3D virtual contraband image into the 3D inspection image of the inspected object based on the spatial feature parameter.

According to some embodiments, the spatial feature parameter relates to at least one of the position, size, and orientation of the 3D virtual contraband image to be inserted.
In one aspect of the present disclosure, a method for marking a suspect object in a security inspection CT system is proposed, comprising the steps of: performing transparent-region culling on the CT data obtained by the security inspection CT system to obtain a hierarchical bounding box of the non-transparent regions in the CT data; rendering the scene depth for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; retrieving the mark made by the user in the line-of-sight direction in the front-face depth map and the back-face depth map respectively, generating a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the mark made by the user in a direction orthogonal to the line of sight in the front-face depth map and the back-face depth map respectively, generating a second bounding box; performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in three-dimensional space; and fusing and displaying the marked region of three-dimensional space in the CT data.

According to some embodiments, the step of transparent-region culling comprises: sampling the CT data along the line-of-sight direction; performing volume-rendering integration of the line segment between each two sampling points using an opacity-based pre-integration lookup table to obtain the opacity of the segment; and subdividing and culling the transparent regions using an octree coding algorithm to obtain the hierarchical bounding box corresponding to the opaque data regions.

According to some embodiments, the step of rendering the scene depth comprises: culling the fragments with the larger depth value in the depth comparison to obtain the front-face depth map; and culling the fragments with the smaller depth value in the depth comparison to obtain the back-face depth map.

According to some embodiments, the first bounding box and the second bounding box are both oriented bounding boxes (bounding boxes in arbitrary directions).

According to some embodiments, a space-constraint-based transfer function fuses and displays the marked region of three-dimensional space in the CT data.

In another aspect of the present disclosure, an apparatus for marking a suspect object in a security inspection CT system is proposed, comprising: means for performing transparent-region culling on the CT data obtained by the security inspection CT system to obtain a hierarchical bounding box of the non-transparent regions in the CT data; means for rendering the scene depth for the hierarchical bounding box to obtain a front-face depth map and a back-face depth map; means for retrieving the mark made by the user in the line-of-sight direction in the front-face depth map and the back-face depth map respectively, generating a first bounding box; means for performing ray casting with the generated first bounding box as the texture carrier; means for retrieving the mark made by the user in a direction orthogonal to the line of sight in the front-face depth map and the back-face depth map respectively, generating a second bounding box; means for performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in three-dimensional space; and means for fusing and displaying the marked region of three-dimensional space in the CT data.

According to some embodiments, the means for transparent-region culling comprises: means for sampling the CT data along the line-of-sight direction; means for performing volume-rendering integration of the line segment between each two sampling points using a lookup-table method to obtain the opacity of the corresponding segment; and means for subdividing and culling the transparent regions using an octree coding algorithm to obtain the hierarchical bounding box.

According to some embodiments, the means for rendering the scene depth comprises: means for culling the fragments with the larger depth value in the depth comparison to obtain the front-face depth map; and means for culling the fragments with the smaller depth value in the depth comparison to obtain the back-face depth map.

With the above technical solution, a user can conveniently and quickly mark suspect objects in a CT image and receive feedback on whether a virtual dangerous-goods image is included.
Brief Description of the Drawings

For a better understanding of the present disclosure, the present disclosure will be described in detail with reference to the following drawings:

FIG. 1 is a schematic structural diagram of a security inspection CT system according to an embodiment of the present disclosure;

FIG. 2 is a structural block diagram of the computer data processor shown in FIG. 1;

FIG. 3 is a structural block diagram of a controller according to an embodiment of the present disclosure;

FIG. 4A is a schematic flowchart describing a method in a security inspection system according to an embodiment of the present disclosure;

FIG. 4B is a flowchart describing a method of marking a suspect object in a CT system according to an embodiment of the present disclosure;

FIG. 5 is a schematic diagram describing the octree subdivision algorithm;

FIG. 6 is a schematic diagram of a hierarchical bounding box obtained with the octree subdivision algorithm in an embodiment of the present disclosure;

FIG. 7 is a schematic diagram of a front-face depth map obtained in an embodiment of the present disclosure;

FIG. 8 is a schematic diagram of a back-face depth map obtained in an embodiment of the present disclosure;

FIG. 9 is a schematic diagram describing the ray transmission process used in an embodiment of the present disclosure;

FIG. 10 is a schematic diagram of a mark drawn by the user in an embodiment of the present disclosure;

FIG. 11 is a schematic diagram of the process of front-face retrieval and back-face retrieval using the user's mark;

FIG. 12 is a schematic diagram of the results obtained by front-face retrieval and back-face retrieval in an embodiment of the present disclosure;

FIG. 13 is a schematic diagram of the OBB bounding box of the marked point column obtained in an embodiment of the present disclosure;

FIG. 14 is a schematic diagram of updating the result of the previous marking to obtain a new ray-casting range;

FIG. 15 is a schematic diagram of the result of performing the second marking in the orthogonal direction in an embodiment of the present disclosure;

FIG. 16 shows the results obtained by front-face retrieval and back-face retrieval using the second mark in an embodiment of the present disclosure;

FIG. 17 is a schematic diagram of the OBB bounding box of the marked point column obtained in an embodiment of the present disclosure;

FIG. 18 is a schematic diagram of the process of performing a Boolean intersection of two objects in image space used in an embodiment of the present disclosure;

FIG. 19 is a schematic diagram of the final three-dimensional marked region of the suspect object obtained in an embodiment of the present disclosure; and

FIG. 20 is a schematic diagram of the marked suspect object fused and displayed in the original data in an embodiment of the present disclosure.
Detailed Description

Specific embodiments of the present disclosure will be described in detail below. It should be noted that the embodiments described here are for illustration only and are not intended to limit the present disclosure. In the following description, numerous specific details are set forth to provide a thorough understanding of the present disclosure. However, it will be apparent to one of ordinary skill in the art that the present disclosure can be practiced without these specific details. In other instances, well-known structures, materials, or methods are not described in detail to avoid obscuring the present disclosure.

Throughout the specification, references to "one embodiment", "an embodiment", "one example" or "an example" mean that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment of the present disclosure. Therefore, appearances of the phrases "in one embodiment", "in an embodiment", "one example" or "an example" in various places throughout the specification do not necessarily all refer to the same embodiment or example. Furthermore, particular features, structures, or characteristics may be combined in one or more embodiments or examples in any suitable combination and/or sub-combination. In addition, it should be understood by those of ordinary skill in the art that the term "and/or" used here includes any and all combinations of one or more of the associated listed items.

Addressing the problem that the prior art cannot rapidly insert 3D virtual contraband images, embodiments of the present disclosure provide reading inspection data of an inspected object. At least one 3D virtual contraband image (Fictional Threat Image) is inserted into a 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data. A selection is received of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data. Feedback is given in response to the selection as to whether the 3D inspection image contains at least one 3D virtual contraband image. With the above solution, a user can conveniently and quickly mark suspect objects in a CT image and receive feedback on whether a virtual dangerous-goods image is included.
FIG. 1 is a schematic structural diagram of a CT system according to an embodiment of the present disclosure. As shown in FIG. 1, the CT apparatus of this embodiment includes a gantry 20, a carrying mechanism 40, a controller 50, a computer data processor 60, and the like. The gantry 20 includes a radiation source 10 that emits X-rays for inspection, such as an X-ray machine, and a detection and acquisition device 30. The carrying mechanism 40 carries the inspected baggage 70 through the scanning area between the radiation source 10 and the detection and acquisition device 30 of the gantry 20, while the gantry 20 rotates about the direction of advance of the inspected baggage 70, so that the rays emitted by the radiation source 10 can pass through the inspected baggage 70 and perform a CT scan of it.

The detection and acquisition device 30 is, for example, a detector and data acquirer with an integrated module structure, such as a flat-panel detector, for detecting the rays transmitted through the inspected object, obtaining analog signals, and converting the analog signals into digital signals, thereby outputting the X-ray projection data of the inspected baggage 70. The controller 50 controls all parts of the system to work synchronously. The computer data processor 60 processes the data collected by the data acquirer, reconstructs the data, and outputs the results.

As shown in FIG. 1, the radiation source 10 is placed on the side where the inspected object can be placed, and the detection and acquisition device 30 is placed on the other side of the inspected baggage 70; it includes a detector and a data acquirer for acquiring multi-angle projection data of the inspected baggage 70. The data acquirer includes a data amplification and shaping circuit, which can operate in a (current) integration mode or a pulse (counting) mode. The data output cable of the detection and acquisition device 30 is connected to the controller 50 and the computer data processor 60, and the acquired data is stored in the computer data processor 60 according to a trigger command.

FIG. 2 shows a structural block diagram of the computer data processor 60 shown in FIG. 1. As shown in FIG. 2, the data collected by the data acquirer is stored in the memory 61 through the interface unit 68 and the bus 64. Configuration information and programs of the computer data processor are stored in the read-only memory (ROM) 62. The random-access memory (RAM) 63 temporarily stores various data during the operation of the processor 66. In addition, a computer program for data processing is stored in the memory 61. The internal bus 64 connects the memory 61, the read-only memory 62, the random-access memory 63, the input device 65, the processor 66, the display device 67, and the interface unit 68 described above.

After the user inputs an operation command through an input device 65 such as a keyboard and a mouse, the instruction code of the computer program directs the processor 66 to execute a predetermined data-processing algorithm; after the processing result is obtained, it is displayed on a display device 67 such as an LCD display, or output directly as hard copy, for example by printing.

FIG. 3 shows a structural block diagram of a controller according to an embodiment of the present disclosure. As shown in FIG. 3, the controller 50 includes: a control unit 51 that controls the radiation source 10, the carrying mechanism 40, and the detection and acquisition device 30 according to instructions from the computer 60; a trigger signal generating unit 52 that, under the control of the control unit, generates trigger commands for triggering the actions of the radiation source 10, the detection and acquisition device 30, and the carrying mechanism 40; a first driving device 53 that drives the carrying mechanism 40 to convey the inspected baggage 70 according to the trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51; and a second driving device 54 that rotates the gantry 20 according to the trigger command generated by the trigger signal generating unit 52 under the control of the control unit 51. The projection data obtained by the detection and acquisition device 30 is stored in the computer 60 for CT tomographic image reconstruction, yielding tomographic image data of the inspected baggage 70. The computer 60 then obtains a DR image of the inspected baggage 70 from at least one viewing angle from the tomographic image data, for example by executing software, and displays it together with the reconstructed three-dimensional image to facilitate security inspection by the image interpreter. According to other embodiments, the above CT imaging system may also be a dual-energy CT system, that is, the X-ray source 10 of the gantry 20 can emit both high-energy and low-energy rays; after the detection and acquisition device 30 detects the projection data at the different energy levels, the computer data processor 60 performs dual-energy CT reconstruction to obtain the equivalent atomic number and electron density data of each slice of the inspected baggage 70.
FIG. 4A is a schematic flowchart describing a method in a security inspection system according to an embodiment of the present disclosure.

As shown in FIG. 4A, in step S401, the inspection data of the inspected object is read.

In step S402, at least one 3D virtual contraband image (Fictional Threat Image) is inserted into the 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data. For example, the data processor selects one or more 3D images from a virtual dangerous-goods image library and inserts them into the 3D inspection image of the inspected object.

In step S403, a selection is received of at least one region in the 3D inspection image including the 3D virtual contraband image, or of at least one region in a 2D inspection image including a 2D virtual contraband image corresponding to the 3D virtual contraband image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data. For example, the user operates the input device to tick or circle a region in the image displayed on the screen.

In step S404, feedback is given in response to the selection as to whether the 3D inspection image contains at least one 3D virtual contraband image.
In some embodiments, the step of receiving a selection of at least one region in the 3D inspection image including the 3D fictional threat image, or of at least one region in the 2D inspection image including the 2D fictional threat image corresponding to the 3D fictional threat image, includes: receiving the coordinate position of the part of the 3D inspection image or 2D inspection image associated with the selection.
In some embodiments, the step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image includes at least one of the following:
determining whether the at least one 3D fictional threat image is present in the selected at least one region;
popping up a dialog box confirming that the 3D inspection image contains at least one 3D fictional threat image;
confirming with a text prompt on the interface that the 3D inspection image contains at least one 3D fictional threat image;
highlighting the part of the 3D inspection image or 2D inspection image associated with the selection;
marking the part of the 3D inspection image or 2D inspection image associated with the selection;
filling the part of the 3D inspection image or 2D inspection image associated with the selection with a specific color or pattern.
For example, at least one spatial feature parameter of the inspected object is calculated from the inspection data, and at least one 3D fictional threat image is inserted into the 3D inspection image of the inspected object based on the spatial feature parameter. In some embodiments, the spatial feature parameter relates to at least one of the position, size, and orientation of the 3D fictional threat image to be inserted. Here, the selection of at least one region may include a selection of a part of the displayed 3D inspection image from one viewing angle. For example, point cloud information characterizing the inspected object is recorded during the 3D rendering of the 3D inspection image, and the step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image includes: obtaining sequences of point cloud information clusters of the different objects in the inspected object through segmentation; determining at least one selected region from the point cloud information cluster sequences of the different objects based on a predetermined criterion; and determining whether the at least one 3D fictional threat image is present in the at least one selected region.
In other embodiments, the selection of at least one region includes a selection of a part of the displayed 3D inspection image from multiple different viewing angles. For example, the selection of at least one region includes a selection of a part of the displayed 3D inspection image from two different viewing angles that are substantially orthogonal to each other. Transparent-region culling is performed on the inspection data to obtain a bounding-box hierarchy of the non-transparent regions in the inspection data, and scene depth is then rendered for the bounding-box hierarchy to obtain a front-face depth map and a back-face depth map. The step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image includes: retrieving the region selected by the user at a first viewing angle in the front-face depth map and the back-face depth map, respectively, to generate a first bounding box; performing ray casting with the generated first bounding box as the texture carrier; retrieving the region selected by the user at a second viewing angle substantially orthogonal to the first viewing angle in the front-face depth map and the back-face depth map, respectively, to generate a second bounding box; performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in 3D space as the at least one selected region; and determining whether the at least one 3D fictional threat image is present in the at least one selected region.
In some embodiments, the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object includes: segmenting the 3D inspection image to obtain multiple 3D sub-images of the inspected object; calculating the distances and positions between the multiple 3D sub-images; and inserting the 3D fictional threat image based on the calculated distances and positions.
In further embodiments, the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object includes: determining the transparent and non-transparent parts of the volume data of the inspected object based on voxel opacity values; determining the position and size of the luggage of the inspected object from the opaque part of the volume data; determining candidate insertion positions in the transparent regions within the extent of the luggage; selecting at least one selected region from the candidate insertion positions according to a predetermined criterion; and determining whether the at least one 3D fictional threat image is present in the at least one selected region.
In further embodiments, the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object includes: removing the background image from the 2D inspection image to obtain a 2D foreground image; determining a 2D insertion position of the 2D fictional threat image in the 2D foreground image; determining the position of the 3D fictional threat image in the 3D inspection image along the depth direction of the 2D insertion position; and inserting at least one 3D fictional threat image at the determined position.
The above describes the insertion of a 3D virtual threat image, but in some embodiments of the present disclosure a 2D fictional threat image corresponding to the at least one 3D fictional threat image may also be inserted into the 2D inspection image of the inspected object.
In addition, to address the problems in the prior art, some embodiments of the present disclosure propose a technique for quickly marking suspect objects. After the transparent regions of the data are quickly culled, the new entry and exit positions of the cast rays are obtained and recorded as depth maps. On this basis, a two-dimensional mark is restored to its depth information in voxel space. The geometries obtained from the two markings are subjected to a Boolean intersection in image space, finally yielding the marked region in 3D space.
For example, in some embodiments, transparent-region culling is performed first to quickly obtain a tight bounding-box hierarchy of the non-transparent regions in the data. The generated bounding-box hierarchy is then rendered to obtain the front-face and back-face depth maps, which serve as the adjusted entry and exit positions of the cast rays. Next, a first pick is made along the current viewing direction, and the marked point list is retrieved in the front-face and back-face depth maps, respectively, to generate a bounding box such as an OBB. Then, based on the generated OBB, the ray casting range is updated, and the user makes a second pick at the automatically rotated orthogonal viewing angle to generate a new OBB. The OBBs obtained in the previous two steps are subjected to a Boolean intersection in image space to obtain the final marked region. Finally, the suspect region is fused and displayed in the original data using a spatially constrained transfer function. With the marking method of the present disclosure, the transparent regions in the CT data can be culled quickly and accurately, and the user can complete the suspect-region marking task rapidly in a user-friendly manner.
FIG. 4B is a flowchart describing a method of marking a suspect object in a CT system according to an embodiment of the present disclosure. After the CT apparatus obtains the CT data, the transparent regions in the CT data are first culled. After the transparent regions of the data are quickly culled, the new entry and exit positions of the rays are recorded as depth maps. During picking, the two-dimensional mark is looked up in the depth maps to restore its depth information in voxel space. The geometries obtained from the two markings are subjected to a Boolean intersection in image space, finally yielding the marked region in 3D space.
In step S411, pre-integration-based transparent-region culling is performed on the CT data obtained by the security inspection CT system to obtain a bounding-box hierarchy of the non-transparent regions in the CT data.
1) Generating the opacity-based pre-integration lookup table
The 3D data field processed in volume rendering consists of discrete data defined in 3D space, the whole data field being represented by a discrete 3D matrix. Each small cell in 3D space represents a scalar value and is called a voxel. In practical computation, a voxel serves as a sampling point of the 3D data field, and the scalar value obtained by sampling is s. For the data field s(x), the volume data must first be classified to assign colors and attenuation coefficients. A transfer function is introduced to map the volume data intensity s to a color I(s) and an attenuation coefficient τ(s). In an implementation example, this transfer function is jointly determined by the grayscale data and the material data of the dual-energy CT, and is also called a 2D color table.
In volume rendering, when the 3D scalar field s(x) is sampled, the Nyquist sampling frequency of the opacity function τ(s(x)) equals the maximum Nyquist sampling frequency of τ(s) multiplied by the Nyquist sampling frequency of the scalar value s(x). Because the attenuation coefficient is nonlinear, the Nyquist sampling frequency increases sharply. To solve this problem caused by the nonlinearity of the transfer function, a pre-integration method is adopted. The pre-integration method also makes it possible to determine quickly whether a block of CT data is transparent.
Pre-integration consists mainly of two steps. The first step is to sample the continuous scalar field s(x) along the viewing direction; the sampling frequency here is not affected by the transfer function. The second step is to perform the volume rendering integral for the line segment between each pair of sampling points via a lookup table.
After the sampling of s(x) is completed, the volume rendering integral is computed for each small line segment, and this integration is carried out by table lookup. The lookup table has three parameters: the segment start point, the segment end point, and the segment length. If the segment length is set constant, only two parameters need to be considered in the table lookup: the segment start point and the segment end point.
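To make the two-parameter lookup concrete, the following minimal sketch (Python with NumPy; the function name build_preintegration_table and the piecewise-constant treatment of τ per scalar bin are assumptions of this illustration, not part of the disclosure) precomputes the opacity α(sf, sb) of a fixed-length segment for every pair of start and end scalar values:
    import numpy as np

    def build_preintegration_table(tau, segment_length=1.0):
        """Precompute alpha(s_front, s_back) for a ray segment of fixed length.
        tau: 1D array; tau[s] is the extinction coefficient that the transfer
        function assigns to scalar value s."""
        n = len(tau)
        T = np.concatenate(([0.0], np.cumsum(tau)))   # T[k] = sum of tau[0:k]
        table = np.empty((n, n))
        for sf in range(n):
            for sb in range(n):
                lo, hi = min(sf, sb), max(sf, sb)
                # mean extinction along a segment whose scalar value varies
                # linearly from sf to sb (piecewise-constant tau per bin)
                mean_tau = (T[hi + 1] - T[lo]) / (hi - lo + 1)
                table[sf, sb] = 1.0 - np.exp(-mean_tau * segment_length)
        return table
A block whose scalar range (smin, smax) indexes a small table value can then be treated as transparent, which is exactly the query used in the octree culling described next.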
2) Octree-based transparent-region culling
An octree is a tree data structure used to describe 3D space. FIG. 5 is a schematic diagram describing the octree subdivision algorithm. Each node of the octree represents the volume element of a cube. Each node has eight child nodes, and the volume elements represented by the eight child nodes add up to the volume of the parent node. As shown in FIG. 5, the octree includes the eight nodes ulf, urf, ulb, urb, llf, lrf, llb, and lrb. When subdividing spatial data with the octree encoding algorithm, assume that the solid V to be represented can be placed inside a sufficiently large cube C; the octree of V with respect to C can then be defined by the following recursion: each node of the octree corresponds to a sub-cube of C, with the root node corresponding to C itself. If V = C, the octree of V has only the root node. If V ≠ C, C is divided into eight equal sub-cubes, each corresponding to a child node of the root. Whenever a sub-cube is neither completely empty nor completely occupied by V, it is divided into eight again, so that the corresponding node acquires eight child nodes. This recursive testing and splitting continues until the cube corresponding to a node is either completely empty, or completely occupied by V, or already of the predefined sub-cube size.
According to the set leaf-node size, the volume data are subdivided level by level. While traversing the data field, the maximum value smax and minimum value smin of all voxels in the sub-block corresponding to each leaf node, together with the sub-block's axis-aligned bounding box and volume, are recorded. The nodes are then merged upward level by level to build the octree; the octree subdivision is illustrated in FIG. 5.
According to an embodiment of the present disclosure, the octree is traversed and the visibility state of the nodes at each level is set recursively. A non-leaf node has three possible states: transparent, partially transparent, and opaque. Its state is determined by the states of its child nodes: if all child nodes are transparent, the current node is transparent; if all child nodes are opaque, the current node is opaque; and if only some child nodes are transparent, the current node is partially transparent. A leaf node has only two states: transparent and opaque. The visibility state of a leaf node is obtained by an opacity query. Specifically, when the octree is built, the minimum and maximum grayscale values (smin, smax) of each sub-block are already stored; the opacity query function α(sf, sb) described above is used to quickly obtain the opacity α of the current sub-block, and if α ≥ αε, the current leaf node is opaque, where αε is a set opacity threshold. FIG. 6 shows the opaque parts remaining after the transparent blocks have been culled, where the large cuboid wireframe represents the original data extent.
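As an illustration of this construction and classification, the sketch below (Python; the class and function names are hypothetical, and a cubic volume whose side is a power of two times the leaf size is assumed) builds the octree while recording each block's (smin, smax) and then sets node visibility with the pre-integrated opacity table:
    import numpy as np

    TRANSPARENT, OPAQUE, PARTIAL = 0, 1, 2

    class OctreeNode:
        def __init__(self, bounds, children=(), smin=0.0, smax=0.0):
            self.bounds = bounds            # ((x0, y0, z0), (x1, y1, z1)) in voxels
            self.children = list(children)
            self.smin, self.smax = smin, smax
            self.state = None

    def build_octree(volume, bounds, leaf_size):
        """Subdivide level by level, recording each block's scalar range;
        assumes a cubic volume whose side is a power of two times leaf_size."""
        (x0, y0, z0), (x1, y1, z1) = bounds
        if x1 - x0 <= leaf_size:            # leaf: store the block's (smin, smax)
            block = volume[x0:x1, y0:y1, z0:z1]
            return OctreeNode(bounds, smin=float(block.min()), smax=float(block.max()))
        mx, my, mz = (x0 + x1) // 2, (y0 + y1) // 2, (z0 + z1) // 2
        children = [build_octree(volume, ((a0, b0, c0), (a1, b1, c1)), leaf_size)
                    for a0, a1 in ((x0, mx), (mx, x1))
                    for b0, b1 in ((y0, my), (my, y1))
                    for c0, c1 in ((z0, mz), (mz, z1))]
        return OctreeNode(bounds, children,
                          smin=min(c.smin for c in children),
                          smax=max(c.smax for c in children))

    def classify(node, alpha_table, alpha_eps):
        """Set visibility states bottom-up using the pre-integrated opacity table."""
        if not node.children:
            alpha = alpha_table[int(node.smin), int(node.smax)]
            node.state = OPAQUE if alpha >= alpha_eps else TRANSPARENT
        else:
            states = {classify(c, alpha_table, alpha_eps) for c in node.children}
            node.state = states.pop() if len(states) == 1 else PARTIAL
        return node.state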
In step S412, scene depth is rendered for the bounding-box hierarchy to obtain a front-face depth map and a back-face depth map. In step S413, the mark made by the user along the viewing direction is retrieved in the front-face depth map and the back-face depth map, respectively, to generate a first bounding box. FIG. 7 is a schematic diagram of the front-face depth map obtained in an embodiment of the present disclosure. FIG. 8 is a schematic diagram of the back-face depth map obtained in an embodiment of the present disclosure.
Volume rendering requires a 3D model as the carrier of the volume texture; the volume texture corresponds to the model through texture coordinates, and rays are then cast from the viewpoint to points on the model, so that a ray traversing the model space is equivalent to the ray traversing the volume texture. Determining the entry and exit positions of the cast rays is thereby converted into an intersection problem between the rays and the volume-texture carrier. As shown in FIG. 7, a scene depth map is rendered for the bounding-box hierarchy obtained above, and fragments with larger depth values are discarded to obtain the front-face depth map, in which the color value of each pixel represents the distance to the point closest to the viewpoint in a given direction. As shown in FIG. 8, fragments with smaller depth values are discarded and the scene depth map is rendered to obtain the back-face depth map, in which the color value of each pixel represents the distance to the point farthest from the viewpoint in a given direction.
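A CPU stand-in for this two-pass depth rendering might look as follows (Python; the orthographic view along +z and axis-aligned boxes are simplifying assumptions of this sketch, whereas the disclosure renders the hierarchy with opposite GPU depth tests):
    import numpy as np

    def depth_maps(opaque_boxes, width, height, z_far):
        """Front-face and back-face depth maps of a set of axis-aligned boxes
        under an orthographic view along +z: per pixel, keep the nearest entry
        depth and the farthest exit depth over all opaque blocks."""
        front = np.full((height, width), z_far, dtype=float)   # nearest hit
        back = np.zeros((height, width), dtype=float)          # farthest hit
        for (x0, y0, z0), (x1, y1, z1) in opaque_boxes:
            front[y0:y1, x0:x1] = np.minimum(front[y0:y1, x0:x1], z0)
            back[y0:y1, x0:x1] = np.maximum(back[y0:y1, x0:x1], z1)
        return front, back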
FIG. 9 is a schematic diagram describing the ray casting process used in an embodiment of the present disclosure. The basic flow of ray casting is as follows: from each pixel of the image, a ray is emitted in a fixed direction; the ray traverses the entire image sequence, and during this process the image sequence is sampled and classified to obtain color values, which are accumulated according to a light absorption model until the ray has traversed the whole image sequence; the color value finally obtained is the color of the rendered image. The projection plane shown in FIG. 9 is the aforementioned "image".
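The per-ray accumulation described above can be sketched as a front-to-back compositing loop (Python; nearest-neighbour sampling, the fixed step dt, and the 0.99 early-termination threshold are choices of this illustration):
    import numpy as np

    def cast_ray(volume, color_tf, alpha_tf, origin, direction, t_enter, t_exit, dt=0.5):
        """Front-to-back compositing of one ray between its entry and exit depths.
        volume holds integer transfer-function bins; color_tf is (n, 3), alpha_tf is (n,)."""
        rgb, alpha = np.zeros(3), 0.0
        t = t_enter
        while t < t_exit and alpha < 0.99:           # early termination once opaque
            p = origin + t * direction
            idx = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
            s = volume[tuple(idx)]                   # nearest-neighbour sample
            a = alpha_tf[s]
            rgb += (1.0 - alpha) * a * color_tf[s]   # absorption-model accumulation
            alpha += (1.0 - alpha) * a
            t += dt
        return rgb, alpha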
The final output of ray casting is a 2D picture, from which the depth information of the voxels traversed by the ray cast through each pixel cannot be recovered. To accomplish region picking in voxel space, the suspect region is outlined on the projection plane, and the resulting mark is shown in FIG. 10. To restore the depth information of the marking result in voxel space, the marking events are discretized into a point list, which is retrieved in the front-face and back-face depth maps, respectively, to obtain the projection of the marked region onto the depth maps. FIG. 11 illustrates this process of front-face and back-face retrieval using the user's mark, and FIG. 12 shows the retrieval results. In this way, a single 2D marking operation on the screen image is restored to a 3D mark in voxel space.
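Lifting the discretized 2D mark into voxel space then amounts to two depth-map lookups per marked pixel, roughly as follows (Python; marks_to_3d is an illustrative name):
    import numpy as np

    def marks_to_3d(mark_pixels, front_depth, back_depth):
        """Lift a discretized 2D mark into voxel space: each marked pixel (x, y)
        yields the ray's entry and exit points read from the two depth maps."""
        pts = []
        for x, y in mark_pixels:
            pts.append((x, y, front_depth[y, x]))   # nearest surface along the ray
            pts.append((x, y, back_depth[y, x]))    # farthest surface along the ray
        return np.asarray(pts, dtype=float)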
After one marking is completed, the range covered by the suspect region is still fairly large. To trim this suspect region further, the OBB bounding box corresponding to the marked point list in voxel space must be computed.
The basic idea of the bounding-box method is to substitute simple geometry for complex, irregular geometry: the bounding boxes of objects are first tested coarsely, and only when the bounding boxes intersect can the enclosed geometry possibly intersect; when the bounding boxes do not intersect, the enclosed geometry certainly does not intersect. This rules out large numbers of geometric bodies and parts that cannot intersect, so that the intersecting parts are found quickly. Bounding boxes come in several kinds: the axis-aligned bounding box (AABB), the bounding sphere, the arbitrarily oriented bounding box (OBB), and the more general k-DOP. Weighing the tightness of fit against the computational cost of the various bounding boxes, the OBB is chosen for the marked point list. The key to computing an OBB is to find the best orientation and determine the minimum extent of the box enclosing the object along that orientation. First-order moment (mean) and second-order moment (covariance matrix) statistics are used to compute the position and orientation of the bounding box.
The eigenvectors of the covariance matrix can be solved numerically and normalized. Since C is a real symmetric matrix, the eigenvectors of C are mutually perpendicular and can serve as the orientation axes of the bounding box. The vertices of the geometry to be enclosed are projected onto the orientation axes, and the projection interval on each axis is found; the length of each projection interval is the corresponding dimension of the desired bounding box. FIG. 13 shows the OBB of the marked point list obtained in an embodiment of the present disclosure.
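A minimal version of this moment-based OBB fit (Python with NumPy; obb_from_points is an illustrative name) is:
    import numpy as np

    def obb_from_points(points):
        """Fit an OBB using first-order (mean) and second-order (covariance)
        moments: the covariance eigenvectors give the box axes."""
        pts = np.asarray(points, dtype=float)
        mean = pts.mean(axis=0)
        cov = np.cov((pts - mean).T)
        _, axes = np.linalg.eigh(cov)        # real symmetric: orthonormal axes
        proj = (pts - mean) @ axes           # project vertices onto each axis
        lo, hi = proj.min(axis=0), proj.max(axis=0)
        center = mean + axes @ ((lo + hi) / 2.0)
        half_extents = (hi - lo) / 2.0       # projection interval half-lengths
        return center, axes, half_extents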
In step S414, ray casting is performed with the generated first bounding box as the texture carrier. In step S415, the mark made by the user in the direction orthogonal to the viewing direction is retrieved in the front-face depth map and the back-face depth map, respectively, to generate a second bounding box.
1) Updating the ray casting range
As shown in FIG. 14, once a suspect region has been determined, the parts outside the region are culled from display, and the generated OBB is used as the new volume-texture carrier for ray casting.
2) Second pick after rotating the viewing angle
FIG. 15 shows the result of the second marking performed in the orthogonal direction in an embodiment of the present disclosure. FIG. 16 shows the results of front-face and back-face retrieval using the second marking. FIG. 17 shows the OBB of the marked point list obtained in an embodiment of the present disclosure. In step S416, a Boolean intersection of the first bounding box and the second bounding box is performed in image space to obtain the marked region in 3D space. FIG. 18 is a schematic diagram of the process of performing a Boolean intersection on two objects in image space as used in an embodiment of the present disclosure.
To obtain the intersection region of the two OBBs quickly, a CSG method is used. There are two directions for rendering a CSG model with OpenGL. One is based on object space: the CSG model is converted directly into a set of polygons and then rendered with OpenGL; conversion to a B-rep model is the typical approach of this scheme, but the model conversion is inefficient and inconvenient for dynamic modification. The other is based on image space, which is the method adopted here.
Performing the intersection in image space requires no modification of the models; a dynamic computation is performed every frame to decide which surfaces should be displayed and which should be hidden or clipped. The OpenGL stencil buffer is used to implement the intersection operation in CSG. Borrowing the idea of ray casting, when a solid's surface is projected onto the screen, the number of times its surface pixels intersect the other surfaces is counted.
Through the preceding operations, two cuboids have been obtained. Computing their intersection essentially means finding the parts of each cuboid's surface that lie inside the other cuboid's volume. During the intersection, any given component solid is rendered in separate passes for its front and back surfaces. In each rendering pass, the current surface is first rendered into the depth buffer, and then, in combination with stencil-plane operations, the other solid is used to determine the part of the current surface inside the other solid.
Parity checking is used here to determine whether a point lies inside a given solid. In theory, parity checking can determine whether any point in space lies inside a given volume, but since the OpenGL depth buffer can store only one depth value per pixel, the parity-check procedure for rendering the intersection of solids A and B is as follows: first find and draw the part of A inside B, then find and draw the part of B inside A. At this point, the front faces of A inside B have been rendered. To obtain the front faces of B inside A, the pixels in the depth buffer covered by B's front faces are first re-rendered, because after the previous operations all of A is in the depth buffer, and the parts of A outside B may occlude parts of B that should be visible. After the depth values of B have been corrected in the depth buffer, the part of B's front faces inside A is found and rendered, similarly to the above; the details are omitted. FIG. 19 shows the final 3D marked region of the suspect object obtained in an embodiment of the present disclosure.
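The effect of this stencil-buffer parity test can be mimicked on the CPU when, as here, the two solids are convex, by intersecting their per-pixel depth intervals (a simplifying assumption of this sketch; the disclosure itself operates on the GPU with stencil and depth buffers):
    import numpy as np

    def intersect_depth_intervals(front_a, back_a, front_b, back_b):
        """Image-space intersection of two convex solids stored as per-pixel
        [front, back] depth intervals: a CPU analogue of the stencil-buffer
        parity test, where a surface fragment survives only if it lies inside
        the other solid's depth interval."""
        front = np.maximum(front_a, front_b)
        back = np.minimum(back_a, back_b)
        hit = front < back                    # pixels where the solids overlap
        return np.where(hit, front, np.inf), np.where(hit, back, -np.inf)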
In step S417, the marked region in 3D space is fused and displayed in the CT data. For example, after the picked suspect region is obtained, it must be fused and displayed in the original data with a higher visual priority. As can be seen from FIG. 18, the final suspect region may not be a regular cuboid, so a spatially constrained transfer function is used here. Using a scanline algorithm, a 1D lookup texture is generated according to the dimensions of the volume data; each texel stores whether the corresponding spatial position lies inside the bounding frame of the suspect region. The final fused rendering effect is shown in FIG. 20.
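The 1D lookup texture can be sketched as follows (Python; region_mask_texture and the linear texel ordering are assumptions of this illustration):
    import numpy as np

    def region_mask_texture(dims, inside):
        """Flatten a per-voxel 'inside the suspect region' predicate into the
        1D lookup texture consulted by the spatially constrained transfer
        function during rendering."""
        nx, ny, nz = dims
        mask = np.zeros(nx * ny * nz, dtype=np.uint8)
        for z in range(nz):
            for y in range(ny):
                for x in range(nx):
                    if inside(x, y, z):                  # e.g. an OBB containment test
                        mask[(z * ny + y) * nx + x] = 1  # one texel per voxel
        return mask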
Furthermore, when performing TIP insertion in CT data, the inserted threat image must lie within the extent of the luggage, the inserted image must not cover the items originally in the luggage, and the real-time requirements of the algorithm are another important consideration. According to some embodiments, the transparent and opaque regions of the volume data are determined from the opacity; the parts of the empty regions of the luggage whose volume meets the requirement are picked out as candidate insertion positions; and an insertion position with a specified degree of concealment is finally determined from the distance of each position to the view plane and the number of objects around it.
For example, an opacity-based pre-integration lookup table is first generated for quickly determining the transparent and non-transparent regions of the volume data. Then an opaque octree of the volume data is built to determine the position and size of the luggage in the CT data. Next, a transparent octree of the volume data is built; the transparent octree counts only the transparent parts within the data region and completely excludes the opaque parts, thereby obtaining the regions of the luggage available for insertion. The parts of the transparent regions whose volume meets the insertion requirement are picked out as candidate insertion positions. The final insertion position is determined according to the specified degree of concealment, as illustrated in the sketch below.
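A possible scoring of candidate positions is sketched here (Python; the leaf tuple layout, the hidden_score weighting, and the cap of eight neighbours are invented for illustration and are not specified by the disclosure):
    def pick_insertion_position(transparent_leaves, min_volume, max_depth, concealment):
        """Choose a TIP insertion site among transparent octree leaves inside
        the bag. Each leaf is (center, volume, depth_to_view_plane,
        neighbour_count); concealment in [0, 1] requests how hidden the
        insertion should be. The 50/50 weighting is an arbitrary choice."""
        candidates = [l for l in transparent_leaves if l[1] >= min_volume]
        if not candidates:
            return None                      # no empty block is large enough

        def hidden_score(leaf):
            _, _, depth, neighbours = leaf
            # deeper positions and more surrounding objects read as better hidden
            return 0.5 * depth / max_depth + 0.5 * min(neighbours, 8) / 8.0

        best = min(candidates, key=lambda l: abs(hidden_score(l) - concealment))
        return best[0]                       # center of the chosen block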
The solution of the above embodiments can quickly insert a threat image into CT data, with the insertion position guaranteed to lie within the luggage; the inserted image does not cover the items originally in the luggage; the degree of concealment of the insertion can be set by a parameter; and the real-time performance of the algorithm is ensured.
The foregoing detailed description has set forth numerous embodiments of the method and apparatus for marking a suspect object in a security inspection CT system through the use of schematic diagrams, flowcharts, and/or examples. Where such schematic diagrams, flowcharts, and/or examples contain one or more functions and/or operations, those skilled in the art will understand that each function and/or operation in such schematic diagrams, flowcharts, or examples can be implemented individually and/or collectively by a wide variety of structures, hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter of the embodiments of the present disclosure may be implemented by application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that, in light of this disclosure, designing the circuitry and/or writing the software and/or firmware code would be well within the ability of those skilled in the art. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that the exemplary embodiments of the subject matter described herein apply regardless of the particular type of signal-bearing medium actually used to carry out the distribution. Examples of a signal-bearing medium include, but are not limited to: recordable-type media such as floppy disks, hard disk drives, compact discs (CDs), digital versatile discs (DVDs), digital tapes, and computer memories; and transmission-type media such as digital and/or analog communication media (e.g., fiber-optic cables, waveguides, wired communication links, and wireless communication links).
While the present disclosure has been described with reference to several exemplary embodiments, it should be understood that the terminology used is illustrative and exemplary rather than restrictive. Since the present disclosure can be embodied in a variety of forms without departing from its spirit or essence, it should be understood that the above embodiments are not limited to any of the foregoing details, but should be construed broadly within the spirit and scope defined by the appended claims; all changes and modifications falling within the scope of the claims or their equivalents should therefore be covered by the appended claims.

Claims (16)

  1. A method in a security inspection CT system, comprising the steps of:
    reading inspection data of an inspected object;
    inserting at least one 3D fictional threat image into a 3D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data;
    receiving a selection of at least one region in the 3D inspection image including the 3D fictional threat image, or of at least one region in a 2D inspection image including a 2D fictional threat image corresponding to the 3D fictional threat image, the 2D inspection image being obtained from the 3D inspection image or from the inspection data; and
    responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image.
  2. The method according to claim 1, wherein the step of receiving a selection of at least one region in the 3D inspection image including the 3D fictional threat image, or of at least one region in the 2D inspection image including the 2D fictional threat image corresponding to the 3D fictional threat image, comprises:
    receiving the coordinate position of the part of the 3D inspection image or 2D inspection image associated with the selection.
  3. The method according to claim 1, wherein the step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image comprises at least one of:
    determining whether the at least one 3D fictional threat image is present in the selected at least one region;
    popping up a dialog box confirming that the 3D inspection image contains at least one 3D fictional threat image;
    confirming with a text prompt on the interface that the 3D inspection image contains at least one 3D fictional threat image;
    highlighting the part of the 3D inspection image or 2D inspection image associated with the selection;
    marking the part of the 3D inspection image or 2D inspection image associated with the selection;
    filling the part of the 3D inspection image or 2D inspection image associated with the selection with a specific color or pattern.
  4. The method according to claim 1, wherein at least one spatial feature parameter of the inspected object is calculated from the inspection data, and at least one 3D fictional threat image is inserted into the 3D inspection image of the inspected object based on the spatial feature parameter.
  5. The method according to claim 4, wherein the spatial feature parameter relates to at least one of the position, size, and orientation of the 3D fictional threat image to be inserted.
  6. The method according to claim 1, wherein the selection of at least one region comprises a selection of a part of the displayed 3D inspection image from one viewing angle.
  7. The method according to claim 6, wherein point cloud information characterizing the inspected object is recorded during the 3D rendering of the 3D inspection image, and the step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image comprises:
    obtaining sequences of point cloud information clusters of different objects in the inspected object through segmentation;
    determining at least one selected region from the point cloud information cluster sequences of the different objects based on a predetermined criterion;
    determining whether the at least one 3D fictional threat image is present in the at least one selected region.
  8. The method according to claim 1, wherein the selection of at least one region comprises a selection of a part of the displayed 3D inspection image from multiple different viewing angles.
  9. The method according to claim 8, wherein the selection of at least one region comprises a selection of a part of the displayed 3D inspection image from two different viewing angles that are substantially orthogonal to each other, wherein transparent-region culling is performed on the inspection data to obtain a bounding-box hierarchy of the non-transparent regions in the inspection data, and scene depth is then rendered for the bounding-box hierarchy to obtain a front-face depth map and a back-face depth map, and wherein the step of responding to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image comprises:
    retrieving the region selected by the user at a first viewing angle in the front-face depth map and the back-face depth map, respectively, to generate a first bounding box;
    performing ray casting with the generated first bounding box as the texture carrier;
    retrieving the region selected by the user at a second viewing angle substantially orthogonal to the first viewing angle in the front-face depth map and the back-face depth map, respectively, to generate a second bounding box;
    performing a Boolean intersection of the first bounding box and the second bounding box in image space to obtain a marked region in 3D space as the at least one selected region;
    determining whether the at least one 3D fictional threat image is present in the at least one selected region.
  10. The method according to claim 1, wherein the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object comprises:
    segmenting the 3D inspection image to obtain multiple 3D sub-images of the inspected object;
    calculating the distances and positions between the multiple 3D sub-images;
    inserting the 3D fictional threat image based on the calculated distances and positions.
  11. The method according to claim 1, wherein the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object comprises:
    determining transparent and non-transparent parts of volume data of the inspected object based on voxel opacity values;
    determining the position and size of the luggage of the inspected object from the opaque part of the volume data;
    determining candidate insertion positions in the transparent regions within the extent of the luggage;
    selecting at least one position from the candidate insertion positions according to a predetermined criterion for insertion of at least one 3D fictional threat image.
  12. The method according to claim 1, wherein the step of inserting at least one 3D fictional threat image into the 3D inspection image of the inspected object comprises:
    removing the background image from the 2D inspection image to obtain a 2D foreground image;
    determining a 2D insertion position of the 2D fictional threat image in the 2D foreground image;
    determining the position of the 3D fictional threat image in the 3D inspection image along the depth direction of the 2D insertion position;
    inserting at least one 3D fictional threat image at the determined position.
  13. The method according to claim 1, further comprising inserting, into the 2D inspection image of the inspected object, a 2D fictional threat image corresponding to the at least one 3D fictional threat image.
  14. A security inspection CT system, comprising:
    a CT scanning device that obtains inspection data of an inspected object;
    a memory that stores the inspection data;
    a display device that displays a 3D inspection image and/or a 2D inspection image of the inspected object, the 3D inspection image being obtained from the inspection data, and the 2D inspection image being obtained from the 3D inspection image or from the inspection data;
    a data processor that inserts at least one 3D fictional threat image into the 3D inspection image of the inspected object;
    an input device that receives a selection of at least one region in the 3D inspection image including the 3D fictional threat image, or of at least one region in the 2D inspection image including a 2D fictional threat image corresponding to the 3D fictional threat image;
    wherein the data processor responds to the selection to give feedback as to whether the 3D inspection image contains at least one 3D fictional threat image.
  15. The security inspection CT system according to claim 14, wherein the data processor calculates at least one spatial feature parameter of the inspected object from the inspection data and inserts at least one 3D fictional threat image into the 3D inspection image of the inspected object based on the spatial feature parameter.
  16. The security inspection CT system according to claim 15, wherein the spatial feature parameter relates to at least one of the position, size, and orientation of the 3D fictional threat image to be inserted.
PCT/CN2015/097379 2014-06-25 2015-12-15 安检ct系统及其方法 WO2016095799A1 (zh)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201410290133 2014-06-25
CN201410795139.XA CN105223212B (zh) 2014-06-25 2014-12-18 安检ct系统及其方法
CN201410795139.X 2014-12-18

Publications (1)

Publication Number Publication Date
WO2016095799A1 true WO2016095799A1 (zh) 2016-06-23

Family

ID=53502458

Family Applications (2)

Application Number Title Priority Date Filing Date
PCT/CN2015/082202 WO2015196992A1 (zh) 2014-06-25 2015-06-24 安检ct系统及其方法
PCT/CN2015/097379 WO2016095799A1 (zh) 2014-06-25 2015-12-15 安检ct系统及其方法

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/082202 WO2015196992A1 (zh) 2014-06-25 2015-06-24 安检ct系统及其方法

Country Status (10)

Country Link
US (2) US9786070B2 (zh)
EP (1) EP2960869B1 (zh)
JP (1) JP6017631B2 (zh)
KR (1) KR101838839B1 (zh)
CN (3) CN105784731B (zh)
AU (1) AU2015281530B2 (zh)
HK (1) HK1218157A1 (zh)
PL (1) PL2960869T3 (zh)
RU (1) RU2599277C1 (zh)
WO (2) WO2015196992A1 (zh)

Families Citing this family (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105784731B (zh) * 2014-06-25 2019-02-22 同方威视技术股份有限公司 一种定位三维ct图像中的目标的方法和安检系统
JP6763301B2 (ja) * 2014-09-02 2020-09-30 株式会社ニコン 検査装置、検査方法、検査処理プログラムおよび構造物の製造方法
WO2016095798A1 (zh) * 2014-12-18 2016-06-23 同方威视技术股份有限公司 一种定位三维ct图像中的目标的方法和安检系统
CN105787919B (zh) 2014-12-23 2019-04-30 清华大学 一种安检ct三维图像的操作方法和装置
KR102377626B1 (ko) * 2015-03-27 2022-03-24 주식회사바텍 엑스선 영상 처리 시스템 및 그 사용 방법
EP3156942A1 (en) * 2015-10-16 2017-04-19 Thomson Licensing Scene labeling of rgb-d data with interactive option
EP3223247A1 (en) * 2016-03-24 2017-09-27 Ecole Nationale de l'Aviation Civile Boolean object management in 3d display
CA3022215C (en) * 2016-05-06 2024-03-26 L3 Security & Detection Systems, Inc. Systems and methods for generating projection images
CN106296535A (zh) * 2016-08-02 2017-01-04 重庆微标科技股份有限公司 实现旅客行李安检信息与身份信息关联追溯的方法和系统
US10726608B2 (en) * 2016-11-23 2020-07-28 3D Systems, Inc. System and method for real-time rendering of complex data
WO2018123801A1 (ja) * 2016-12-28 2018-07-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 三次元モデル配信方法、三次元モデル受信方法、三次元モデル配信装置及び三次元モデル受信装置
JP6969149B2 (ja) * 2017-05-10 2021-11-24 富士フイルムビジネスイノベーション株式会社 三次元形状データの編集装置、及び三次元形状データの編集プログラム
WO2019028721A1 (zh) * 2017-08-10 2019-02-14 哈尔滨工业大学 用于物品识别的方法、装置、设备和安检系统
CN107833209B (zh) * 2017-10-27 2020-05-26 浙江大华技术股份有限公司 一种x光图像检测方法、装置、电子设备及存储介质
CN108051459A (zh) * 2017-12-07 2018-05-18 齐鲁工业大学 一种显微ct多样品测试处理方法
CN108295467B (zh) * 2018-02-06 2021-06-22 网易(杭州)网络有限公司 图像的呈现方法、装置及存储介质、处理器和终端
KR102026085B1 (ko) * 2018-02-22 2019-09-27 서울대학교산학협력단 Ct 데이터 표면 완성 방법 및 그 장치
CN108459801B (zh) * 2018-02-28 2020-09-11 北京航星机器制造有限公司 一种高亮显示三维ct图像中的目标的方法和系统
CN110210368B (zh) * 2019-05-28 2023-07-18 东北大学 一种基于安检图像的危险品图像注入方法
CN113906474A (zh) 2019-06-28 2022-01-07 西门子(中国)有限公司 点云模型的切割方法、装置和系统
CN110579496B (zh) * 2019-08-15 2024-03-29 公安部第一研究所 一种安检ct系统的危险品图像快速插入方法及系统
WO2021131103A1 (ja) * 2019-12-24 2021-07-01 ヌヴォトンテクノロジージャパン株式会社 距離画像処理装置及び距離画像処理方法
CN114930813B (zh) * 2020-01-08 2024-03-26 Lg电子株式会社 点云数据发送装置、点云数据发送方法、点云数据接收装置和点云数据接收方法
CN111551569B (zh) * 2020-04-28 2021-01-08 合肥格泉智能科技有限公司 一种基于x光机国际快件图像查验系统
KR102227531B1 (ko) * 2020-07-06 2021-03-15 주식회사 딥노이드 X-ray 보안 장치에 대한 이미지 처리 장치 및 방법
CN112581467B (zh) * 2020-12-25 2023-11-07 北京航星机器制造有限公司 一种基于疑似危险品评价的智能安检方法
US20220230366A1 (en) * 2021-01-20 2022-07-21 Battelle Memorial Institute X-ray baggage and parcel inspection system with efficient third-party image processing
CA3208992A1 (en) * 2021-02-03 2022-09-01 Battelle Memorial Institute Techniques for generating synthetic three-dimensional representations of threats disposed within a volume of a bag
DE102021202511A1 (de) 2021-03-15 2022-09-15 Smiths Detection Germany Gmbh Verfahren zum Erzeugen von dreidimensionalen Trainingsdaten für eine Erkennungsvorrichtung zum Erkennen von Alarmobjekten in Gepäckstücken
CN112907670B (zh) * 2021-03-31 2022-10-14 北京航星机器制造有限公司 一种基于剖面图的目标物定位和标注方法及装置
CN112950664B (zh) * 2021-03-31 2023-04-07 北京航星机器制造有限公司 一种基于滑动剖面的目标物定位和标注方法及装置
CN113781426B (zh) * 2021-09-07 2024-02-13 海深智能科技(上海)有限公司 一种识别液体成分的智能安检方法
CN116453063B (zh) * 2023-06-12 2023-09-05 中广核贝谷科技有限公司 基于dr图像与投影图融合的目标检测识别方法及系统

Family Cites Families (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001076184A (ja) * 1999-09-03 2001-03-23 Ge Yokogawa Medical Systems Ltd 3次元表示方法および3次元表示装置
US6721387B1 (en) * 2001-06-13 2004-04-13 Analogic Corporation Method of and system for reducing metal artifacts in images generated by x-ray scanning devices
GB0525593D0 (en) * 2005-12-16 2006-01-25 Cxr Ltd X-ray tomography inspection systems
US20050281464A1 (en) * 2004-06-17 2005-12-22 Fuji Photo Film Co., Ltd. Particular image area partitioning apparatus and method, and program for causing computer to perform particular image area partitioning processing
US7974676B2 (en) * 2004-08-17 2011-07-05 Alan Penn & Associates, Inc. Method and system for discriminating image representations of classes of objects
CN100495439C (zh) * 2005-11-21 2009-06-03 清华大学 采用直线轨迹扫描的图像重建系统和方法
US7764819B2 (en) * 2006-01-25 2010-07-27 Siemens Medical Solutions Usa, Inc. System and method for local pulmonary structure classification for computer-aided nodule detection
CN101071110B (zh) * 2006-05-08 2011-05-11 清华大学 一种基于螺旋扫描立体成像的货物安全检查方法
US20080123895A1 (en) * 2006-11-27 2008-05-29 Todd Gable Method and system for fast volume cropping of three-dimensional image data
US20080175456A1 (en) * 2007-01-18 2008-07-24 Dimitrios Ioannou Methods for explosive detection with multiresolution computed tomography data
US20080253653A1 (en) * 2007-04-12 2008-10-16 Todd Gable Systems and methods for improving visibility of scanned images
CN103064125B (zh) * 2007-06-21 2016-01-20 瑞皮斯坎系统股份有限公司 用于提高受指引的人员筛查的系统和方法
DE102007042144A1 (de) * 2007-09-05 2009-03-12 Smiths Heimann Gmbh Verfahren zur Verbesserung der Materialerkennbarkeit in einer Röntgenprüfanlage und Röntgenprüfanlage
US7978191B2 (en) * 2007-09-24 2011-07-12 Dolphin Imaging Systems, Llc System and method for locating anatomies of interest in a 3D volume
CN201145672Y (zh) * 2007-10-30 2008-11-05 清华大学 检查系统、ct装置以及探测装置
EP2309257A1 (en) * 2008-03-27 2011-04-13 Analogic Corporation Method of and system for three-dimensional workstation for security and medical applications
US8600149B2 (en) * 2008-08-25 2013-12-03 Telesecurity Sciences, Inc. Method and system for electronic inspection of baggage and cargo
GB0817487D0 (en) 2008-09-24 2008-10-29 Durham Scient Crystals Ltd Radiographic data interpretation
JP4847568B2 (ja) * 2008-10-24 2011-12-28 キヤノン株式会社 X線撮像装置およびx線撮像方法
US8885938B2 (en) 2008-10-30 2014-11-11 Analogic Corporation Detecting concealed threats
US8180139B2 (en) * 2009-03-26 2012-05-15 Morpho Detection, Inc. Method and system for inspection of containers
JP4471032B1 (ja) * 2009-03-27 2010-06-02 システム・プロダクト株式会社 X線画像合成装置、方法及びプログラム
WO2010141101A1 (en) * 2009-06-05 2010-12-09 Sentinel Scanning Corporation Transportation container inspection system and method
EP2488105A4 (en) 2009-10-13 2014-05-07 Agency Science Tech & Res METHOD AND SYSTEM ADAPTED TO SEGMENT AN OBJECT IN AN IMAGE (A LIVER IN THE OCCURRENCE)
CN102222352B (zh) * 2010-04-16 2014-07-23 株式会社日立医疗器械 图像处理方法和图像处理装置
CN101943761B (zh) * 2010-09-12 2012-09-05 上海英迈吉东影图像设备有限公司 一种x射线检测方法
US9042661B2 (en) * 2010-09-30 2015-05-26 Analogic Corporation Object classification using two-dimensional projection
CN202221578U (zh) * 2010-10-26 2012-05-16 同方威视技术股份有限公司 一种自适应反馈的图像安检纠偏系统
CN102567960B (zh) * 2010-12-31 2017-03-01 同方威视技术股份有限公司 一种用于安全检查系统的图像增强方法
EP2689394A1 (en) * 2011-03-22 2014-01-29 Analogic Corporation Compound object separation
ES2665535T3 (es) 2012-03-20 2018-04-26 Siemens Corporation Visualización de equipajes y desempaquetado virtual
CN103713329B (zh) * 2012-09-29 2016-12-21 清华大学 Ct成像中定位物体的方法以及设备
CN103900503B (zh) 2012-12-27 2016-12-28 清华大学 提取形状特征的方法、安全检查方法以及设备
CN103901489B (zh) 2012-12-27 2017-07-21 清华大学 检查物体的方法、显示方法和设备
JP5684351B2 (ja) * 2013-09-17 2015-03-11 富士フイルム株式会社 画像処理装置および画像処理方法、並びに、画像処理プログラム
JP5800039B2 (ja) * 2014-01-22 2015-10-28 三菱プレシジョン株式会社 生体データモデル作成方法及びその装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030023592A1 (en) * 2001-07-27 2003-01-30 Rapiscan Security Products (Usa), Inc. Method and system for certifying operators of x-ray inspection systems
US7012256B1 (en) * 2001-12-21 2006-03-14 National Recovery Technologies, Inc. Computer assisted bag screening system
WO2007131348A1 (en) * 2006-05-11 2007-11-22 Optosecurity Inc. Method and apparatus for providing threat image projection (tip) in a luggage screening system, and luggage screening system implementing same
CN101933046A (zh) * 2008-01-25 2010-12-29 模拟逻辑有限公司 图像组合
US20100266204A1 (en) * 2009-04-17 2010-10-21 Reveal Imaging Technologies, Inc. Method and system for threat image projection
WO2015196992A1 (zh) * 2014-06-25 2015-12-30 同方威视技术股份有限公司 安检ct系统及其方法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MEGHERBI, NAJLA ET AL.: "Fully Automatic 3D Threat Image Projection: Application to Densely Cluttered 3D Computed Tomography Baggage Images", IMAGE PROCESSING THEORY, TOOLS AND APPLICATIONS, 31 December 2012 (2012-12-31) *
YILDIZ, YESNA O. ET AL.: "3-D Threat Image Projection", PROC. OF SPIE- IS &T ELECTRONIC IMAGING: THREE-DIMENSIONAL IMAGE CAPTURE AND APPLICATIONS 2008, vol. 6805, 31 December 2008 (2008-12-31) *

Also Published As

Publication number Publication date
PL2960869T3 (pl) 2019-04-30
AU2015281530B2 (en) 2017-07-20
EP2960869A2 (en) 2015-12-30
CN105785462A (zh) 2016-07-20
CN105785462B (zh) 2019-02-22
AU2015281530A1 (en) 2016-09-22
US9786070B2 (en) 2017-10-10
US20170276823A1 (en) 2017-09-28
KR101838839B1 (ko) 2018-03-14
CN105784731B (zh) 2019-02-22
US10297050B2 (en) 2019-05-21
EP2960869B1 (en) 2018-10-03
EP2960869A3 (en) 2016-04-06
HK1218157A1 (zh) 2017-02-03
JP2016008966A (ja) 2016-01-18
WO2015196992A1 (zh) 2015-12-30
CN105784731A (zh) 2016-07-20
JP6017631B2 (ja) 2016-11-02
CN105223212B (zh) 2019-02-22
US20160012647A1 (en) 2016-01-14
KR20160132096A (ko) 2016-11-16
RU2599277C1 (ru) 2016-10-10
CN105223212A (zh) 2016-01-06

Similar Documents

Publication Publication Date Title
WO2016095799A1 (zh) 安检ct系统及其方法
CN103901489B (zh) 检查物体的方法、显示方法和设备
CN103903303B (zh) 三维模型创建方法和设备
CN101604458A (zh) 用于显示预先绘制的计算机辅助诊断结果的方法
US9978184B2 (en) Methods and apparatuses for marking target in 3D image
CN103900503A (zh) 提取形状特征的方法、安全检查方法以及设备
WO2015062352A1 (zh) 立体成像系统及其方法
WO2016095798A1 (zh) 一种定位三维ct图像中的目标的方法和安检系统
KR102265248B1 (ko) 3-d 이미징에 의해 장면의 오브젝트들의 구별 및 식별을 위한 방법
WO2016101829A1 (zh) 一种安检ct三维图像的操作方法和装置
WO2016095776A1 (zh) 一种定位三维ct图像中的目标的方法和安检ct系统
Tan et al. Design of 3D visualization system based on VTK utilizing marching cubes and ray casting algorithm
CN111009033A (zh) 一种基于OpenGL的病灶区域的可视化方法和系统
Kaczmarek et al. 3D Scanning of Semitransparent Amber with and without Inclusions
Westerteiger Virtual Reality Methods for Research in the Geosciences
Min et al. OctoMap-RT: Fast Probabilistic Volumetric Mapping Using Ray-Tracing GPUs
Stotko et al. Improved 3D reconstruction using combined weighting strategies
Lei et al. Software Implementation of Augmented Reality Display System for Radioactive Sources and Surface of Nuclear Waste Container
Teller Interactive Ray-Traced Scene Editing Using Ray Segment Trees
Yücel Ray-disc intersection based ray tracing for point clouds

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15869306

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15869306

Country of ref document: EP

Kind code of ref document: A1