US20090003651A1 - Object segmentation recognition - Google Patents
- Publication number
- US20090003651A1 (U.S. application Ser. No. 12/129,036)
- Authority
- US
- United States
- Prior art keywords
- image
- atomic number
- interest
- radiographic images
- images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N23/00—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00
- G01N23/02—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material
- G01N23/06—Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N3/00 – G01N17/00, G01N21/00 or G01N22/00 by transmitting the radiation through the material and measuring the absorption
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/217—Validation; Performance evaluation; Active pattern learning techniques
- G06F18/2178—Validation; Performance evaluation; Active pattern learning techniques based on feedback of a supervisor
Definitions
- Embodiments of the present invention relate generally to image segmentation and, more particularly, to computer systems and methods for automatic image segmentation of radiographic images.
- Image segmentation, the process of separating objects of interest from the background (or from other objects) in an image, is typically a difficult task for a computer to perform. If an image scene is simple and the contrast between objects in the scene and the background is high, then the task may be somewhat easier. However, if an image scene is cluttered and the contrast between objects in the scene and the background (or other objects) is low, image segmentation can be a particularly difficult problem. For example, in a radiographic image of a three-dimensional object such as a cargo container there can be numerous layers of objects and contrast may be low between the objects and the background.
- In addition to the difficulties often associated with low contrast and cluttered scenes, radiographic images of objects having layers may also present a need to segment the image in two ways: in the x-y plane (i.e., the plane the image was produced on) and by layer of depth, in order to correct for layer effects such as overlapping.
- Embodiments of the present invention can be used in an imaging system, such as a nuclear material detection system, that includes a capability of producing images using different energy levels.
- Each energy level provides a different imaging characteristic such as energy penetration of the object being scanned.
- Different images produced using different energy levels can be used in conjunction with each other to better identify layers within the object being scanned.
- Embodiments of the present invention address the above problems and other problems often associated with segmenting radiographic images. For example, one exemplary embodiment can include a system for segmenting radiographic images of a cargo container.
- The system can include an object segmentation recognition module adapted to perform a series of functions.
- The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level, and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments.
- The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers.
- The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image, and identifying regions of interest within the adjusted atomic number image based on an image characteristic.
- The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output.
- Another embodiment includes a method for segmenting radiographic images.
- The method can include providing a plurality of radiographic images and providing a pixel level estimated atomic number image.
- The method can also include segmenting the radiographic images using one or more segmentation modules to generate segmentation data and identifying image layers within each of the radiographic images.
- The method can also include analyzing the image layers using the plurality of radiographic images and the segmentation data in order to determine corrected atomic number values for objects in the images, and correcting the pixel level atomic number image based on the corrected atomic number values to generate an object level atomic number image.
- The method can include identifying regions of interest within the object level atomic number image based on an image characteristic, and outputting coordinates of each region of interest and the object level atomic number image as output.
- Another embodiment includes a radiographic image segmentation apparatus. The apparatus can include means for receiving a radiographic image and means for segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data.
- The apparatus can also include means for identifying image layers to produce image layer data and means for identifying regions of interest based on the segmentation data and image layer data.
- The apparatus can also include means for analyzing the image layers in order to determine corrected atomic number values for the regions of interest and means for correcting an atomic number image based on the corrected atomic number values for the regions of interest.
- The apparatus can also include means for outputting region of interest coordinates and the corrected atomic number image as output.
- FIG. 1 is a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs;
- FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail;
- FIG. 3 is a flowchart showing an exemplary method for image segmentation; and
- FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules.
- FIG. 1 shows a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs.
- In particular, an object segmentation recognition (OSR) processor 102 is shown receiving one or more images 104 as input and providing region of interest (ROI) or object coordinates 106 as output.
- In operation, the images 104 provided or obtained as input to the OSR processor 102 can include radiographic images or other images.
- The images 104 can include radiographic images of a cargo conveyance such as a cargo container.
- The images 104 can include one or more images; for example, four images can be provided, with each image being generated using a different radiographic energy level.
- The images 104 can also include radiographic images or other images derived from radiographic images, such as, for example, an atomic number image representing estimated atomic numbers associated with radiographic images.
- The OSR processor 102 can obtain, request or receive the images 104 via a wired or wireless connection, such as a network (e.g., LAN, WAN, wireless network, Internet or the like) or direct connection within a system.
- The OSR processor 102 can also receive the images 104 via a software connection (e.g., procedure call, standard object access protocol, remote procedure call, or the like).
- In general, any known or later developed wired, wireless or software connection suitable for transmitting data can be used to supply the images 104 to the OSR processor 102.
- The OSR processor 102 can be requested to segment images by another process or system, or can request images for segmenting from another process or system. If the images 104 include more than one image, the images can be registered prior to being sent for segmentation.
- The OSR processor 102 processes the images 104 to segment the images 104 and identify objects within the images 104.
- The OSR processor 102 can also extract or identify layers (or estimated layers) within the images in order to help segment the images more accurately.
- The layer information can also be used to correct or adjust estimated atomic numbers in an atomic number image or map.
- The atomic number image or map can include a representation of estimated atomic numbers determined from the images 104.
- Once the images 104 have been segmented and the layer information has been determined, regions of interest (ROIs) within the images 104 can be located or determined.
- The ROIs can be determined based on an image characteristic such as estimated atomic number of the ROI (or object), shape of the ROI, position or location of the ROI, or the like.
- The OSR processor 102 can provide ROI/object coordinates 106 as output.
- The ROI/object coordinates 106 can be associated with the input images 104 or an atomic number image.
- The output ROI/object coordinates 106 can be outputted via a wired or wireless connection, such as a network (e.g., LAN, WAN, Internet or the like) or direct connection within a system.
- The output ROI/object coordinates 106 can also be outputted via a software connection (e.g., response to a procedure call, standard object access protocol, remote procedure call, or the like).
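- As an illustrative sketch only (the function name, parameter names, and atomic number range below are hypothetical; the patent does not specify an algorithm), ROI identification based on an image characteristic such as estimated atomic number could be performed by thresholding an atomic number image, grouping the surviving pixels into connected components, and reporting each component's bounding-box coordinates:

```python
import numpy as np

def find_rois(z_image, z_min=80.0, z_max=100.0, min_pixels=4):
    """Identify ROIs in an estimated atomic number image: threshold on
    a Z range of interest, group pixels into 4-connected components,
    and return each component's bounding box (row0, col0, row1, col1)."""
    mask = (z_image >= z_min) & (z_image <= z_max)
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        stack = [seed]                      # flood fill one component
        while stack:
            r, c = stack.pop()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    stack.append((nr, nc))
    rois = []
    for i in range(1, current + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_pixels:           # discard tiny components
            rois.append((int(ys.min()), int(xs.min()),
                         int(ys.max()) + 1, int(xs.max()) + 1))
    return rois
```

The returned coordinates correspond to the ROI/object coordinates 106 described above; a real system would likely combine additional characteristics such as shape and position.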
- FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail.
- In addition to the components already described above, the OSR processor 102 includes a segment processing section 202 having a connected region analysis module 204, an edge analysis module 206, a ratio layer analysis module 208 and a blind source separation (BSS) layer analysis module 210.
- The OSR processor 102 also includes an object ROI section 212 having a layer analysis and segment association module 214 and an object ROI determination module 216.
- In operation, the segment processing section receives the images 104.
- The images 104 can be processed using one or more image segmentation modules (e.g., the connected region analysis module 204, the edge analysis module 206, or a combination of the above).
- The segmentation modules shown are for illustration purposes, and any known or later developed image segmentation processes can be used.
- The selection of the number and type of image segmentation modules employed in the OSR processor 102 may depend on a contemplated use of an embodiment, and the selection may be guided by a number of factors including, but not limited to, the type of materials being scanned, the configuration of the scanning system and objects being scanned, desired performance characteristics, time available for processing, or the like.
- The individual segmentation processes, layer analysis processes, or both may be performed sequentially, in parallel, or a combination of the above, and in any suitable order.
- The resulting image segment data can be provided to the object ROI section 212.
- The layers and segments of the image segment data are analyzed using the layer analysis and segment association module 214 and combined or associated to produce segment-layer data that contains information about objects and layers within the images 104.
- The images 104 can be processed by one or more layer analysis modules (e.g., the ratio layer analysis module 208, the BSS layer analysis module 210, or a combination of the above) within the layer analysis and segment association module 214.
- The segment-layer data can be in the form of an atomic number image that represents a composite of the images 104 and has been adjusted or corrected based on layers and segments to provide an image suitable for identification of ROIs.
- The segment-layer data can also be represented in any form suitable for transmitting the information that may be needed to analyze the images 104.
- The segment-layer data is then provided to the object ROI determination module 216 for analysis and identification of ROIs.
- The object ROI determination module 216 can use one or more image characteristics to identify ROIs within the images 104 or the segment-layer data.
- Image characteristics can include an estimated atomic number for a portion of the image (e.g., a pixel, segment, object, region or the like), a shape of a segment or object within the image, or a position or location of an object or segment. In general, any image characteristic that is suitable for identifying an ROI can be used.
- Coordinate data ( 106 ) representing each ROI can be provided as output.
- The output can be provided as described above in connection with reference number 106 of FIG. 1.
- Segment-layer data or an adjusted or corrected atomic number image can be provided in addition to, or as a substitute for, the ROI coordinates.
- FIG. 3 is a flowchart showing an exemplary computer-implemented method for image segmentation. Processing begins at step 302 and continues to step 304.
- In step 304, one or more radiographic images are obtained. These images can be provided by an imaging system (e.g., an x-ray, magnetic resonance imaging device, computerized tomography device, or the like). In general, any imaging device suitable for generating images that may require segmenting can be used. Processing continues to step 306.
- In step 306, the radiographic images are segmented.
- The segmentation can be performed using one or more image segmentation processes. Examples of segmentation methods include modules or processes for segmentation based on clustering, histograms, edge detection, region growing, level sets, graph partitioning, watershed, model-based, and multi-scale approaches. Processing continues to step 308.
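- As one concrete example of the histogram-based segmentation mentioned above, Otsu's method selects the gray-level threshold that maximizes the between-class variance of the image histogram. The sketch below is a minimal illustration of that technique, not the segmentation performed by any particular module of the system:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Histogram-based threshold selection (Otsu's method): choose the
    threshold that maximizes the between-class variance of the two
    resulting pixel classes (e.g., object vs. background)."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()          # normalized histogram
    centers = (edges[:-1] + edges[1:]) / 2.0     # bin center gray levels
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()        # class probabilities
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (p[:k] * centers[:k]).sum() / w0    # class mean gray levels
        m1 = (p[k:] * centers[k:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2           # between-class variance
        if var > best_var:
            best_var, best_t = var, centers[k]
    return best_t
```

Pixels above the returned threshold can then be treated as a foreground mask and grouped into segments by a connected-region analysis.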
- In step 308, any layers present in the images are determined.
- The layers can be determined using one or more layer extraction or identification processes. For example, a ratio layer analysis process and a BSS layer analysis process can be used together to identify layers in the images, e.g., using the layer analysis and segment association module 214 mentioned above.
- A goal of layer identification and extraction is to remove overlapping effects which may be present. By removing overlapping effects, the true gray level of a material can be determined. Using the true gray level, a material's effective atomic number (and, optionally, material density) can be determined. Using the effective atomic number, the composition of the material can be determined, so that illicit materials, such as special nuclear materials, can be detected automatically.
- The ratio method of layer identification and overlap effect removal is known in the art as applied to dual energy and is described in "The Utility of X-ray Dual-energy Transmission and Scatter Technologies for Illicit Material Detection," a published Ph.D. dissertation by Lu Qiang, Virginia Polytechnic Institute and State University, Blacksburg, Va., 1999, which is incorporated herein by reference in its entirety.
- The ratio method provides a process whereby a computer can solve for one image layer and remove any overlapping effects of another layer.
- Regions that overlap or may be obscured can be separated into their constituent layers, and a true gray level can be determined for each layer.
- The true gray level can be used to more accurately determine an estimated atomic number and/or material type.
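- The subtraction underlying such overlap removal can be sketched as follows, assuming an idealized, noise-free transmission model in which layer attenuations add in the log domain (the names and normalization below are hypothetical; the dissertation's full ratio procedure is more involved):

```python
import numpy as np

def remove_overlap(gray_overlap, gray_known, i0=1.0):
    """Recover the true gray level of one layer from a region where it
    overlaps a second layer whose lone gray level is known. In the
    attenuation (log) domain the contributions of stacked layers add,
    so the known layer's attenuation can simply be subtracted out."""
    att_overlap = np.log(i0 / gray_overlap)  # total attenuation, both layers
    att_known = np.log(i0 / gray_known)      # attenuation of known layer alone
    att_target = att_overlap - att_known     # remaining layer's attenuation
    return i0 * np.exp(-att_target)          # back to a gray level
```

For example, if one layer alone transmits a gray level of 0.25 and the overlap region transmits 0.125, the other layer's true gray level of 0.5 is recovered.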
- Blind source separation is a technique known in the art, and refers generally to the separation of a set of signals from a set of mixed signals, without the aid of information (or with very little information) about the source signals or the mixing process. However, if information about how the signals were mixed (e.g., a mixing matrix) can be estimated, it can then be used to determine an un-mixing matrix which can be applied to separate the components within a mixed signal.
- The BSS method may be limited by the amount of independence between materials within the mixture.
- Several techniques exist for estimating the mixing matrix; some include using an unsupervised learning process. The process can include incrementally changing and weighting coefficients of the mixing matrix and evaluating the mixing matrix until optimal conditions are met. Once the mixing matrix is estimated, un-mixing coefficients can be computed. Examples of some BSS techniques include projection pursuit gradient ascent, projection pursuit stepwise separation, ICA gradient ascent, and complexity pursuit gradient ascent. In general, an iterative hill climbing or other type of optimization process can be used to estimate the mixing matrix and determine an optimal matrix. Also, contemplated or desired performance levels may require development of custom algorithms that can be tuned to the specific empirical terrain provided by the mixing and un-mixing matrices. Once the layers are identified and overlapping effects are removed, processing continues to step 310.
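- The un-mixing step can be illustrated with a toy example in which the mixing matrix is simply known; in a real application the matrix would have to be estimated by one of the optimization techniques listed above, and the sources would be the layer images rather than the synthetic signals used here:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic "layer" signals standing in for flattened layer images
s1 = np.sin(np.linspace(0, 8 * np.pi, 500))
s2 = rng.uniform(-1, 1, 500)
S = np.vstack([s1, s2])

A = np.array([[0.7, 0.3],
              [0.4, 0.6]])   # mixing matrix (here known; normally estimated)
X = A @ S                    # observed mixed images

W = np.linalg.inv(A)         # un-mixing matrix derived from the estimate
S_hat = W @ X                # separated layer signals

print(np.allclose(S_hat, S))  # → True
```

With only an estimate of A, the recovered layers match the true layers only up to the quality of the estimate, which is why the iterative evaluation of candidate mixing matrices described above matters.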
- In step 310, segments that have been identified are associated with any layers that have been determined or identified in step 308. Associating segments with layers can help to remove any overlapping effects and also can improve the ability to determine a true gray value for a segment. Processing continues to step 312.
- In step 312, ROIs are determined.
- The ROIs can be determined based on an image characteristic as described above. Processing continues to step 314.
- In step 314, a gray level atomic number image is optionally adjusted to reflect the corrections or adjustments provided by the layer determination.
- The adjustments or corrections can include changes related to removal of overlap effects or other changes. Processing continues to step 316.
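- The optional per-segment adjustment described above could be sketched as follows (the function and variable names are hypothetical; the labeled segment image and the corrected per-segment Z-values are assumed to come from the earlier segmentation and layer analysis):

```python
import numpy as np

def apply_corrections(z_image, labels, corrected_z):
    """Replace pixel-level atomic number estimates with layer-corrected,
    object-level values: every pixel of a labeled segment is set to that
    segment's corrected Z-value, yielding an object level Z image."""
    out = z_image.copy()
    for label_id, z_value in corrected_z.items():
        out[labels == label_id] = z_value   # overwrite one segment at a time
    return out
```

Pixels outside any corrected segment keep their original pixel-level estimates.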
- In step 316, the ROI coordinates and, optionally, the adjusted or corrected gray level image are provided as output to an operator or another system.
- The output can be in a standard format or in a proprietary format.
- Steps 304 - 316 can be repeated in whole or in part to perform a contemplated image segmentation process.
- FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules.
- Four gray scale radiographic images ( 402 - 408 ), each generated using a different energy level, are provided to an effective Z-value determination module 410.
- The effective Z-value determination module determines a pixel-level Z-value gray scale image 412.
- The pixel-level Z-value gray scale image 412 can be provided to an image segmentation and layer analysis module 414.
- The segmentation and layer analysis module 414 segments the image and analyzes layers, as described above, to generate a layer-corrected image representing true gray values, ROI coordinates, or both.
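- One common way to determine a per-pixel effective Z from images at two energies is to map the ratio of their log-attenuations through a calibration curve; in the sketch below the three-point calibration table is invented purely for illustration and does not reflect any real system's calibration:

```python
import numpy as np

# Hypothetical calibration: log-attenuation ratio -> effective atomic number
CAL_RATIO = np.array([0.40, 0.60, 0.85])
CAL_Z = np.array([6.0, 26.0, 82.0])   # e.g., roughly carbon, iron, lead

def effective_z(i_high, i_low, i0=1.0):
    """Estimate a pixel-level effective-Z image from high- and low-energy
    transmission images via a calibration lookup on the attenuation ratio."""
    ratio = (np.log(i0 / np.asarray(i_high, dtype=float))
             / np.log(i0 / np.asarray(i_low, dtype=float)))
    return np.interp(ratio, CAL_RATIO, CAL_Z)   # piecewise-linear lookup
```

The resulting Z-value gray scale image would then play the role of image 412, feeding the segmentation and layer analysis stage.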
- The modules, processes, systems, and sections described above can be implemented in hardware, software, or both. Also, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and sub-modules described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Exemplary structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.
- The modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, a programmed kiosk, an optical computing device, a GUI on a display, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, or a software module or object stored on a computer-readable medium or signal, for example.
- Embodiments of the method and system may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like.
- In general, any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or computer program product (software program).
- Embodiments of the disclosed method, system, and computer program product may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms.
- Alternatively, embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design.
- Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized.
- Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, image processing, radiographic, and/or threat detection arts.
- Embodiments of the disclosed method, system, and computer program product can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like.
- The method of this invention can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like.
- The method and system can also be implemented by physically incorporating the method into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic cargo inspection systems.
Abstract
A system for segmenting radiographic images of a cargo container can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output.
Description
- The present application claims the benefit of U.S. Provisional Patent Application No. 60/940,632, entitled “Threat Detection System”, filed May 29, 2007, which is incorporated herein by reference in its entirety.
- Embodiments of the present invention relate generally to image segmentation and, more particularly, to computer systems and methods for automatic image segmentation of radiographic images.
- Image segmentation, the process of separating objects of interest from the background (or from other objects) in an image, is typically a difficult task for a computer to perform. If an image scene is simple and the contrast between objects in the scene and the background is high, then the task may be somewhat easier. However, if an image scene is cluttered and the contrast between objects in the scene and the background (or other objects) is low, image segmentation can be a particularly difficult problem. For example, in a radiographic image of a three-dimensional object such as a cargo container there can be numerous layers of objects and contrast may be low between the objects and the background. In addition to the difficulties often associated with low contrast and cluttered scenes, radiographic images of objects having layers may also present a need to segment the image in two ways: in the x-y plane (i.e., the plane the image was produced on) and by layer of depth in order to correct for layer effects such as overlapping.
- Embodiments of the present invention can be used in an imaging system, such as a nuclear material detection system, that includes a capability of producing images using different energy levels. Each energy level provides a different imaging characteristic such as energy penetration of the object being scanned. Different images produced using different energy levels can be used in conjunction with each other to better identify layers within the object being scanned.
- Embodiments of the present invention address the above problems and other problems often associated with segmenting radiographic images. For example, one exemplary embodiment can include a system for segmenting radiographic images of a cargo container. The system can include an object segmentation recognition module adapted to perform a series of functions. The functions can include receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level and segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments. The functions can also include identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers. The functions can include adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image and identifying regions of interest within the adjusted atomic number image based on an image characteristic. The functions can also include providing coordinates of each region of interest and the adjusted atomic number image as output.
- Another embodiment includes a method for segmenting radiographic images. The method can include providing a plurality of radiographic images and providing a pixel level estimated atomic number image. The method can also include segmenting the radiographic images using one or more segmentation modules to generate segmentation data and identifying image layers within each of the radiographic images. The method can also include analyzing the image layers using the plurality of radiographic images and the segmentation data in order to determine corrected atomic number values for objects in the images, and correcting the pixel level atomic number image based on the corrected atomic number values to generate an object level atomic number image. The method can include identifying regions of interest within the object level atomic number image based on an image characteristic, and outputting coordinates of each region of interest and the object level atomic number image as output.
- Another embodiment includes a radiographic image segmentation apparatus. The apparatus can include means for receiving a radiographic image and means for segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data. The apparatus can also include means for identifying image layers to produce image layer data and means for identifying regions of interest based on the segmentation data and image layer data. The apparatus can also include means for analyzing the image layers in order to determine corrected atomic number values for the regions of interest and means for correcting an atomic number image based on the corrected atomic number values for the regions of interest. The apparatus can also include means for outputting region of interest coordinates and the corrected atomic number image as output.
-
FIG. 1 is a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs; -
FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail; -
FIG. 3 is a flowchart showing an exemplary method for image segmentation; and -
FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules. -
FIG. 1 shows a block diagram of an exemplary object segmentation recognition processor showing inputs and outputs. In particular, an object segmentation recognition (OSR) processor 102 is shown receiving one or more images 104 as input and providing region of interest (ROI) or object coordinates 106 as output.

In operation, the images 104 provided or obtained as input to the OSR processor 102 can include radiographic images or other images. For example, the images 104 can include radiographic images of a cargo conveyance such as a cargo container. The images 104 can include one or more images; for example, four images can be provided, with each image being generated using a different radiographic energy level. Also, the images 104 can include radiographic images or other images derived from radiographic images, such as, for example, an atomic number image representing estimated atomic numbers associated with the radiographic images.

The OSR processor 102 can obtain, request or receive the images 104 via a wired or wireless connection, such as a network (e.g., LAN, WAN, wireless network, Internet or the like) or a direct connection within a system. The OSR processor 102 can also receive the images 104 via a software connection (e.g., procedure call, standard object access protocol, remote procedure call, or the like). In general, any known or later developed wired, wireless or software connection suitable for transmitting data can be used to supply the images 104 to the OSR processor 102. The OSR processor 102 can be requested to segment images by another process or system, or can request images for segmenting from another process or system. If the images 104 include more than one image, the images can be registered prior to being sent for segmentation.

The OSR processor 102 processes the images 104 to segment them and to identify objects within them. The OSR processor 102 can also extract or identify layers (or estimated layers) within the images in order to help segment the images more accurately. The layer information can also be used to correct or adjust estimated atomic numbers in an atomic number image or map. The atomic number image or map can include a representation of estimated atomic numbers determined from the images 104.

Once the images 104 have been segmented and the layer information has been determined, regions of interest (ROIs) within the images 104 can be located or determined. The ROIs can be determined based on an image characteristic such as the estimated atomic number of the ROI (or object), the shape of the ROI, the position or location of the ROI, or the like. The OSR processor 102 can provide ROI/object coordinates 106 as output. The ROI/object coordinates 106 can be associated with the input images 104 or an atomic number image. The ROI/object coordinates 106 can be outputted via a wired or wireless connection, such as a network (e.g., LAN, WAN, Internet or the like) or a direct connection within a system. The ROI/object coordinates 106 can also be outputted via a software connection (e.g., response to a procedure call, standard object access protocol, remote procedure call, or the like).
FIG. 2 is a block diagram of an exemplary object segmentation recognition processor showing an exemplary OSR processor in detail. In addition to the components already described above, the OSR processor 102 includes a segment processing section 202 having a connected region analysis module 204, an edge analysis module 206, a ratio layer analysis module 208 and a blind source separation (BSS) layer analysis module 210. The OSR processor 102 also includes an object ROI section 212 having a layer analysis and segment association module 214 and an object ROI determination module 216.

In operation, the segment processing section 202 receives the images 104. Once received, the images 104 can be processed using one or more image segmentation modules (e.g., the connected region analysis module 204, the edge analysis module 206, or a combination of the above). It will be appreciated that the segmentation modules shown are for illustration purposes and that any known or later developed image segmentation processes can be used. Also, the selection of the number and type of image segmentation modules employed in the OSR processor 102 may depend on a contemplated use of an embodiment, and the selection may be guided by a number of factors including, but not limited to, the type of materials being scanned, the configuration of the scanning system and objects being scanned, desired performance characteristics, time available for processing, or the like. The individual segmentation processes, layer analysis processes, or both may be performed sequentially, in parallel, or in a combination of the above, and in any suitable order.

Once the segmentation processing (object segmentation, layer analysis, or both) has been completed, the resulting image segment data can be provided to the object ROI section 212. In the object ROI section 212, the layers and segments of the image segment data are analyzed using the layer analysis and segment association module 214 and combined or associated to produce segment-layer data that contains information about objects and layers within the images 104. The images 104 can be processed by one or more layer analysis modules (e.g., the ratio layer analysis module 208, the BSS layer analysis module 210, or a combination of the above) within the layer analysis and segment association module 214.
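The patent does not prescribe an implementation for the connected region analysis module 204, but its basic operation can be sketched as a flood-fill labeling of foreground pixels in a binary image. The function name and the choice of 4-connectivity below are illustrative assumptions, not details from the specification:

```python
from collections import deque

def label_connected_regions(binary):
    """Label 4-connected foreground regions in a 2-D binary image.

    Breadth-first flood fill; returns a label image and the number of
    regions found. A minimal sketch only -- a production system would
    use an optimized union-find or a library routine.
    """
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and labels[r][c] == 0:
                next_label += 1
                labels[r][c] = next_label
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

Each distinct label would correspond to a candidate image segment that could then be passed on for layer association.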
images 104 and has been adjusted or corrected based on layers and segments to provide an image suitable for identification of ROIs. The segment-layer data can also be represented in any form suitable for transmitting the information that may be needed to analyze theimages 104. The segment-layer data is then provided to the objectROI determination module 216 for analysis and identification of ROIs. - The object
ROI determination module 216 can use one or more image characteristics to identify ROIs within theimages 104 or the segment-layer data. Image characteristics can include an estimated atomic number for a portion of the image (e.g., a pixel, segment, object, region or the like), a shape of a segment or object within the image, or a position or location of an object or segment. In general, any image characteristic that is suitable for identifying an ROI can be used. - Once the ROIs have been determined, coordinate data (106) representing each ROI can be provided as output. The output can be provided as described above in connection with
reference number 106 ofFIG. 1 . Also, segment-layer data or an adjusted or corrected atomic number image can be provided in addition to, or as a substitute for, the ROI coordinates. -
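As a hedged illustration of ROI identification by image characteristic, the sketch below flags segments whose estimated atomic number meets a threshold. The function name, the data layout, and the threshold value of 72 are assumptions chosen for illustration, not values from the patent:

```python
def find_rois_by_atomic_number(segments, z_estimates, z_threshold=72.0):
    """Return ROI records for segments whose estimated effective atomic
    number meets or exceeds a threshold (e.g., to surface high-Z
    materials).

    `segments` maps a segment id to bounding-box coordinates
    (x0, y0, x1, y1); `z_estimates` maps a segment id to its estimated
    effective Z. Both layouts are illustrative assumptions.
    """
    rois = []
    for seg_id, bbox in segments.items():
        if z_estimates.get(seg_id, 0.0) >= z_threshold:
            rois.append({"segment": seg_id,
                         "coords": bbox,
                         "estimated_z": z_estimates[seg_id]})
    return rois
```

The returned coordinate records correspond to the ROI/object coordinates 106 that the OSR processor provides as output.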
FIG. 3 is a flowchart showing an exemplary computer-implemented method for image segmentation. Processing begins at step 302 and continues to step 304.

In step 304, one or more radiographic images are obtained. These images can be provided by an imaging system (e.g., an x-ray, magnetic resonance imaging device, computerized tomography device, or the like). In general, any imaging device suitable for generating images that may require segmenting can be used. Processing continues to step 306.

In step 306, the radiographic images are segmented. The segmentation can be performed using one or more image segmentation processes. Examples of segmentation methods include modules or processes for segmentation based on clustering, histograms, edge detection, region growing, level sets, graph partitioning, watersheds, model-based approaches, and multi-scale analysis. Processing continues to step 308.

In step 308, any layers present in the images are determined. The layers can be determined using one or more layer extraction or identification processes. For example, a ratio layer analysis process and a BSS layer analysis process can be used together to identify layers in the images, for example using the layer analysis and segment association module 214 mentioned above. A goal of layer identification and extraction is to remove overlapping effects which may be present. By removing overlapping effects, the true gray level of a material can be determined. Using the true gray level, a material's effective atomic number (and, optionally, material density) can be determined. Using the effective atomic number, the composition of the material can help in identifying illicit materials, so that materials such as special nuclear materials can be detected automatically.

The ratio method of layer identification and overlap effect removal is known in the art as applied to dual energy and is described in "The Utility of X-ray Dual-energy Transmission and Scatter Technologies for Illicit Material Detection," a published Ph.D. dissertation by Lu Qiang, Virginia Polytechnic Institute and State University, Blacksburg, Va., 1999, which is incorporated herein by reference in its entirety. Generally, the ratio method provides a process whereby a computer can solve for one image layer and remove any overlapping effects of another layer. Thus, regions that overlap or may be obscured can be separated into their constituent layers, and a true gray level can be determined for each layer. The true gray level can be used to more accurately determine an estimated atomic number and/or material type.
Blind source separation (or blind signal separation) is a technique known in the art, and refers generally to the separation of a set of source signals from a set of mixed signals without the aid of information (or with very little information) about the source signals or the mixing process. However, if information about how the signals were mixed (e.g., a mixing matrix) can be estimated, it can then be used to determine an un-mixing matrix, which can be applied to separate the components within a mixed signal.
The BSS method may be limited by the amount of independence between materials within the mixture. Several techniques exist for estimating the mixing matrix; some use an unsupervised learning process. The process can include incrementally changing and weighting coefficients of the mixing matrix and evaluating the mixing matrix until optimal conditions are met. Once the mixing matrix is estimated, un-mixing coefficients can be computed. Examples of some BSS techniques include projection pursuit gradient ascent, projection pursuit stepwise separation, ICA gradient ascent, and complexity pursuit gradient ascent. In general, an iterative hill climbing or other type of optimization process can be used to estimate the mixing matrix and determine an optimal matrix. Also, contemplated or desired performance levels may require development of custom algorithms that can be tuned to the specific empirical terrain provided by the mixing and un-mixing matrices. Once the layers are identified and overlapping effects are removed, processing continues to step 310.
In step 310, segments that have been identified are associated with any layers that have been determined or identified in step 308. Associating segments with layers can help to remove any overlapping effects and can also improve the ability to determine a true gray value for a segment. Processing continues to step 312.

In step 312, ROIs are determined. The ROIs can be determined based on an image characteristic as described above. Processing continues to step 314.

In step 314, a gray level atomic number image is optionally adjusted to reflect the corrections or adjustments provided by the layer determination. The adjustments or corrections can include changes related to removal of overlap effects or other changes. Processing continues to step 316.

In step 316, the ROI coordinates and, optionally, the adjusted or corrected gray level image are provided as output to an operator or another system. The output can be in a standard format or in a proprietary format. Processing continues to step 318, where the method ends. It will be appreciated that steps 304-316 can be repeated in whole or in part to perform a contemplated image segmentation process.
FIG. 4 is a block diagram of an exemplary object segmentation recognition apparatus showing data flow and processing modules. In particular, four gray scale radiographic images (402-408), each generated using a different energy level, are provided to an effective Z-value determination module 410. The effective Z-value determination module 410 determines a pixel-level Z-value gray scale image 412.

The pixel-level Z-value gray scale image 412 can be provided to an image segmentation and layer analysis module 414. The segmentation and layer analysis module 414 segments the image and analyzes layers, as described above, to generate a layer corrected image representing true gray values, ROI coordinates, or both.

It will be appreciated that the modules, processes, systems, and sections described above can be implemented in hardware, software, or both. Also, the modules, processes, systems, and sections can be implemented as a single processor or as a distributed processor. Further, it should be appreciated that the steps mentioned above may be performed on a single or distributed processor. Also, the processes, modules, and sub-modules described in the various figures of the embodiments above may be distributed across multiple computers or systems or may be co-located in a single processor or system. Exemplary structural embodiment alternatives suitable for implementing the modules, sections, systems, means, or processes described herein are provided below.
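One common way a module like the effective Z-value determination module 410 can map multi-energy measurements to an effective atomic number is through a calibration table. The sketch below uses a made-up ratio-vs-Z table and linear interpolation for a single pixel from two of the energy images; the calibration pairs, function name, and full-scale value are assumptions for illustration only, since the patent does not specify the method:

```python
import bisect
import math

# Illustrative calibration: ratio of high- to low-energy log attenuation
# versus effective atomic number. These pairs are made up for the sketch;
# a real system would derive them from measurements of known materials.
CALIBRATION = [(0.55, 6.0), (0.65, 13.0), (0.75, 26.0),
               (0.85, 50.0), (0.95, 82.0)]

def effective_z(i_low, i_high, i0=65535.0):
    """Estimate an effective atomic number for one pixel from two
    energy images by interpolating a ratio-vs-Z calibration table."""
    mu_low = math.log(i0 / i_low)     # log attenuation, low energy
    mu_high = math.log(i0 / i_high)   # log attenuation, high energy
    ratio = mu_high / mu_low
    ratios = [r for r, _ in CALIBRATION]
    i = bisect.bisect_left(ratios, ratio)
    if i == 0:                        # below the table: clamp
        return CALIBRATION[0][1]
    if i == len(CALIBRATION):         # above the table: clamp
        return CALIBRATION[-1][1]
    (r0, z0), (r1, z1) = CALIBRATION[i - 1], CALIBRATION[i]
    return z0 + (z1 - z0) * (ratio - r0) / (r1 - r0)
```

Applying this per pixel across the registered input images would produce the pixel-level Z-value gray scale image 412 that feeds the segmentation and layer analysis module 414.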
- The modules, processors or systems described above can be implemented as a programmed general purpose computer, an electronic device programmed with microcode, a hard-wired analog logic circuit, software stored on a computer-readable medium or signal, a programmed kiosk, an optical computing device, a GUI on a display, a networked system of electronic and/or optical devices, a special purpose computing device, an integrated circuit device, a semiconductor chip, and a software module or object stored on a computer-readable medium or signal, for example.
- Embodiments of the method and system (or their sub-components or modules), may be implemented on a general-purpose computer, a special-purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit element, an ASIC or other integrated circuit, a digital signal processor, a hardwired electronic or logic circuit such as a discrete element circuit, a programmed logic circuit such as a PLD, PLA, FPGA, PAL, or the like. In general, any process capable of implementing the functions or steps described herein can be used to implement embodiments of the method, system, or a computer program product (software program).
- Furthermore, embodiments of the disclosed method, system, and computer program product may be readily implemented, fully or partially, in software using, for example, object or object-oriented software development environments that provide portable source code that can be used on a variety of computer platforms. Alternatively, embodiments of the disclosed method, system, and computer program product can be implemented partially or fully in hardware using, for example, standard logic circuits or a VLSI design. Other hardware or software can be used to implement embodiments depending on the speed and/or efficiency requirements of the systems, the particular function, and/or particular software or hardware system, microprocessor, or microcomputer being utilized. Embodiments of the method, system, and computer program product can be implemented in hardware and/or software using any known or later developed systems or structures, devices and/or software by those of ordinary skill in the applicable art from the function description provided herein and with a general basic knowledge of the computer, image processing, radiographic, and/or threat detection arts.
- Moreover, embodiments of the disclosed method, system, and computer program product can be implemented in software executed on a programmed general purpose computer, a special purpose computer, a microprocessor, or the like. Also, the method of this invention can be implemented as a program embedded on a personal computer such as a JAVA® or CGI script, as a resource residing on a server or image processing workstation, as a routine embedded in a dedicated processing system, or the like. The method and system can also be implemented by physically incorporating the method into a software and/or hardware system, such as the hardware and software systems of multi-energy radiographic cargo inspection systems.
- It is, therefore, apparent that there is provided, in accordance with the present invention, a method, computer system, and computer software program for image segmentation. While this invention has been described in conjunction with a number of embodiments, it is evident that many alternatives, modifications and variations would be or are apparent to those of ordinary skill in the applicable arts. Accordingly, Applicant intends to embrace all such alternatives, modifications, equivalents and variations that are within the spirit and scope of this invention.
Claims (20)
1. A system for segmenting radiographic images of a cargo container, the system comprising:
an object segmentation recognition module adapted to perform a series of functions including:
receiving a plurality of radiographic images of a cargo container, each image generated using a different energy level;
segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data representing one or more image segments;
identifying image layers within the radiographic images using a plurality of layer analysis modules by providing the plurality of radiographic images and the segmentation data as input to the layer analysis modules, and determining adjusted atomic number values for an atomic number image based on the image layers;
adjusting the atomic number image based on the adjusted atomic number values for the regions of interest to generate an adjusted atomic number image;
identifying regions of interest within the adjusted atomic number image based on an image characteristic; and
providing coordinates of each region of interest and the adjusted atomic number image as output.
2. The system of claim 1 , wherein the plurality of radiographic images are generated using four energy levels.
3. The system of claim 1 , wherein identifying regions of interest includes comparing an estimated atomic value of each image segment to a threshold value.
4. The system of claim 1 , wherein the plurality of layer analysis modules include a first layer analysis module and a second layer analysis module and the function of identifying image layers includes combining the output of the first layer analysis module with the second layer analysis module.
5. The system of claim 1 , wherein the image characteristic is an estimated atomic value of a portion of the atomic number image.
6. The system of claim 1 , wherein the image characteristic includes an image segment shape.
7. The system of claim 1 , wherein the function of providing coordinates of each region of interest includes providing the coordinates of each region of interest and the corrected atomic number image to an operator station.
8. The system of claim 1 , wherein the function of providing coordinates of each region of interest includes providing the coordinates of each region of interest and the corrected atomic number image to another system.
9. A method for segmenting radiographic images comprising:
providing a plurality of radiographic images;
providing a pixel level estimated atomic number image;
segmenting the radiographic images using one or more segmentation modules to generate segmentation data;
identifying image layers within each of the radiographic images;
analyzing the image layers using the plurality of radiographic images and the segmentation data in order to determine corrected atomic number values for objects in the images;
correcting the pixel level atomic number image based on the corrected atomic number values to generate an object level atomic number image;
identifying regions of interest within the object level atomic number image based on an image characteristic; and
outputting coordinates of each region of interest and the object level atomic number image as output.
10. The method of claim 9 , wherein the plurality of radiographic images are images of a cargo container.
11. The method of claim 9 , wherein the plurality of radiographic images includes four images each image being generated using a different energy level.
12. The method of claim 9 , wherein identifying regions of interest includes comparing an estimated atomic value of each image object to a threshold value.
13. The method of claim 9 , wherein the step of identifying image layers includes using a first layer analysis module and a second layer analysis module and combining the output of the first layer analysis module with the second layer analysis module.
14. The method of claim 9 , wherein the image characteristic is an estimated atomic value of a portion of the object level atomic number image.
15. A radiographic image segmentation apparatus comprising:
means for receiving a radiographic image;
means for segmenting each of the radiographic images using one or more segmentation modules to generate segmentation data;
means for identifying image layers to produce image layer data;
means for identifying regions of interest based on the segmentation data and image layer data;
means for analyzing the image layers in order to determine corrected atomic number values for the regions of interest;
means for correcting an atomic number image based on the corrected atomic number values for the regions of interest; and
means for outputting region of interest coordinates and the corrected atomic number image as output.
16. The apparatus of claim 15 , wherein the radiographic image is an image of a cargo container.
17. The apparatus of claim 15 , wherein the means for receiving a radiographic image further includes means for receiving a plurality of radiographic images.
18. The apparatus of claim 17 , wherein each radiographic image is generated using a different energy level.
19. The apparatus of claim 18 , wherein the means for analyzing the image layers includes using the plurality of radiographic images and the segmentation data.
20. The apparatus of claim 15 , wherein the means for identifying image layers includes first means for identifying image layers and second means for identifying image layers, the image layer data being produced using output from the first means for identifying image layers and output from the second means for identifying image layers.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/129,036 US20090003651A1 (en) | 2007-05-29 | 2008-05-29 | Object segmentation recognition |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US94063207P | 2007-05-29 | 2007-05-29 | |
US12/129,036 US20090003651A1 (en) | 2007-05-29 | 2008-05-29 | Object segmentation recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
US20090003651A1 true US20090003651A1 (en) | 2009-01-01 |
Family
ID=40088192
Family Applications (7)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,383 Expired - Fee Related US8094874B2 (en) | 2007-05-29 | 2008-05-29 | Material context analysis |
US12/129,055 Abandoned US20090052622A1 (en) | 2007-05-29 | 2008-05-29 | Nuclear material detection system |
US12/129,036 Abandoned US20090003651A1 (en) | 2007-05-29 | 2008-05-29 | Object segmentation recognition |
US12/129,371 Abandoned US20090052762A1 (en) | 2007-05-29 | 2008-05-29 | Multi-energy radiographic system for estimating effective atomic number using multiple ratios |
US12/129,439 Abandoned US20080298544A1 (en) | 2007-05-29 | 2008-05-29 | Genetic tuning of coefficients in a threat detection system |
US12/129,393 Abandoned US20090055344A1 (en) | 2007-05-29 | 2008-05-29 | System and method for arbitrating outputs from a plurality of threat analysis systems |
US12/129,410 Abandoned US20090003699A1 (en) | 2007-05-29 | 2008-05-29 | User guided object segmentation recognition |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,383 Expired - Fee Related US8094874B2 (en) | 2007-05-29 | 2008-05-29 | Material context analysis |
US12/129,055 Abandoned US20090052622A1 (en) | 2007-05-29 | 2008-05-29 | Nuclear material detection system |
Family Applications After (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/129,371 Abandoned US20090052762A1 (en) | 2007-05-29 | 2008-05-29 | Multi-energy radiographic system for estimating effective atomic number using multiple ratios |
US12/129,439 Abandoned US20080298544A1 (en) | 2007-05-29 | 2008-05-29 | Genetic tuning of coefficients in a threat detection system |
US12/129,393 Abandoned US20090055344A1 (en) | 2007-05-29 | 2008-05-29 | System and method for arbitrating outputs from a plurality of threat analysis systems |
US12/129,410 Abandoned US20090003699A1 (en) | 2007-05-29 | 2008-05-29 | User guided object segmentation recognition |
Country Status (1)
Country | Link |
---|---|
US (7) | US8094874B2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090003699A1 (en) * | 2007-05-29 | 2009-01-01 | Peter Dugan | User guided object segmentation recognition |
CN104778444A (en) * | 2015-03-10 | 2015-07-15 | 公安部交通管理科学研究所 | Method for analyzing apparent characteristic of vehicle image in road scene |
US9476923B2 (en) | 2011-06-30 | 2016-10-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material |
RU2721182C2 (en) * | 2014-09-10 | 2020-05-18 | Смитс Хейманн Сас | Determining degree of homogeneity in images |
Families Citing this family (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102007028895B4 (en) * | 2007-06-22 | 2010-07-15 | Siemens Ag | Method for segmenting structures in 3D image data sets |
US8200015B2 (en) * | 2007-06-22 | 2012-06-12 | Siemens Aktiengesellschaft | Method for interactively segmenting structures in image data records and image processing unit for carrying out the method |
KR20100038046A (en) * | 2008-10-02 | 2010-04-12 | 가부시키가이샤 한도오따이 에네루기 켄큐쇼 | Touch panel and method for driving the same |
KR20110032047A (en) * | 2009-09-22 | 2011-03-30 | 삼성전자주식회사 | Multi-energy x-ray system, multi-energy x-ray material discriminated image processing unit, and method for processing material discriminated images of the multi-energy x-ray system |
JP5740132B2 (en) | 2009-10-26 | 2015-06-24 | 株式会社半導体エネルギー研究所 | Display device and semiconductor device |
US9036782B2 (en) * | 2010-08-06 | 2015-05-19 | Telesecurity Sciences, Inc. | Dual energy backscatter X-ray shoe scanning device |
US20120113146A1 (en) * | 2010-11-10 | 2012-05-10 | Patrick Michael Virtue | Methods, apparatus and articles of manufacture to combine segmentations of medical diagnostic images |
US8924325B1 (en) * | 2011-02-08 | 2014-12-30 | Lockheed Martin Corporation | Computerized target hostility determination and countermeasure |
EP2677936B1 (en) * | 2011-02-25 | 2021-09-29 | Smiths Heimann GmbH | Image reconstruction based on parametric models |
US9824467B2 (en) * | 2011-06-30 | 2017-11-21 | Analogic Corporation | Iterative image reconstruction |
US9705607B2 (en) * | 2011-10-03 | 2017-07-11 | Cornell University | System and methods of acoustic monitoring |
JP5895624B2 (en) * | 2012-03-14 | 2016-03-30 | オムロン株式会社 | Image processing apparatus, image processing method, control program, and recording medium |
US9589188B2 (en) * | 2012-11-14 | 2017-03-07 | Varian Medical Systems, Inc. | Method and apparatus pertaining to identifying objects of interest in a high-energy image |
GB2508841A (en) * | 2012-12-12 | 2014-06-18 | Ibm | Computing prioritised general arbitration rules for conflicting rules |
US9697467B2 (en) | 2014-05-21 | 2017-07-04 | International Business Machines Corporation | Goal-driven composition with preferences method and system |
US9785755B2 (en) | 2014-05-21 | 2017-10-10 | International Business Machines Corporation | Predictive hypothesis exploration using planning |
US9118714B1 (en) * | 2014-07-23 | 2015-08-25 | Lookingglass Cyber Solutions, Inc. | Apparatuses, methods and systems for a cyber threat visualization and editing user interface |
CN104482996B (en) * | 2014-12-24 | 2019-03-15 | 胡桂标 | The material kind of passive nuclear level sensing device corrects measuring system |
US9687207B2 (en) * | 2015-04-01 | 2017-06-27 | Toshiba Medical Systems Corporation | Pre-reconstruction calibration, data correction, and material decomposition method and apparatus for photon-counting spectrally-resolving X-ray detectors and X-ray imaging |
US10078150B2 (en) | 2015-04-14 | 2018-09-18 | Board Of Regents, The University Of Texas System | Detecting and quantifying materials in containers utilizing an inverse algorithm with adaptive regularization |
US9760801B2 (en) * | 2015-05-12 | 2017-09-12 | Lawrence Livermore National Security, Llc | Identification of uncommon objects in containers |
IL239191A0 (en) * | 2015-06-03 | 2015-11-30 | Amir B Geva | Image classification system |
CN106353828B (en) | 2015-07-22 | 2018-09-21 | 清华大学 | The method and apparatus that checked property body weight is estimated in safe examination system |
US11836650B2 (en) | 2016-01-27 | 2023-12-05 | Microsoft Technology Licensing, Llc | Artificial intelligence engine for mixing and enhancing features from one or more trained pre-existing machine-learning models |
US10733531B2 (en) | 2016-01-27 | 2020-08-04 | Bonsai AI, Inc. | Artificial intelligence engine having an architect module |
US11841789B2 (en) | 2016-01-27 | 2023-12-12 | Microsoft Technology Licensing, Llc | Visual aids for debugging |
US11868896B2 (en) | 2016-01-27 | 2024-01-09 | Microsoft Technology Licensing, Llc | Interface for working with simulations on premises |
US20180357543A1 (en) * | 2016-01-27 | 2018-12-13 | Bonsai AI, Inc. | Artificial intelligence system configured to measure performance of artificial intelligence over time |
US11775850B2 (en) | 2016-01-27 | 2023-10-03 | Microsoft Technology Licensing, Llc | Artificial intelligence engine having various algorithms to build different concepts contained within a same AI model |
US10204226B2 (en) | 2016-12-07 | 2019-02-12 | General Electric Company | Feature and boundary tuning for threat detection in industrial asset control system |
US11120297B2 (en) * | 2018-11-30 | 2021-09-14 | International Business Machines Corporation | Segmentation of target areas in images |
US10939044B1 (en) * | 2019-08-27 | 2021-03-02 | Adobe Inc. | Automatically setting zoom level for image capture |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070211248A1 (en) * | 2006-01-17 | 2007-09-13 | Innovative American Technology, Inc. | Advanced pattern recognition systems for spectral analysis |
US20090010386A1 (en) * | 2003-09-15 | 2009-01-08 | Peschmann Kristian R | Methods and Systems for Rapid Detection of Concealed Objects Using Fluorescence |
US20090052622A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Nuclear material detection system |
Family Cites Families (57)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US538758A (en) * | 1895-05-07 | Richard watkins | ||
US73518A (en) * | 1868-01-21 | Luke fitzpatrick and jacob schinneller | ||
EP0161324B1 (en) * | 1984-05-14 | 1987-11-25 | Matsushita Electric Industrial Co., Ltd. | Quantum-counting radiography method and apparatus |
US5132998A (en) * | 1989-03-03 | 1992-07-21 | Matsushita Electric Industrial Co., Ltd. | Radiographic image processing method and photographic imaging apparatus therefor |
US5319547A (en) * | 1990-08-10 | 1994-06-07 | Vivid Technologies, Inc. | Device and method for inspection of baggage and other objects |
US5600303A (en) * | 1993-01-15 | 1997-02-04 | Technology International Incorporated | Detection of concealed explosives and contraband |
US5600700A (en) * | 1995-09-25 | 1997-02-04 | Vivid Technologies, Inc. | Detecting explosives or other contraband by employing transmitted and scattered X-rays |
US5642393A (en) * | 1995-09-26 | 1997-06-24 | Vivid Technologies, Inc. | Detecting contraband by employing interactive multiprobe tomography |
US6018562A (en) * | 1995-11-13 | 2000-01-25 | The United States Of America As Represented By The Secretary Of The Army | Apparatus and method for automatic recognition of concealed objects using multiple energy computed tomography |
US6026171A (en) * | 1998-02-11 | 2000-02-15 | Analogic Corporation | Apparatus and method for detection of liquids in computed tomography data |
US6236709B1 (en) * | 1998-05-04 | 2001-05-22 | Ensco, Inc. | Continuous high speed tomographic imaging system and method |
US7394363B1 (en) * | 1998-05-12 | 2008-07-01 | Bahador Ghahramani | Intelligent multi purpose early warning system for shipping containers, components therefor and methods of making the same |
US6282305B1 (en) * | 1998-06-05 | 2001-08-28 | Arch Development Corporation | Method and system for the computerized assessment of breast cancer risk |
US6567496B1 (en) * | 1999-10-14 | 2003-05-20 | Sychev Boris S | Cargo inspection apparatus and process |
DE19954663B4 (en) * | 1999-11-13 | 2006-06-08 | Smiths Heimann Gmbh | Method and device for determining a material of a detected object |
CA2348150C (en) * | 2000-05-25 | 2007-03-13 | Esam M.A. Hussein | Non-rotating x-ray system for three-dimensional, three-parameter imaging |
US20020186875A1 (en) * | 2001-04-09 | 2002-12-12 | Burmer Glenna C. | Computer methods for image pattern recognition in organic material |
US6969861B2 (en) * | 2001-10-02 | 2005-11-29 | Konica Corporation | Cassette for radiographic imaging, radiographic image reading apparatus and radiographic image reading method |
WO2003038749A1 (en) * | 2001-10-31 | 2003-05-08 | Icosystem Corporation | Method and system for implementing evolutionary algorithms |
US6816571B2 (en) * | 2002-02-06 | 2004-11-09 | L-3 Communications Security And Detection Systems Corporation Delaware | Method and apparatus for transmitting information about a target object between a prescanner and a CT scanner |
WO2003067371A2 (en) * | 2002-02-08 | 2003-08-14 | Giger Maryellen L | Method and system for risk-modulated diagnosis of disease |
US7162005B2 (en) * | 2002-07-19 | 2007-01-09 | Varian Medical Systems Technologies, Inc. | Radiation sources and compact radiation scanning systems |
US7356115B2 (en) * | 2002-12-04 | 2008-04-08 | Varian Medical Systems Technology, Inc. | Radiation scanning units including a movable platform |
US7103137B2 (en) * | 2002-07-24 | 2006-09-05 | Varian Medical Systems Technology, Inc. | Radiation scanning of objects for contraband |
AU2003270910A1 (en) * | 2002-09-27 | 2004-04-19 | Scantech Holdings, Llc | System for alternately pulsing energy of accelerated electrons bombarding a conversion target |
AU2003294600A1 (en) * | 2002-12-10 | 2004-06-30 | Digitome Corporation | Volumetric 3d x-ray imaging system for baggage inspection including the detection of explosives |
WO2005024845A2 (en) * | 2003-04-08 | 2005-03-17 | Lawrence Berkeley National Laboratory | Detecting special nuclear materials in containers using high-energy gamma rays emitted by fission products |
US20050058242A1 (en) * | 2003-09-15 | 2005-03-17 | Peschmann Kristian R. | Methods and systems for the rapid detection of concealed objects |
US7092485B2 (en) | 2003-05-27 | 2006-08-15 | Control Screening, Llc | X-ray inspection system for detecting explosives and other contraband |
US6937692B2 (en) * | 2003-06-06 | 2005-08-30 | Varian Medical Systems Technologies, Inc. | Vehicle mounted inspection systems and methods |
US7697743B2 (en) * | 2003-07-03 | 2010-04-13 | General Electric Company | Methods and systems for prescribing parameters for tomosynthesis |
US7433507B2 (en) * | 2003-07-03 | 2008-10-07 | Ge Medical Systems Global Technology Co. | Imaging chain for digital tomosynthesis on a flat panel detector |
US7492855B2 (en) * | 2003-08-07 | 2009-02-17 | General Electric Company | System and method for detecting an object |
US7366282B2 (en) * | 2003-09-15 | 2008-04-29 | Rapiscan Security Products, Inc. | Methods and systems for rapid detection of concealed objects using fluorescence |
WO2005022554A2 (en) * | 2003-08-27 | 2005-03-10 | Scantech Holdings, Llc | Radiographic inspection system |
WO2005111590A2 (en) * | 2004-02-06 | 2005-11-24 | Scantech Holdings, Llc | Non-intrusive inspection systems for large container screening and inspection |
US7609807B2 (en) * | 2004-02-17 | 2009-10-27 | General Electric Company | CT-Guided system and method for analyzing regions of interest for contraband detection |
US7423273B2 (en) * | 2004-03-01 | 2008-09-09 | Varian Medical Systems Technologies, Inc. | Object examination by delayed neutrons |
US7340443B2 (en) * | 2004-05-14 | 2008-03-04 | Lockheed Martin Corporation | Cognitive arbitration system |
US7190757B2 (en) * | 2004-05-21 | 2007-03-13 | Analogic Corporation | Method of and system for computing effective atomic number images in multi-energy computed tomography |
US20060269140A1 (en) * | 2005-03-15 | 2006-11-30 | Ramsay Thomas E | System and method for identifying feature of interest in hyperspectral data |
US7356118B2 (en) | 2004-10-22 | 2008-04-08 | Scantech Holdings, Llc | Angled-beam detection system for container inspection |
WO2007011403A2 (en) * | 2004-10-22 | 2007-01-25 | Scantech Holdings, Llc | Cryptographic container security system |
US20060256914A1 (en) * | 2004-11-12 | 2006-11-16 | Might Matthew B | Non-intrusive container inspection system using forward-scattered radiation |
US7847260B2 (en) * | 2005-02-04 | 2010-12-07 | Dan Inbar | Nuclear threat detection |
US20060204107A1 (en) | 2005-03-04 | 2006-09-14 | Lockheed Martin Corporation | Object recognition system using dynamic length genetic training |
US7336767B1 (en) * | 2005-03-08 | 2008-02-26 | Khai Minh Le | Back-scattered X-ray radiation attenuation method and apparatus |
CA2608119A1 (en) * | 2005-05-11 | 2006-11-16 | Optosecurity Inc. | Method and system for screening luggage items, cargo containers or persons |
US7261466B2 (en) * | 2005-06-01 | 2007-08-28 | Endicott Interconnect Technologies, Inc. | Imaging inspection apparatus with directional cooling |
CN100582758C (en) * | 2005-11-03 | 2010-01-20 | 清华大学 | Method and apparatus for recognizing materials by using fast neutrons and continuous energy spectrum X rays |
EP1951119A2 (en) * | 2005-11-09 | 2008-08-06 | Dexela Limited | Methods and apparatus for obtaining low-dose imaging |
US7536365B2 (en) * | 2005-12-08 | 2009-05-19 | Northrop Grumman Corporation | Hybrid architecture for acquisition, recognition, and fusion |
US7483511B2 (en) * | 2006-06-06 | 2009-01-27 | Ge Homeland Protection, Inc. | Inspection system and method |
US8015127B2 (en) * | 2006-09-12 | 2011-09-06 | New York University | System, method, and computer-accessible medium for providing a multi-objective evolutionary optimization of agent-based models |
US8110812B2 (en) * | 2006-10-25 | 2012-02-07 | Soreq Nuclear Research Center | Method and system for detecting nitrogenous materials via gamma-resonance absorption (GRA) |
US7492862B2 (en) * | 2007-01-17 | 2009-02-17 | Ge Homeland Protection, Inc. | Computed tomography cargo inspection system and method |
US7706502B2 (en) * | 2007-05-31 | 2010-04-27 | Morpho Detection, Inc. | Cargo container inspection system and apparatus |
- 2008
- 2008-05-29 US US12/129,383 patent/US8094874B2/en not_active Expired - Fee Related
- 2008-05-29 US US12/129,055 patent/US20090052622A1/en not_active Abandoned
- 2008-05-29 US US12/129,036 patent/US20090003651A1/en not_active Abandoned
- 2008-05-29 US US12/129,371 patent/US20090052762A1/en not_active Abandoned
- 2008-05-29 US US12/129,439 patent/US20080298544A1/en not_active Abandoned
- 2008-05-29 US US12/129,393 patent/US20090055344A1/en not_active Abandoned
- 2008-05-29 US US12/129,410 patent/US20090003699A1/en not_active Abandoned
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090010386A1 (en) * | 2003-09-15 | 2009-01-08 | Peschmann Kristian R | Methods and Systems for Rapid Detection of Concealed Objects Using Fluorescence |
US20070211248A1 (en) * | 2006-01-17 | 2007-09-13 | Innovative American Technology, Inc. | Advanced pattern recognition systems for spectral analysis |
US20090052622A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Nuclear material detection system |
US20090052732A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Material context analysis |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090003699A1 (en) * | 2007-05-29 | 2009-01-01 | Peter Dugan | User guided object segmentation recognition |
US20090052732A1 (en) * | 2007-05-29 | 2009-02-26 | Peter Dugan | Material context analysis |
US8094874B2 (en) * | 2007-05-29 | 2012-01-10 | Lockheed Martin Corporation | Material context analysis |
US9476923B2 (en) | 2011-06-30 | 2016-10-25 | Commissariat A L'energie Atomique Et Aux Energies Alternatives | Method and device for identifying a material by the spectral analysis of electromagnetic radiation passing through said material |
RU2721182C2 (en) * | 2014-09-10 | 2020-05-18 | Смитс Хейманн Сас | Determining degree of homogeneity in images |
CN104778444A (en) * | 2015-03-10 | 2015-07-15 | 公安部交通管理科学研究所 | Method for analyzing apparent characteristic of vehicle image in road scene |
Also Published As
Publication number | Publication date |
---|---|
US20090052762A1 (en) | 2009-02-26 |
US8094874B2 (en) | 2012-01-10 |
US20090003699A1 (en) | 2009-01-01 |
US20090052622A1 (en) | 2009-02-26 |
US20090052732A1 (en) | 2009-02-26 |
US20090055344A1 (en) | 2009-02-26 |
US20080298544A1 (en) | 2008-12-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20090003651A1 (en) | Object segmentation recognition | |
US10325178B1 (en) | Systems and methods for image preprocessing to improve accuracy of object recognition | |
US9426449B2 (en) | Depth map generation from a monoscopic image based on combined depth cues | |
CN109685060B (en) | Image processing method and device | |
KR101932009B1 (en) | Image processing apparatus and method for multiple object detection | |
EP3101594A1 (en) | Saliency information acquisition device and saliency information acquisition method | |
CN109997351B (en) | Method and apparatus for generating high dynamic range images | |
US7970212B2 (en) | Method for automatic detection and classification of objects and patterns in low resolution environments | |
CN109363699B (en) | Method and device for identifying focus of breast image | |
EP2584529A2 (en) | Method of image processing and device therefore | |
US10452922B2 (en) | IR or thermal image enhancement method based on background information for video analysis | |
CN100566655C (en) | Method for processing an image to determine image characteristics or analysis candidates |
US9940545B2 (en) | Method and apparatus for detecting anatomical elements | |
CN101061511A (en) | Detection and correction method for radiographic orientation | |
US9251418B2 (en) | Method of detection of points of interest in a digital image | |
US20130070997A1 (en) | Systems, methods, and media for on-line boosting of a classifier | |
US6353674B1 (en) | Method of segmenting a radiation image into direct exposure area and diagnostically relevant area | |
KR101423835B1 (en) | Liver segmentation method in a medical image | |
CN111160477A (en) | Image template matching method based on feature point detection | |
US6608915B2 (en) | Image processing method and apparatus | |
Mohan et al. | An intelligent recognition system for identification of wood species | |
CN112200775A (en) | Image definition detection method and device, electronic equipment and storage medium | |
EP0887769B1 (en) | Method of segmenting a radiation image into direct exposure area and diagnostically relevant area | |
Widyaningsih et al. | Optimization Contrast Enhancement and Noise Reduction for Semantic Segmentation of Oil Palm Aerial Imagery. | |
CN109977816B (en) | Information processing method, device, terminal and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: LOCKHEED MARTIN CORPORATION, MARYLAND Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DUGAN, PETER;FINCH, ROBERT L.;PARADIS, ROSEMARY D.;AND OTHERS;REEL/FRAME:021810/0201;SIGNING DATES FROM 20080627 TO 20081105 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |