CN112950664B - Target object positioning and labeling method and device based on sliding profile - Google Patents
Target object positioning and labeling method and device based on sliding profile
- Publication number: CN112950664B (application CN202110349783.4A)
- Authority: CN (China)
- Prior art keywords: target object, section, dimensional, sliding, profile
- Prior art date
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/13 — Image analysis; Segmentation; Edge detection
- G06T15/005 — 3D image rendering; General purpose rendering architectures
- G06T15/08 — 3D image rendering; Volume rendering
- G06T7/12 — Image analysis; Edge-based segmentation
- G06T2207/10012 — Image acquisition modality; Stereo images
- G06T2207/10081 — Image acquisition modality; Computed x-ray tomography [CT]
- Y02P90/30 — Climate change mitigation in production of goods; Computing systems specially adapted for manufacturing
Abstract
The invention relates to a target object positioning and labeling method and device based on a sliding profile, belonging to the technical field of security inspection image processing. The method comprises the following steps: carrying out CT scanning on the article to construct three-dimensional data; displaying a visual view of the three-dimensional space body with a changeable viewing angle; determining the sliding segment position and the breakpoints; generating section planes perpendicular to the section axis based on the breakpoints; marking the target object on a section plane; extracting the three-dimensional region of the target object; and performing highlight display processing on the target object to obtain a target object image. By arranging a series of section views along the section axis, the method and the device can slide to search for the best position of the target object's cross section, fit the three-dimensional contour of the target object through an algorithm, and perform image emphasis processing, thereby improving the speed and precision of target positioning, solving the problems of conventional methods such as low automation, inaccurate target positioning and labeling, and tedious, time-consuming operation, and effectively guaranteeing the quality of target object labeling.
Description
Technical Field
The invention belongs to the technical field of security inspection image processing, and particularly relates to a target object positioning and labeling method and device based on a sliding profile.
Background
A three-dimensional image can provide more complete and intuitive object information than a two-dimensional image, so a target object contained in it, or content of interest to an observer, can be observed more easily.
In the security inspection field, the two-dimensional perspective image produced by a conventional perspective security inspection apparatus contains object information that overlaps along its viewing angle, making it difficult to distinguish a single object, whereas a security inspection CT (Computed Tomography) apparatus can acquire a three-dimensional image of the scanned object.
At present, the deep learning field needs a large amount of labeled sample data, and the accuracy of a model depends directly on the accuracy of the labeling, so a method for quickly and accurately positioning and labeling a target object in a three-dimensional image is needed.
In the field of security inspection, during Threat Image Projection (TIP) examination or daily security inspection, a security inspector also needs to mark objects in a three-dimensional image.
However, unlike directly framing an object or region of interest in a two-dimensional image, a three-dimensional representation of the object or region of interest must be provided in a three-dimensional image, and simple framing can hardly delineate the complete object.
The existing target object positioning and labeling technology has the following defects:
the CT image is framed directly. Because frame selection can only be performed on a two-dimensional view, the depth information of the view cannot be captured, and misaligned or oversized positioning inevitably occurs. This requires analyzing, with or without changing the viewing angle, all the space covered within the depth extension of the framed range, and then further comparing, selecting, positioning and labeling the target. Particularly when a security package contains multiple items, the prior art locates and marks objects very crudely. The final marked area may therefore include parts of several objects or articles as well as redundant spatial information outside the target, and because the range of viewing angles is limited, objects in an overlapped state cannot be accurately positioned, distinguished and labeled. Using such labeled results as sample data for machine learning training leads to a low actual recognition rate of the trained model.
In addition, because the prior art cannot overcome the problem of spatial misalignment, a single user operation cannot accurately position multiple target objects at different spatial positions; labeling must therefore be performed one target at a time, which undoubtedly increases the workload.
Disclosure of Invention
In view of the above-mentioned shortcomings of the prior art, an object of the present invention is to provide a method and an apparatus for positioning and labeling a target object based on a sliding profile, which improve the accuracy of target positioning and the efficiency of labeling. In the method, a section axis and breakpoints are set along one dimension of a three-dimensional space body, a depth vector is automatically fused to generate a series of section planes, and target object framing and image segmentation are performed on a preferred section plane projection view to obtain the three-dimensional region occupied by the target object. The positioning and labeling of the target object are thus realized directly, providing reliable training sample data for the implementation of related intelligent dangerous goods recognition technology.
A target object positioning and labeling method based on a sliding profile comprises the following steps:
carrying out CT scanning on an article comprising a target object to obtain three-dimensional data of the article;
carrying out volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body visual image capable of changing a visual angle;
determining a section axis on the three-dimensional space body, and a sliding section position and a sliding step length based on the section axis;
generating a breakpoint based on the sliding step length; generating a profile on the three-dimensional space volume based on the breakpoint; wherein the profile plane is perpendicular to the profile axis;
marking a section area of the target object on a section plane containing a section of the target object;
executing an image segmentation algorithm on the section area of the target object, and acquiring a three-dimensional area occupied by the target object from the three-dimensional space body;
and performing highlight display processing on the three-dimensional area occupied by the target object, and performing picture interception and storage on the display image to obtain the positioned and labeled target object image.
Further, the three-dimensional data includes: monoenergetic three-dimensional data, high-energy three-dimensional data, low-energy three-dimensional data, electron density three-dimensional data, and/or equivalent atomic number three-dimensional data.
Further, selecting one of three-dimensional axes of the three-dimensional space body as a section axis;
and arranging at least one sliding section on the section shaft, wherein the sliding section comprises a starting point, an end point and a sliding step length.
Further, generating break points in sequence by taking the sliding step length as an interval in the direction from the starting point to the end point of the sliding section, and stopping increasing the break points until the remaining length of the sliding section is not more than one sliding step length;
starting from each breakpoint, generating a profile plane perpendicular to the profile axis on the three-dimensional space body;
performing reverse-coloring processing on the corresponding layer of the selected section plane at the breakpoint in the three-dimensional space body, and simultaneously presenting a projection view of the section plane at the selected breakpoint; the projection view includes an enlarged image generated by a projection mode such as plane-perpendicular, parallel-beam or fan-beam projection.
Further, the marking the target object cross-sectional area comprises:
selecting a projection view of a section plane containing a cross section of the target object; and framing the cross-sectional area of the target object, or drawing the approximate outline of the cross-sectional area with line segments, or marking the position of the target object inside its cross-sectional area, and/or marking a redundant space range outside the cross-sectional area of the target object.
Further, the executing an image segmentation algorithm to obtain a three-dimensional region occupied by the object from the three-dimensional space body includes:
acquiring a contour map of a section area of the target object on the section plane through the image segmentation algorithm, and acquiring a three-dimensional area occupied by the target object in the three-dimensional space body based on the contour map; or,
directly acquiring a three-dimensional region occupied by the target through the image segmentation algorithm based on the three-dimensional space body and the section region of the target;
the image segmentation algorithm comprises a region growing algorithm, an active contour algorithm, or a graph cut algorithm.
Further, the highlighting process for the three-dimensional area occupied by the object includes: highlighting target object pixels, highlighting target object contours, and/or adding text labels.
A device for realizing a target object positioning and labeling method based on a sliding profile comprises a data processor, a memory, user input equipment, a CT security inspection component and a display component; wherein,
the data processor is operable to read data and computer program instructions stored in the memory, and manipulation instructions from the user input device, for performing the following operations:
controlling a CT security inspection component to scan the packaged articles to obtain tomographic data of the scanning space, and fusing the tomographic data according to three-dimensional spatial position to construct three-dimensional data;
performing volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body;
displaying a visual map of a three-dimensional volume of variable perspective on the display assembly;
registering a DR image with a particular directional visual projection of the three-dimensional volume of space;
drawing a profile axis on the three-dimensional space body, and setting a sliding section position and a sliding step length based on the profile axis;
generating a breakpoint based on the sliding step length; generating a profile on the three-dimensional space volume based on the breakpoint; wherein the profile plane is perpendicular to the profile axis;
carrying out reverse coloring treatment on the corresponding image layer of the selected profile surface at the breakpoint in the three-dimensional space body;
rendering a projection view of the section plane at the selected breakpoint on the display assembly; the projection view includes an enlarged image generated by a projection mode such as plane-perpendicular, parallel-beam or fan-beam projection;
determining a section area of the target object by executing the instruction for marking the target object;
executing an image segmentation processing program on the section area of the target object to obtain a section profile of the target object and a three-dimensional area occupied by the target object in the three-dimensional space body;
performing emphasis display processing on a three-dimensional area occupied by the target object;
and carrying out picture interception and storage operations on the display image.
Further, the execute mark target instruction includes:
selecting a projection view of a section plane containing a cross section of the target object; and framing the cross-sectional area of the target object on the section plane projection view, or drawing the approximate outline of the cross-sectional area with line segments, or marking the position of the target object inside its cross-sectional area, and/or marking a redundant space range outside the cross-sectional area of the target object.
Further, the highlighting process for the three-dimensional area occupied by the object includes: highlighting object pixels, highlighting object contours, and/or adding text labels.
The invention has the following beneficial effects:
according to the method and the device, the optimal position of the section of the target object in the section can be searched in a sliding mode by arranging a series of section pictures on the section axis, the target object area is directly positioned and marked on the selected section picture, the three-dimensional contour of the target object is fitted through an algorithm, and image emphasis processing is performed, so that the target searching efficiency is improved, and the problems of low automation degree, inaccurate target positioning and marking, high requirement on the technical level of a tester, long time consumption for complex operation and the like in the conventional method are solved; meanwhile, gross errors caused by inaccurate target object selection are fundamentally eliminated, the target object labeling quality is effectively guaranteed, and reliable training sample data are provided for the realization of related dangerous article intelligent identification technologies.
Drawings
The drawings, in which like reference numerals refer to like parts throughout, are for the purpose of illustrating particular embodiments only and are not to be considered limiting of the invention. It should be apparent that the drawings in the following description show only some of the embodiments of the present invention, and that other drawings may be obtained from them by those skilled in the art without creative effort.
FIG. 1 is a flowchart of a target object positioning and labeling method based on a sliding profile according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating the determination of breakpoint positions according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a target object according to an embodiment of the present invention.
Reference numerals:
1, three-dimensional space body; 2, section plane; 3, clutter; 4, target object; 5, breakpoint; 6, sliding segment; 7, section axis; 8, cross-sectional area of the target object 4; and 9, marking frame.
Detailed Description
In order that those skilled in the art may better understand the technical solutions in the embodiments of the present invention, the following detailed description of the preferred embodiments is provided in conjunction with the accompanying drawings, which form a part of the present application and, together with the embodiments, serve to explain the principles of the invention. It should be understood that these descriptions are only illustrative and are not intended to limit the scope of the invention. The described embodiments are only some, not all, of the embodiments of the invention; all other embodiments derived from them by a person of ordinary skill in the art are intended to fall within the scope of the present invention.
Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure. The embodiments described below, and the features in them, can be combined with each other and/or rearranged in order, provided there is no conflict.
The invention provides a target object positioning and labeling method and device based on a sliding profile, aiming to solve the problems of existing methods: low automation, inaccurate target positioning and labeling, low working efficiency, high demands on the operator's technical skill, and tedious, time-consuming operation.
Method embodiment
One embodiment of the present invention discloses a target object positioning and labeling method based on a sliding profile, as shown in fig. 1, the method includes:
carrying out CT scanning on the article to construct three-dimensional data; displaying a three-dimensional space body visual image capable of changing visual angles; determining the position and the breakpoint of the sliding section; generating a profile surface perpendicular to the profile axis based on the breakpoint; marking a target object based on the section surface; extracting a three-dimensional region of a target object; the target object is subjected to highlight display processing.
Specifically, the method includes steps S1 to S7.
S1, carrying out CT scanning on an article to obtain three-dimensional data of the article.
First, the packaged article (luggage) is placed on a conveyor belt, which is driven by a motor at a constant speed so that the article enters the CT scanning area at a uniform velocity. The CT ray source emits X-ray beams that pass through the article; the CT detector receives the attenuated signals transmitted through the article and continuously sends the sensed signals to a data processor. The data processor reconstructs the data to obtain tomographic data of the scanning space, and fuses all the tomographic data together according to their three-dimensional spatial positions to construct the three-dimensional data.
Wherein the reconstructed tomographic data may include electron density information, equivalent atomic number, etc.
Optionally, the three-dimensional data comprises: monoenergetic three-dimensional data, high-energy three-dimensional data, low-energy three-dimensional data, electron density three-dimensional data, and/or equivalent atomic number three-dimensional data. The three-dimensional data can be a vector matrix of one kind of three-dimensional data or a vector matrix formed by fusing several kinds of three-dimensional data.
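As an illustration of how several kinds of three-dimensional data might be fused into one vector matrix, the following sketch assumes Python/NumPy and random stand-in volumes; the patent does not specify an implementation, so shapes and data here are purely hypothetical.

```python
import numpy as np

# Hypothetical reconstructed volumes, each shaped (Z, Y, X); real data
# would come from the CT reconstruction step described above.
shape = (64, 128, 128)
rng = np.random.default_rng(0)
high_energy = rng.random(shape)
low_energy = rng.random(shape)
electron_density = rng.random(shape)

# Fuse the volumes so that every voxel carries a feature vector
# (channel-last layout), forming the "vector matrix" of the text.
fused = np.stack([high_energy, low_energy, electron_density], axis=-1)
```

Stacking channel-last keeps each modality addressable as `fused[..., i]` while allowing per-voxel feature vectors to be fed to a segmentation algorithm.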
And S2, carrying out volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body visual image with a changeable visual angle.
The three-dimensional data are projected onto the screen of a display device through a certain projection process to obtain a visual image of the three-dimensional space body with a changeable viewing angle; image operations such as changing the viewing angle or transparency of the three-dimensional space body can then be performed through user input devices such as a mouse and keyboard.
Optionally, the projection process includes, but is not limited to, perspective projection (a projection mode that matches natural human vision) or parallel projection.
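To make the two projection modes concrete, here is a minimal sketch of projecting 3D points under perspective versus parallel projection; this is an illustrative NumPy model, not taken from the patent, and the pinhole focal-plane formulation is an assumption.

```python
import numpy as np

def perspective_project(points, focal=1.0):
    """Pinhole-style perspective projection of (N, 3) points onto the
    image plane z = focal; assumes all points have z > 0."""
    pts = np.asarray(points, dtype=np.float64)
    return focal * pts[:, :2] / pts[:, 2:3]

def parallel_project(points):
    """Orthographic (parallel) projection: simply drop the depth coordinate."""
    return np.asarray(points, dtype=np.float64)[:, :2]
```

Under perspective projection nearer points appear larger, matching natural vision; parallel projection preserves sizes regardless of depth.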
And S3, determining a section axis, and a sliding section position and a sliding step length based on the section axis on the three-dimensional space body.
Rotating the three-dimensional space body through user input equipment such as a mouse and a keyboard to select a visual angle; setting the length, the depth and the height of the three-dimensional space body as an X axis, a Y axis and a Z axis respectively under the view angle; and selecting one axis from the X axis, the Y axis or the Z axis as a section axis. And arranging at least one sliding section on the section shaft, wherein the sliding section comprises a starting point, an end point and a sliding step length.
Illustratively, the X-axis is chosen as the profile axis, and the entire profile axis is chosen to be set as one slip segment. The starting point of the profile shaft is a sliding section starting point, the end point of the profile shaft is a sliding section end point, and the sliding step length is set to be m.
Illustratively, the different sliding sections may not coincide on the section axis, and may have a coincident part; the sliding step sizes of the different sliding sections can be set to be equal or unequal according to the required section interval density.
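The breakpoint rule of S3/S4 (breakpoints placed at sliding-step intervals until the remaining length of the sliding segment is no more than one step) can be sketched as follows; whether the starting point itself counts as a breakpoint is an assumption made here, since the text leaves it open.

```python
def breakpoints(start, end, step):
    """Generate breakpoints along a sliding segment at `step` intervals,
    stopping once the remaining length is no more than one step.
    The starting point is assumed to be the first breakpoint."""
    pts = [start]
    while end - pts[-1] > step:
        pts.append(pts[-1] + step)
    return pts
```

For example, `breakpoints(0, 10, 3)` yields `[0, 3, 6, 9]`: the leftover length after 9 is below one step, so no further breakpoint is added.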
Preferably, in determining the position of the preferred section plane, the preferred sliding segment and breakpoint positions need to be determined first. To speed up positioning, on a device that also has perspective imaging, for example a security inspection device with both CT imaging (Computed Tomography) and DR imaging (Digital Radiography), the registration relationship between the DR image and the three-dimensional space body built from the CT three-dimensional data can be used to register the DR image with the visible projection view of the three-dimensional space body in a specific direction. The DR image can then be regarded as a high-definition projection view of the three-dimensional space body, so that, from the position of the target object in the DR image, the approximate position of the target object in the three-dimensional space body can be obtained. This allows the section axis, sliding segment position and sliding step length to be determined quickly, achieving the purpose of rapidly determining the preferred section plane position.
Exemplarily, as shown in fig. 2, in the three-dimensional space body 1, the target object 4 is shielded by the clutter 3. Under the current viewing angle, the position of the section axis 7 is determined, and a sliding segment comprising a starting point, an end point and a sliding step length is determined on the section axis 7.
And S4, generating a breakpoint based on the sliding step length and a profile based on the breakpoint.
On one sliding section, break points are sequentially generated in the direction from the starting point to the ending point of the sliding section by taking the sliding step length as an interval, and the increase of the break points is stopped until the remaining length of the sliding section is not more than one sliding step length.
Starting from each breakpoint, a section plane perpendicular to the section axis is generated on the three-dimensional space body.
Illustratively, the sliding segment is located on the X axis, with length L and sliding step length m; k breakpoints are generated, and k section planes perpendicular to the sliding segment are generated on the three-dimensional space body based on the k breakpoints, all parallel to the YZ plane.
At least one breakpoint is selected through a user input device such as a mouse, pixel highlighting processing such as reverse coloring is carried out on a corresponding layer of a profile surface at the selected breakpoint in the three-dimensional space body, and meanwhile a projection drawing of the profile surface at the selected breakpoint is presented on a display device.
Preferably, when the mouse slides over each breakpoint in sequence, a projection view of the sectional surface at the breakpoint that the mouse is contacting is presented on the display device in sequence.
The projection view includes an enlarged image generated by a projection mode such as plane-perpendicular, parallel-beam or fan-beam projection.
Preferably, when multiple projection views are generated, they may be displayed on the screen in a grid or other arrangement. Each projection view is associated with its breakpoint and with the corresponding layer of that breakpoint in the three-dimensional space body, so the position of each section plane can be identified from its projection view.
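As a sketch of how a section plane and a projection view might be computed from the volume, the following uses a toy NumPy array; this is an illustrative model under assumed axis conventions, not the patent's implementation, and maximum-intensity projection stands in for the parallel-beam view.

```python
import numpy as np

# Toy (X, Y, Z) volume with a block standing in for a packed item.
volume = np.zeros((32, 32, 32))
volume[10:20, 5:15, 5:15] = 1.0

def section_plane(vol, axis, index):
    """Extract the slice perpendicular to the chosen section axis
    at a breakpoint index (a YZ plane when axis=0)."""
    return np.take(vol, index, axis=axis)

def parallel_beam_projection(vol, axis):
    """A simple parallel-beam style view: maximum intensity along the axis."""
    return vol.max(axis=axis)

plane = section_plane(volume, axis=0, index=12)   # section plane at x = 12
view = parallel_beam_projection(volume, axis=0)
```

Sliding the breakpoint then amounts to varying `index`, and each returned 2D array is one of the projection views arranged on the screen.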
Illustratively, as shown in fig. 2, the break point 5 is generated based on the sliding step size, and the profile plane 2 is generated based on the break point 5.
And S5, marking a section area of the target object on the section surface containing the section of the target object.
A preferred section plane containing a cross section of the target object is selected; in the projection view of this section plane, the position of the target object is determined by marking its cross-sectional area.
A preferred section plane is one in which the cross section of the target object presents a relatively clean and clear contour region.
Marking the cross-sectional area of the target object comprises directly framing the extent of the cross-sectional area, or outlining its approximate shape with line segments, or marking its position with shapes such as points, symbols or solid circles inside the cross-sectional area, and/or marking a redundant space range by drawing cross marks outside the cross-sectional area of the target object.
Preferably, some or all of the tags may be revoked or redone.
Optionally, steps S3-S5 may be repeated, and the operations of setting the sliding segment and breakpoints, marking, etc. may be performed multiple times.
And S6, extracting a three-dimensional area of the target object.
For the area marked as the target object in the section view, an image segmentation algorithm is used to acquire the three-dimensional region occupied by the target object in the three-dimensional space body, based on the marked position and the spatial continuity of the target object's data points.
The method for acquiring the three-dimensional area occupied by the target object by adopting the image segmentation algorithm comprises the following two methods.
The first method: based on the marked position, first acquire a cross-sectional contour map of the target object from the section plane through the image segmentation algorithm; then, using the continuity of the target object in three-dimensional space and starting from this cross-sectional contour, acquire the three-dimensional region occupied by the target object through the image segmentation algorithm.
The second method: based on the marked position, directly acquire the three-dimensional region occupied by the target object through the image segmentation algorithm, using the continuity of the target object in three-dimensional space and its marked region within the three-dimensional space body.
The image segmentation algorithm comprises: a region growing (Region Growing) algorithm, an active contour (Active Contour) algorithm, or a graph cut (Graph Cut) algorithm.
Illustratively, a pixel marked at the position of the target object on the profile plane is used as a seed point, and a region growing algorithm is adopted to acquire the three-dimensional region occupied by the target object.
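A minimal sketch of seeded 3D region growing, assuming a 6-connected neighborhood and a simple intensity-similarity criterion (the function name, connectivity, and tolerance are illustrative choices, not the patent's specification):

```python
from collections import deque
import numpy as np

def region_grow_3d(volume, seed, tol=0.1):
    """Return a boolean mask of voxels connected to `seed` whose values
    differ from the seed value by at most `tol` (6-connectivity BFS)."""
    mask = np.zeros(volume.shape, dtype=bool)
    seed_val = volume[seed]
    mask[seed] = True
    queue = deque([seed])
    neighbors = ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1))
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in neighbors:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2]
                    and not mask[nz, ny, nx]
                    and abs(volume[nz, ny, nx] - seed_val) <= tol):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask

# Toy volume: a 3x3x3 block of value 1.0 embedded in zeros.
vol = np.zeros((8, 8, 8))
vol[2:5, 2:5, 2:5] = 1.0
grown = region_grow_3d(vol, (3, 3, 3))
print(int(grown.sum()))  # 27 voxels recovered from a single seed
```

In practice the seed is the voxel under the user's mark, and the similarity criterion would be tuned to the CT data (e.g. density or effective atomic number).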
Preferably, the cross-sectional area of the target object is marked in multiple sectional views, which further accelerates the extraction of the three-dimensional region occupied by the target object and improves its accuracy.
FIG. 3 is a schematic diagram of marking a target object according to an embodiment of the present invention.
Exemplarily, as shown in fig. 3, a profile plane 2 generated at a breakpoint 5 is selected.
In the profile plane 2, the cut-open sundries 3 and target object 4 each present a cross-sectional area, the cross-sectional area of the target object 4 being denoted 8.
A marking frame 9 is drawn on the orthographic projection of the profile plane 2 to frame the position of the target object 4; starting from the position inside the marking frame 9, a fine-positioning image segmentation algorithm then performs a contour search through the space on both sides of the profile plane, following the spatial continuity of the object, and finally obtains the complete three-dimensional region occupied by the target object 4 in the three-dimensional space body.
And S7, performing highlighting display processing on the target object.
The three-dimensional region occupied by the target object is further image-processed so as to highlight the target object.
The further image processing comprises: highlighting the target object pixels, highlighting the target object outline, and/or adding text labels.
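The pixel- and contour-highlighting operations above might be sketched as follows on a rendered grayscale view; the tint colors, blend weight, and channel layout are illustrative assumptions, not the patent's prescription:

```python
import numpy as np

def highlight(view_gray, mask):
    """Tint the target's pixels red and draw its outline in yellow on a
    grayscale rendering. `mask` is the target's 2D projection mask."""
    rgb = np.stack([view_gray] * 3, axis=-1).astype(float)
    # Pixel highlight: blend target pixels toward red.
    rgb[mask] = 0.5 * rgb[mask] + 0.5 * np.array([255.0, 0.0, 0.0])
    # Contour highlight: outline = mask pixels with an outside 4-neighbor.
    padded = np.pad(mask, 1)
    interior = (padded[2:, 1:-1] & padded[:-2, 1:-1]
                & padded[1:-1, 2:] & padded[1:-1, :-2])
    outline = mask & ~interior
    rgb[outline] = [255.0, 255.0, 0.0]
    return rgb.astype(np.uint8)

view = np.full((6, 6), 100, dtype=np.uint8)   # uniform gray rendering
m = np.zeros((6, 6), dtype=bool)
m[2:4, 2:4] = True                            # target's projected mask
out = highlight(view, m)
print(out[0, 0].tolist(), out[2, 2].tolist())
```

A text label would typically be drawn afterwards with the display toolkit's own text API rather than at the pixel level.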
When the target object is highlighted on the three-dimensional image, various visual angles can be selected, and the images under those visual angles can be captured and stored.
The captured picture can be used as a sample for training and testing a machine recognition system.
Device embodiment
The invention discloses a target object positioning and labeling device based on a sliding profile, which comprises a data processor, a memory, user input equipment, a display assembly, a CT security inspection assembly, a data interface and a power supply.
The data processor, memory, user input device, display assembly, CT security inspection assembly, data interface and power supply are all general-purpose components.
The data processor is electrically and/or wirelessly connected with the memory, the user input device, the display assembly, the CT security inspection assembly, the data interface and the power supply respectively.
The data processor is operable to read data and computer program instructions stored in the memory, and manipulation instructions from the user input device, for performing the following operations:
the data processor executes and controls the CT security inspection component to scan the wrapped articles, continuously transmits the received sensing signals into the data processor, and the data processor stores and reconstructs the data to obtain the fault data of a scanning space, and the fault data are fused together according to the three-dimensional space position to construct three-dimensional data.
Performing volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body; projecting the three-dimensional space body onto the display assembly through projection processing to form a visual image of it; image operations such as changing the visual angle and transparency of the three-dimensional space body can be carried out through the user input device.
Registering a DR image with a particular directional visual projection view of the three-dimensional volume of space;
drawing a profile axis on the three-dimensional space body through user input equipment, and setting a sliding section position and a sliding step length based on the profile axis;
generating a breakpoint based on the sliding step length; generating a profile on the three-dimensional space volume based on the breakpoint; wherein the profile plane is perpendicular to the profile axis;
carrying out reverse coloring treatment on the corresponding image layer of the profile surface at the selected breakpoint in the three-dimensional space body;
presenting a projection view of the profile plane at the selected breakpoint on the display assembly; the projection view comprises an enlarged image produced by plane-perpendicular, parallel-beam, fan-beam or other projection modes;
determining a target profile region by executing a mark target instruction, comprising:
selecting, through the user input device, a profile-plane projection view containing a cross-section of the target object; on the projection view, directly framing the cross-sectional area of the target object through the user input device, or outlining its approximate contour with line segments, or determining the position of the target object by marking points, symbols, solid circles or other shapes inside the cross-sectional area, and/or drawing cross marks outside the cross-sectional area to mark an excluded spatial range.
An image segmentation processing program is executed on the area marked as the target object's cross-section in the sectional view, based on the marked position and the spatial continuity of the target object's data points, to acquire the cross-sectional contour map of the target object and the three-dimensional region it occupies in the three-dimensional space body. For the specific segmentation procedure, refer to the image segmentation process described in step S6 of the method embodiment.
And performing highlighting processing on a three-dimensional area occupied by the target object so as to highlight the target object.
The highlight display processing includes: highlighting the target object pixels, highlighting the target object outline, and/or adding text marks.
And selecting various visual angles through user input equipment, and performing picture capturing and storing operations on images under the visual angles.
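The breakpoint rule used in the operations above (breakpoints placed one sliding step apart, stopping once the remainder of the sliding section is no more than one step) can be sketched as follows. This is a hedged reading of the text: whether the starting point itself counts as a breakpoint is left open in the description, and here it is excluded; the function name is an assumption.

```python
def breakpoints(start, end, step):
    """Place breakpoints along a sliding section [start, end] at intervals
    of `step`, stopping once the remaining length is <= one step.
    The start point itself is not emitted as a breakpoint."""
    points, pos = [], start
    while (end - pos) > step:
        pos += step
        points.append(pos)
    return points

print(breakpoints(0.0, 10.0, 3.0))  # [3.0, 6.0, 9.0]; remainder 1.0 <= step
```

A profile plane perpendicular to the profile axis would then be generated at each returned position.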
Using sectional views under a single visual angle, the method and device can position and mark the target object region directly on the sectional view, fit the three-dimensional contour of the target object by algorithm, and apply image emphasis processing. This reduces target searching and avoids repeated frame-selection operations, addressing the shortcomings of conventional methods: low automation, inaccurate target positioning and marking, low working efficiency, high demands on the operator's skill, complex operation and long time consumption. Meanwhile, gross errors caused by inaccurate target selection are fundamentally eliminated, the quality of target marking is effectively guaranteed, and reliable training data are provided for intelligent identification of dangerous articles.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the embodiments of the present invention, not to limit them. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the technical solutions described therein may still be modified, or some technical features equivalently replaced; such modifications and substitutions do not depart in substance from the spirit and scope of the embodiments of the present invention, and any changes or substitutions readily conceivable by those skilled in the art within the technical scope of the present invention shall fall within the scope of the present invention.
Claims (7)
1. A target object positioning and labeling method based on a sliding profile is characterized by comprising the following steps:
performing CT scanning on an article comprising a target object to obtain three-dimensional data of the article;
performing volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body visual image with a changeable visual angle; the projection mode adopted by the three-dimensional image comprises perspective projection and parallel projection;
determining a profile axis on the three-dimensional space body, and a sliding section position and a sliding step length based on the profile axis, comprising: selecting one of three-dimensional axes of the three-dimensional space body as a section axis;
arranging at least one sliding section on the section axis, wherein the sliding section comprises a starting point, an end point and a sliding step length; setting equal or unequal sliding step lengths for different sliding sections according to the required profile-plane spacing density;
generating a breakpoint based on the sliding step length; generating a profile on the three-dimensional volume based on the breakpoint, including:
sequentially generating breakpoints from the starting point toward the end point of the sliding section at intervals of one sliding step length, and stopping adding breakpoints once the remaining length of the sliding section is not more than one sliding step length;
starting from each breakpoint, generating a profile plane perpendicular to the profile axis on the three-dimensional space body;
performing reverse coloring processing on the profile plane at the selected breakpoint, on the corresponding layer in the three-dimensional space body, and simultaneously presenting a projection view of the profile plane at the selected breakpoint; the projection view comprises an enlarged image produced by plane-perpendicular, parallel-beam or fan-beam projection; the projection view is associated with the breakpoint and with the breakpoint's corresponding layer in the three-dimensional space body, so that the position of the profile plane can be deduced back from the projection view;
marking a section area of the target object on a section surface containing a section of the target object; the method comprises the following steps:
selecting a profile-plane projection view containing a cross-section of the target object; framing the cross-sectional area of the target object, or outlining its approximate contour with line segments, or marking the position of the target object inside the cross-sectional area, and/or marking an excluded spatial range outside the cross-sectional area of the target object;
executing an image segmentation algorithm on the section area of the target object, and acquiring a three-dimensional area occupied by the target object from the three-dimensional space body;
and performing highlight display processing on the three-dimensional area occupied by the target object, and performing picture interception and storage on the display image to obtain the positioned and labeled target object image.
2. The sliding-profile-based target object positioning and labeling method of claim 1, wherein the three-dimensional data comprises: monoenergetic three-dimensional data, high-energy three-dimensional data, low-energy three-dimensional data, electron density three-dimensional data, and/or equivalent atomic number three-dimensional data.
3. The method according to claim 1, wherein executing the image segmentation algorithm to acquire the three-dimensional region occupied by the target object from the three-dimensional space body comprises:
acquiring a contour map of a section area of the target object on the section plane through the image segmentation algorithm, and acquiring a three-dimensional area occupied by the target object in the three-dimensional space body based on the contour map; or,
directly acquiring the three-dimensional region occupied by the target object through the image segmentation algorithm, based on the three-dimensional space body and the cross-sectional area of the target object;
the image segmentation algorithm comprises a region growing algorithm, an active contour algorithm or a graph segmentation algorithm.
4. The method for positioning and labeling the target object based on the sliding section according to claim 3, wherein the highlighting process for the three-dimensional area occupied by the target object comprises: highlighting target object pixels, highlighting target object contours, and/or adding text labels.
5. An apparatus for implementing the sliding profile-based target object positioning and labeling method of any one of claims 1 to 4, comprising a data processor, a memory, a user input device, a CT security inspection component and a display component; wherein,
the data processor is operable to read data and computer program instructions stored in the memory, and manipulation instructions from the user input device, for performing the following operations:
controlling a CT security inspection component to scan packaged articles to obtain tomographic data of a scanning space, and fusing the tomographic data according to three-dimensional spatial position to construct three-dimensional data;
performing volume rendering and three-dimensional rendering processing on the three-dimensional data to obtain a three-dimensional space body;
displaying a visual map of a three-dimensional volume of variable perspective on the display assembly;
registering a DR image with a particular directional visual projection view of the three-dimensional volume of space;
drawing the section axis on the three-dimensional space body, and setting the sliding section position and the sliding step length based on the section axis, including: selecting one of three-dimensional axes of the three-dimensional space body as a section axis;
arranging at least one sliding section on the section axis, wherein the sliding section comprises a starting point, an end point and a sliding step length;
generating a breakpoint based on the sliding step length; generating a profile on the three-dimensional volume based on the breakpoint, including:
sequentially generating breakpoints from the starting point toward the end point of the sliding section at intervals of one sliding step length, and stopping adding breakpoints once the remaining length of the sliding section is not more than one sliding step length;
starting from each breakpoint, generating a profile plane perpendicular to the profile axis on the three-dimensional space body;
carrying out reverse coloring treatment on the corresponding image layer of the profile surface at the selected breakpoint in the three-dimensional space body;
presenting a projection view of the profile plane at the selected breakpoint on the display assembly; the projection view comprises an enlarged image produced by plane-perpendicular, parallel-beam, fan-beam or other projection modes;
determining a section area of the target object by executing the instruction for marking the target object;
executing an image segmentation processing program on the section area of the target object to obtain a section profile of the target object and a three-dimensional area occupied by the target object in the three-dimensional space body;
performing highlight display processing on a three-dimensional area occupied by the target object;
and carrying out picture interception and storage operations on the display image.
6. The apparatus of claim 5, wherein the execute mark target instruction comprises:
selecting a profile-plane projection view containing a cross-section of the target object; on the projection view, framing the cross-sectional area of the target object, or outlining its approximate contour with line segments, or marking the position of the target object inside the cross-sectional area, and/or marking an excluded spatial range outside the cross-sectional area.
7. The apparatus of claim 6, wherein the highlighting of the three-dimensional area occupied by the object comprises: highlighting target object pixels, highlighting target object contours, and/or adding text labels.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110349783.4A CN112950664B (en) | 2021-03-31 | 2021-03-31 | Target object positioning and labeling method and device based on sliding profile |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112950664A CN112950664A (en) | 2021-06-11 |
CN112950664B true CN112950664B (en) | 2023-04-07 |
Family
ID=76231593
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110349783.4A Active CN112950664B (en) | 2021-03-31 | 2021-03-31 | Target object positioning and labeling method and device based on sliding profile |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112950664B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116453063B (en) * | 2023-06-12 | 2023-09-05 | 中广核贝谷科技有限公司 | Target detection and recognition method and system based on fusion of DR image and projection image |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105785462A (en) * | 2014-06-25 | 2016-07-20 | 同方威视技术股份有限公司 | Method for locating target in three-dimensional CT image and security check CT system |
CN109975335A (en) * | 2019-03-07 | 2019-07-05 | 北京航星机器制造有限公司 | A kind of CT detection method and device |
CN112288888A (en) * | 2020-10-26 | 2021-01-29 | 公安部第一研究所 | Method and device for labeling target object in three-dimensional CT image |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111340742B (en) * | 2018-12-18 | 2024-03-08 | 深圳迈瑞生物医疗电子股份有限公司 | Ultrasonic imaging method and equipment and storage medium |
CN111161129B (en) * | 2019-11-25 | 2021-05-25 | 佛山欧神诺云商科技有限公司 | Three-dimensional interaction design method and system for two-dimensional image |
Also Published As
Publication number | Publication date |
---|---|
CN112950664A (en) | 2021-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10950026B2 (en) | Systems and methods for displaying a medical image | |
RU2599277C1 (en) | Computed tomography system for inspection and corresponding method | |
US20090309874A1 (en) | Method for Display of Pre-Rendered Computer Aided Diagnosis Results | |
CN107405126B (en) | Retrieving corresponding structures of pairs of medical images | |
US8363048B2 (en) | Methods and apparatus for visualizing data | |
CN102395318B (en) | Diagnosis support apparatus and diagnosis support method | |
Mori et al. | Automated extraction and visualization of bronchus from 3D CT images of lung | |
US20130002646A1 (en) | Method and system for navigating, segmenting, and extracting a three-dimensional image | |
US8041094B2 (en) | Method for the three-dimensional viewing of tomosynthesis images in mammography | |
CN108717700B (en) | Method and device for detecting length of long diameter and short diameter of nodule | |
US20130050208A1 (en) | Method and system for navigating, segmenting, and extracting a three-dimensional image | |
CN103456002A (en) | Methods and system for displaying segmented images | |
CN101259026A (en) | Method and apparatus for tracking points in an ultrasound image | |
US20090219289A1 (en) | Fast three-dimensional visualization of object volumes without image reconstruction by direct display of acquired sensor data | |
EP3112852B1 (en) | Method for positioning target in three-dimensional ct image and security check system | |
CN112950664B (en) | Target object positioning and labeling method and device based on sliding profile | |
CN110993067A (en) | Medical image labeling system | |
US20130050207A1 (en) | Method and system for navigating, segmenting, and extracting a three-dimensional image | |
CN112907670B (en) | Target object positioning and labeling method and device based on profile | |
CN111803128A (en) | Mammary tissue elastography method, device, equipment and medium | |
Sveinsson et al. | ARmedViewer, an augmented-reality-based fast 3D reslicer for medical image data on mobile devices: A feasibility study | |
JPH0981786A (en) | Three-dimensional image processing method | |
CN116188385A (en) | Target object stripping method and device in three-dimensional CT image and security inspection CT system | |
US6445762B1 (en) | Methods and apparatus for defining regions of interest | |
CN113643361A (en) | Target area positioning method, apparatus, device, medium, and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |