CN114612467A - Target object marking method and system for a three-dimensional CT image


Info

Publication number
CN114612467A
Authority
CN
China
Prior art keywords
dimensional
area
point
image
intersection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210406748.6A
Other languages
Chinese (zh)
Inventor
郝世昱
李斌
孔维武
李治国
李东
李永请
陈力
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
First Research Institute of Ministry of Public Security
Beijing Zhongdun Anmin Analysis Technology Co Ltd
Original Assignee
First Research Institute of Ministry of Public Security
Beijing Zhongdun Anmin Analysis Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by First Research Institute of Ministry of Public Security and Beijing Zhongdun Anmin Analysis Technology Co Ltd
Priority to CN202210406748.6A
Publication of CN114612467A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G06F 3/1407 - General aspects irrespective of display type, e.g. determination of decimal point position, display with fixed or driving decimal point, suppression of non-significant zeros
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/60 - Analysis of geometric attributes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image
    • G06T 2207/10012 - Stereo images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10072 - Tomographic images
    • G06T 2207/10081 - Computed x-ray tomography [CT]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Geometry (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Analysing Materials By The Use Of Radiation (AREA)

Abstract

The invention discloses a target object marking method and system for a three-dimensional CT image. The method comprises the following steps: according to the three-dimensional CT image of an inspected article, acquiring the two-dimensional region frame-selected by a security inspector under one viewing angle of the three-dimensional CT image, together with the dangerous-goods category of the target object; acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on a preset projection angle and direction; acquiring a plurality of intersection points of the three-dimensional region and the inspected article; drawing a wireframe marker from the intersection points to indicate the region where the target object is located; and displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category. The method makes full use of the parallel capability of graphics hardware and requires no preprocessing, advance recognition, or depth testing, so the target object is labeled with minimal algorithm complexity and time overhead; a dangerous article can be marked in a single operation, without depending on any recognition result, and the whole labeling process is more user-friendly.

Description

Target object marking method and system for a three-dimensional CT image
Technical Field
The invention relates to a target object marking method for a three-dimensional CT image, and to a corresponding target object marking system, belonging to the technical field of security inspection.
Background
CT (Computed Tomography) technology is widely used in the medical, industrial, military, geological, and other fields because of its great advantages in material discrimination. As the security-screening accuracy requirements of airports at home and abroad keep rising, CT security inspection equipment is gradually replacing traditional transmission (perspective) screening equipment, and the three-dimensional CT image is gradually replacing the two-dimensional transmission image as the main basis on which security inspectors judge an image. A fast and accurate target labeling method for three-dimensional CT images is therefore particularly important.
Existing target labeling methods mark the target object mainly through a single frame selection or two successive frame selections in the image. Single-frame-selection labeling mostly relies on data preprocessing: the objects inside the package are recognized and classified in advance, and the object the security inspector intends to mark is identified after the frame selection.
Chinese patent No. ZL 201410795060.7 discloses a method for locating a target object in a three-dimensional CT image and a security-check CT system. The method comprises: displaying a three-dimensional CT image; receiving a user's selection of at least one region of the three-dimensional CT image at a viewing angle; generating a set of at least one three-dimensional object in the depth direction based on the selection; and determining a target object from the set. With this technical scheme, a user can conveniently and rapidly mark a suspect object in a CT image.
However, that technical scheme has the following disadvantages: 1. point cloud information of the object's outer surface must be extracted and formed into a point-cloud cluster sequence, so the computation in three-dimensional space is time-consuming; 2. judging the selected region against a preset standard introduces uncontrollable factors, and changes in package contents may shift that standard under different use environments, so the object the interpreter intends to mark cannot always be selected accurately and there is a risk of missed labels; 3. the marking result does not conform to the quadrangular-frustum shape of the projection model and cannot accurately express the interpreter's marking intent.
Disclosure of Invention
The primary object of the present invention is to provide a target object marking method for a three-dimensional CT image.
Another object of the present invention is to provide a target object marking system for a three-dimensional CT image.
In order to achieve the technical purpose, the invention adopts the following technical scheme:
according to a first aspect of the embodiments of the present invention, there is provided a method for labeling a target in a three-dimensional CT image, including the steps of:
s1, obtaining a two-dimensional area selected by a security inspector under one visual angle of the three-dimensional CT image and the dangerous goods category of a target object according to the three-dimensional CT image drawn by the three-dimensional CT fault data of the detected goods;
step S2, acquiring a three-dimensional area corresponding to the two-dimensional area according to a perspective projection principle based on a preset projection angle and direction;
step S3, acquiring a plurality of intersection points of the three-dimensional area and the detected article;
step S4, drawing a frame mark body according to the plurality of intersection points for representing the area of the target object;
and step S5, displaying a corresponding category prompt on the frame mark body according to the category of the dangerous goods.
Preferably, step S1 comprises the following sub-steps:
step S11, obtaining the three-dimensional CT tomographic data of the inspected article at a certain moment, and rendering the corresponding three-dimensional CT image with a ray casting algorithm;
and step S12, receiving the two-dimensional region frame-selected by the security inspector on the screen under one viewing angle of the three-dimensional CT image, and acquiring the dangerous-goods category of the target object as judged by the security inspector.
Preferably, step S2 comprises the following sub-steps:
step S21, for the two-dimensional region frame-selected by the security inspector on the screen, classifying the three-dimensional CT tomographic data according to the coordinate transformation result of the next refresh-and-redraw operation of the three-dimensional CT image;
and step S22, marking each point that falls in the two-dimensional region during drawing as a pending point, all pending points forming the three-dimensional region.
Preferably, step S11 further comprises the following steps:
taking a preselected point on the screen as the center and a set length as the radius, forming a circular area as the candidate point region of the preselected point;
for the candidate point region of the preselected point, when the corresponding three-dimensional CT image is rendered with the ray casting algorithm, taking all points that fall in the candidate point region after the coordinate transformation as candidate points;
calculating a coordinate mean value based on the on-screen position coordinates of all candidate points in the candidate point region;
and performing position compensation on the on-screen position coordinates of the preselected point with the coordinate mean value, wherein the two-dimensional region is enclosed by a plurality of preselected points.
Preferably, step S3 comprises the following sub-steps:
step S31, taking the projection line emitted from a preselected point at the preset projection angle and direction as an edge projection line of the three-dimensional region;
step S32, acquiring the incident intersection point where the edge projection line meets the inspected article as it enters the article;
step S33, acquiring the exit intersection point where the edge projection line meets the inspected article as it leaves the article;
and repeating steps S31 to S33 until the incident and exit intersection points of all edge projection lines with the inspected article have been acquired.
Preferably, step S4 comprises the following sub-steps:
connecting the incident intersection point and the exit intersection point of the same edge projection line with the inspected article;
connecting the incident intersection points where two adjacent edge projection lines respectively intersect the inspected article;
and connecting the exit intersection points where two adjacent edge projection lines respectively intersect the inspected article, thereby drawing the wireframe marker.
Preferably, for the candidate point region of each preselected point, an incident intersection candidate region and an exit intersection candidate region corresponding to the inspected article are obtained in the three-dimensional region;
the position coordinates of the incident intersection point in the three-dimensional region are determined from the coordinate mean value of the incident intersection candidate region;
and the position coordinates of the exit intersection point in the three-dimensional region are determined from the coordinate mean value of the exit intersection candidate region.
Preferably, the coordinate mean value of the incident intersection candidate region is replaced by a weighted mean value or an extreme value of the incident intersection candidate region;
and the coordinate mean value of the exit intersection candidate region is replaced by a weighted mean value or an extreme value of the exit intersection candidate region.
Preferably, step S2 is replaced by:
acquiring the three-dimensional region corresponding to the two-dimensional region according to the orthographic projection principle, based on a preset projection angle and direction.
According to a second aspect of the embodiments of the present invention, there is provided a target object marking system for three-dimensional CT images, comprising a processor and a memory, the processor reading a computer program or instructions in the memory to perform the following operations:
step S1, according to a three-dimensional CT image rendered from the three-dimensional CT tomographic data of an inspected article, acquiring the two-dimensional region frame-selected by a security inspector under one viewing angle of the three-dimensional CT image and the dangerous-goods category of the target object;
step S2, acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on a preset projection angle and direction;
step S3, acquiring a plurality of intersection points of the three-dimensional region and the inspected article;
step S4, drawing a wireframe marker according to the plurality of intersection points to indicate the region where the target object is located;
and step S5, displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category.
Compared with the prior art, the target object marking method and system for three-dimensional CT images provided by the invention determine the three-dimensional region where the target object is located during the refresh-and-redraw of the three-dimensional CT image by the ray casting algorithm. The method can therefore make full use of the parallel capability of graphics hardware and requires no preprocessing, advance recognition, or depth testing, completing the target object labeling with minimal algorithm complexity and time overhead. Compared with traditional methods, the time efficiency is greatly improved: a dangerous article can be marked with a single frame-selection operation, the result does not depend on any recognition outcome, the labeling success rate reaches one hundred percent, and the whole labeling process is more user-friendly.
Drawings
FIG. 1 is a flowchart of a target object marking method for a three-dimensional CT image according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the three-dimensional imaging algorithm in the target object marking method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the perspective projection principle in the target object marking method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of determining a candidate point region in the target object marking method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of determining the three-dimensional region for a two-dimensional region in the target object marking method according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of determining the incident and exit intersection points in the target object marking method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of determining the position coordinates of the incident and exit intersection points in the three-dimensional region in the target object marking method according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the wireframe marker drawn in the target object marking method according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of dangerous-goods category labeling in the target object marking method according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a target object marking system for three-dimensional CT images according to an embodiment of the present invention.
Detailed Description
The technical content of the invention is described in detail below with reference to the accompanying drawings and specific embodiments.
To label a target object in a three-dimensional CT image quickly and accurately with minimal algorithm complexity and time overhead, and to make the whole labeling process more user-friendly, an embodiment of the present invention provides a target object marking method for a three-dimensional CT image, as shown in FIG. 1, comprising the following steps.
and step S1, acquiring the two-dimensional area selected by the security inspector and the dangerous goods category of the target object under one visual angle of the three-dimensional CT image according to the three-dimensional CT image drawn by the three-dimensional CT fault data of the detected goods.
Specifically, the method comprises the steps of S11-S12:
and step S11, obtaining three-dimensional CT fault data of the detected article at a certain moment, and drawing a corresponding three-dimensional CT image by adopting a ray projection algorithm.
At a security check site, when the articles carried by a person under inspection are scanned with CT security inspection equipment, the X-rays transmitted through the inspected article (such as a suitcase) are detected to obtain an analog signal, which is converted into a digital signal, and the three-dimensional CT tomographic data are output to a data processing device. The data processing device renders the corresponding three-dimensional CT image from the tomographic data received at a given moment, using the ray casting algorithm of image-space volume rendering. The data processing device may be a desktop computer, a notebook computer, or the like.
Taking a desktop computer as the data processing device: when the computer receives the three-dimensional CT tomographic data output by the CT security inspection equipment, it renders the image according to the imaging principle of image-space volume rendering, as shown in FIG. 2. Starting from each pixel on the computer screen, a ray is emitted along the viewing direction. Where the ray intersects the three-dimensional CT tomographic data, the intersecting portion is sampled at equal intervals, and the color value and opacity of each sample point are calculated by interpolation. The sample points along the ray are then composited in front-to-back or back-to-front order to obtain the color value of the screen pixel corresponding to the ray, yielding the color of the three-dimensional CT image finally presented on the screen.
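For illustration only, the following minimal Python sketch shows the front-to-back compositing just described: equidistant samples along one ray are accumulated until the ray is nearly opaque. The sampling function sample_rgba (standing in for trilinear interpolation plus a transfer function) and all other names are assumptions for this sketch, not part of the patent.

    import numpy as np

    def cast_ray(origin, direction, t_near, t_far, sample_rgba, step=0.5):
        """Composite equidistant samples front-to-back along one viewing ray."""
        color = np.zeros(3)
        alpha = 0.0
        t = t_near
        while t <= t_far and alpha < 0.99:        # stop early once nearly opaque
            p = origin + t * direction            # equidistant sample point
            r, g, b, a = sample_rgba(p)           # interpolated color and opacity
            weight = (1.0 - alpha) * a            # transparency remaining in front
            color += weight * np.array([r, g, b])
            alpha += weight
            t += step
        return color, alpha                       # final pixel color and coverage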
The three-dimensional CT image is refreshed and redrawn dozens of times per second (typically 30, 60, or more) to respond to the security inspector's operations. In each refresh-and-redraw operation, the three-dimensional CT tomographic data undergo a series of coordinate transformations according to their current spatial position and are finally mapped to the corresponding screen pixels.
Further, step S11 includes the steps of:
and step S111, taking a preselected point on the screen as a circle center, and taking a set length as a radius to form a circular area as a candidate point area of the preselected point.
Since the point of the three-dimensional CT tomographic data falling on the screen after a series of coordinate transformations may not be precisely intersected with the preselected point during the framing, as shown in fig. 4, in order to ensure that the closest point is obtained, a neighborhood (i.e., alternative point region) of the preselected point may be selected, as shown by a circle in fig. 4.
Step S112, for the candidate point region of the preselected point, when the corresponding three-dimensional CT image is rendered with the ray casting algorithm, take all points that fall in the candidate point region after the coordinate transformation as candidate points.
That is, all points falling inside the circle after the transformation are marked as candidate points: the set of black-filled points in FIG. 4.
Step S113, calculate a coordinate mean value based on the on-screen position coordinates of all candidate points in the candidate point region.
In this embodiment the coordinate mean of all candidate points in the candidate point region is computed by simple averaging; in other embodiments a weighted mean or an extreme value may be used instead.
Step S114, perform position compensation on the on-screen position coordinates of the preselected point with the coordinate mean value.
Comparing the calculated coordinate mean with the on-screen position coordinates of the preselected point allows the preselected point to be position-compensated: its coordinates are finely adjusted so that it corresponds to an actual on-screen point of the three-dimensional CT tomographic data after the series of coordinate transformations. The two-dimensional region in this embodiment is enclosed by a plurality of such preselected points.
Thus, while the three-dimensional CT image is being rendered, position compensation of the preselected points is achieved through steps S111 to S114, improving the accuracy of the correspondence between the two-dimensional region and the three-dimensional region (as shown in FIG. 5).
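As a concrete illustration of steps S111 to S114, the sketch below snaps a preselected point to the coordinate mean of the candidate points found inside its circular candidate point region. Here projected_points is assumed to hold the screen coordinates of the CT data points produced by the current redraw's coordinate transformation, and the radius value is arbitrary; these names do not come from the patent.

    import numpy as np

    def compensate_preselected_point(preselected, projected_points, radius=5.0):
        """Position-compensate a preselected screen point (steps S111-S114)."""
        preselected = np.asarray(preselected, dtype=float)
        d = np.linalg.norm(projected_points - preselected, axis=1)
        candidates = projected_points[d <= radius]  # points inside the circle (S112)
        if len(candidates) == 0:
            return preselected                      # nothing nearby: keep the click
        return candidates.mean(axis=0)              # coordinate mean (S113) used as
                                                    # the compensated position (S114)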
Step S12, receiving the two-dimensional region frame-selected by the security inspector on the screen under one viewing angle of the three-dimensional CT image, and acquiring the dangerous-goods category of the target object as judged by the security inspector.
How rays are emitted from each pixel on the screen depends on the projection mode. The invention adopts perspective projection, as shown in FIG. 3. Perspective projection is the projection mode that best matches how the human eye observes: the region the eye can observe is defined as a view frustum, and the range between the near plane and the far plane is the visible range.
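To make the projection setup concrete, the sketch below unprojects a screen pixel into a world-space ray through the view frustum, which is one way a per-pixel projection line can be generated under perspective projection. It assumes an OpenGL-style clip space and that the inverse view-projection matrix is available from the renderer; all names are illustrative, not taken from the patent.

    import numpy as np

    def screen_point_to_ray(px, py, width, height, inv_view_proj):
        """Return (origin, direction) of the viewing ray through pixel (px, py)."""
        # Map the pixel to normalized device coordinates in [-1, 1]
        x = 2.0 * px / width - 1.0
        y = 1.0 - 2.0 * py / height
        near = inv_view_proj @ np.array([x, y, -1.0, 1.0])  # near-plane point
        far = inv_view_proj @ np.array([x, y, 1.0, 1.0])    # far-plane point
        near = near[:3] / near[3]                           # perspective divide
        far = far[:3] / far[3]
        direction = far - near
        return near, direction / np.linalg.norm(direction)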
The three-dimensional CT image of the inspected article displayed on the screen contains several regions of different colors, each color representing a different material type. In security inspection equipment, inorganic substances such as knife metal are generally shown in blue; orange represents organic matter such as melons, fruit, dried fruit, dairy products, and livestock products; green represents mixtures; and black appears where objects cannot be penetrated, mostly heavy metals and thick objects.
The security inspector frames a two-dimensional region in the three-dimensional CT image displayed on the screen according to the established criteria for judging target objects. As shown in FIG. 5, the inspector may drag out a rectangular region with the mouse as the two-dimensional region corresponding to the target object. The rectangular region in this embodiment is enclosed by four preselected points: during frame selection, one preselected point is placed with a click toward the upper left of the screen (this point undergoes automatic coordinate compensation), a second is placed toward the lower right where the mouse is released, and the other two are determined automatically from the rectangle, thereby forming the rectangular two-dimensional region.
The target object in the embodiments of the invention is generally a dangerous article that endangers public safety. After frame-selecting the two-dimensional region, the security inspector also determines the dangerous-goods category of the target object according to the established judging criteria, for example: the target object is a knife, a gun, and so on.
Step S2, acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on the preset projection angle and direction.
Specifically, this step comprises sub-steps S21 to S22.
Step S21, for the two-dimensional region frame-selected by the security inspector on the screen, classifying the three-dimensional CT tomographic data according to the coordinate transformation result of the next refresh-and-redraw of the three-dimensional CT image.
The data processing device receives the two-dimensional region frame-selected by the security inspector in the three-dimensional CT image, and classifies the three-dimensional CT tomographic data according to the coordinate transformation result when the image is next refreshed and redrawn.
Step S22, marking each point that falls in the two-dimensional region during drawing as a pending point, all pending points forming the three-dimensional region.
As shown in FIG. 5, every point that falls inside the two-dimensional region is marked as a pending point; that is, the pending points that contribute to the two-dimensional region are screened out, and together they form the three-dimensional region.
If the three-dimensional region were instead determined by reverse calculation from the two-dimensional region, the coordinate transformations would have to be inverted, incurring high time overhead and algorithm complexity; piggybacking on the forward transform of the next redraw avoids this.
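The forward, redraw-time classification of steps S21 and S22 can be pictured with the following minimal sketch: each volume point is pushed through the same point-to-screen transform used for drawing, and is kept as a pending point if it lands inside the framed rectangle, so no inverse transform is ever computed. All names here are illustrative assumptions.

    import numpy as np

    def collect_pending_points(volume_points, to_screen, rect):
        """Return the volume points whose screen projections fall inside rect."""
        x0, y0, x1, y1 = rect                  # the frame-selected 2D region
        pending = []
        for p in volume_points:                # CT data points in world space
            sx, sy = to_screen(p)              # same transform as the redraw uses
            if x0 <= sx <= x1 and y0 <= sy <= y1:
                pending.append(p)              # contributes to the 3D region (S22)
        return np.asarray(pending)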
Furthermore, in another embodiment, step S2 may be replaced by:
acquiring the three-dimensional region corresponding to the two-dimensional region according to the orthographic projection principle, based on a preset projection angle and direction. The orthographic projection principle itself is common knowledge in the art and is not described further here.
Step S3, acquiring a plurality of intersection points of the three-dimensional region and the inspected article.
Specifically, this step comprises sub-steps S31 to S33.
and step S31, taking the projection line emitted from the preselected point according to the preset projection angle and direction as the edge projection line of the three-dimensional area.
Specifically, a preselected point selected by a security inspector when the security inspector selects a two-dimensional area is used as an emission point, a projection line is emitted according to a preset projection angle and direction, the position of the projection line is the edge of the three-dimensional area, and the projection line is defined as an edge projection line.
Step S32, acquiring the incident intersection point where the edge projection line meets the inspected article as it enters the article.
When the edge projection line enters the inspected article along its direction of incidence (the direction of the arrow in FIG. 6), the point where it intersects the article is defined as the incident intersection point (e.g. point 1 in FIG. 6).
Step S33, acquiring the exit intersection point where the edge projection line meets the inspected article as it leaves the article.
When the edge projection line leaves the inspected article along its direction of incidence (the direction of the arrow in FIG. 6), the point where it intersects the article is defined as the exit intersection point (e.g. point 5 in FIG. 6).
Steps S31 to S33 are repeated until the incident and exit intersection points of all edge projection lines with the inspected article have been acquired. As shown in FIG. 6, because the frame-selected two-dimensional region in this embodiment is rectangular, there are four edge projection lines and accordingly four incident intersection points (points 1, 2, 3, and 4 in FIG. 6) and four exit intersection points (points 5, 6, 7, and 8 in FIG. 6).
It will be appreciated that if the frame-selected two-dimensional region has another shape (triangular, circular, elliptical, and so on), the numbers of incident and exit intersection points change accordingly. For example, in one embodiment the two-dimensional region is triangular and there are three edge projection lines, so there are three incident and three exit intersection points. In another embodiment the two-dimensional region is a circle: a projection line emitted from any point on the circumference can serve as an edge projection line, the number of edge projection lines can be chosen as required, and the numbers of incident and exit intersection points equal the number of edge projection lines.
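A minimal sketch of steps S31 to S33 under stated assumptions: one edge projection line is marched at equal steps, and the first and last samples that fall inside the inspected article give the incident and exit intersection points. The occupancy test is_inside (for example, a density threshold on the CT volume) is an assumption for illustration; origin and direction are numpy 3-vectors.

    def entry_exit_points(origin, direction, t_near, t_far, is_inside, step=0.5):
        """Return (incident, exit) intersection points, or (None, None) on a miss."""
        incident = exit_pt = None
        t = t_near
        while t <= t_far:
            p = origin + t * direction
            if is_inside(p):                   # sample lies inside the article
                if incident is None:
                    incident = p               # first hit: incident intersection (S32)
                exit_pt = p                    # last hit so far: exit intersection (S33)
            t += step
        return incident, exit_pt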
In addition, as shown in FIG. 7, in one embodiment each preselected point corresponds to one candidate point region, and all candidate points in that region emit projection lines at the preset projection angle and direction, so that an incident intersection candidate region and an exit intersection candidate region corresponding to the inspected article can be obtained in the three-dimensional region. The coordinate mean of the incident intersection candidate region, computed by simple averaging, can serve as the position coordinates of the incident intersection point in the three-dimensional region; likewise, the coordinate mean of the exit intersection candidate region can serve as the position coordinates of the exit intersection point.
In another embodiment, the simple average may be replaced by a weighted mean or an extreme value when determining the position coordinates of the incident and exit intersection points in the three-dimensional region.
Step S4, drawing a wireframe marker according to the plurality of intersection points to indicate the region where the target object is located.
Specifically, this step comprises sub-steps S41 to S43.
S41: connecting the incident intersection point and the exit intersection point of the same edge projection line with the inspected article.
As shown in FIG. 8, incident intersection 1 is connected to exit intersection 5, where the same edge projection line intersects the inspected article; similarly, incident intersection 2 is connected to exit intersection 6, incident intersection 3 to exit intersection 7, and incident intersection 4 to exit intersection 8.
S42: connecting the incident intersection points where two adjacent edge projection lines respectively intersect the inspected article.
As shown in FIG. 8, incident intersections 1 and 2, where two adjacent edge projection lines intersect the inspected article, are connected; similarly, incident intersections 2 and 3, 3 and 4, and 4 and 1 are connected.
S43: connecting the exit intersection points where two adjacent edge projection lines respectively intersect the inspected article.
As shown in FIG. 8, exit intersections 5 and 6, where two adjacent edge projection lines intersect the inspected article, are connected; similarly, exit intersections 6 and 7, 7 and 8, and 8 and 5 are connected. A wireframe marker (a quadrangular frustum) is thus drawn to indicate the region of the target object (i.e. the dangerous article) within the inspected article.
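For illustration, the sketch below assembles the twelve edges of the quadrangular-frustum wireframe from the four incident intersections (points 1 to 4) and the four exit intersections (points 5 to 8), in exactly the connection order of S41 to S43. The function and argument names are assumptions for this sketch.

    def frustum_edges(incident, exit_pts):
        """Return the 12 edges of the wireframe marker as pairs of 3D points."""
        edges = []
        for i in range(4):
            edges.append((incident[i], exit_pts[i]))            # S41: lateral edges
        for i in range(4):
            edges.append((incident[i], incident[(i + 1) % 4]))  # S42: entry face
            edges.append((exit_pts[i], exit_pts[(i + 1) % 4]))  # S43: exit face
        return edges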
Step S5, displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category.
Specifically, after the wireframe marker has been drawn in step S4, a corresponding category prompt (such as icon 101 in FIG. 9) is displayed on the marker according to the dangerous-goods category judged in advance by the security inspector (i.e. whether the target object is a gun, a knife, and so on). Icon 101 exists in the three-dimensional space, rotates with the inspected article, and always stays attached to the wireframe marker.
It can be understood that, in this embodiment, the security inspector determines the wireframe marker with only a single frame selection on the screen, and the corresponding category prompt is shown on the marker according to the dangerous-goods type the inspector has judged. If, after frame selection, the inspector finds that the resulting wireframe marker does not enclose the target object, the selection can simply be cancelled and a new two-dimensional region selected on the screen.
As shown in FIG. 10, on the basis of the above target object marking method, an embodiment of the present invention further provides a target object marking system for three-dimensional CT images, comprising a processor 11 and a memory 12, and optionally, according to actual needs, a communication component, a sensor component, a power component, a multimedia component, and an input/output interface. The memory, communication component, sensor component, power component, multimedia component, and input/output interface are all connected to the processor 11. The memory 12 may be static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, and so on; the processor 11 may be a central processing unit (CPU), graphics processing unit (GPU), field-programmable gate array (FPGA), application-specific integrated circuit (ASIC), digital signal processing (DSP) chip, and so on. The other communication, sensor, power, and multimedia components may be implemented with components commonly found in existing smartphones and are not described in detail here.
The target object marking system for three-dimensional CT images provided by the embodiment of the present invention thus comprises a processor 11 and a memory 12, the processor 11 reading the computer program or instructions in the memory 12 to perform the following operations:
step S1, according to a three-dimensional CT image rendered from the three-dimensional CT tomographic data of an inspected article, acquiring the two-dimensional region frame-selected by a security inspector under one viewing angle of the three-dimensional CT image and the dangerous-goods category of the target object;
step S2, acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on a preset projection angle and direction;
step S3, acquiring a plurality of intersection points of the three-dimensional region and the inspected article;
step S4, drawing a wireframe marker according to the plurality of intersection points to indicate the region where the target object is located;
and step S5, displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category.
Compared with the prior art, the target object marking method and system for three-dimensional CT images provided by the invention determine the three-dimensional region where the target object is located during the refresh-and-redraw of the three-dimensional CT image by the ray casting algorithm. The method can therefore make full use of the parallel capability of graphics hardware and requires no preprocessing, advance recognition, or depth testing, completing the target object labeling with minimal algorithm complexity and time overhead. Compared with traditional methods, the time efficiency is greatly improved: a dangerous article can be marked with a single frame-selection operation, the result does not depend on any recognition outcome, the labeling success rate reaches one hundred percent, and the whole labeling process is more user-friendly.
The target object marking method and system for a three-dimensional CT image provided by the present invention have been described in detail above. Any obvious modification made by those skilled in the art without departing from the spirit of the present invention constitutes an infringement of the patent right of the present invention, and the corresponding legal liability shall be borne.

Claims (10)

1. A target object marking method for a three-dimensional CT image, characterized by comprising the following steps:
step S1, according to a three-dimensional CT image rendered from the three-dimensional CT tomographic data of an inspected article, acquiring the two-dimensional region frame-selected by a security inspector under one viewing angle of the three-dimensional CT image and the dangerous-goods category of the target object;
step S2, acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on a preset projection angle and direction;
step S3, acquiring a plurality of intersection points of the three-dimensional region and the inspected article;
step S4, drawing a wireframe marker according to the plurality of intersection points to indicate the region where the target object is located;
and step S5, displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category.
2. The target object marking method according to claim 1, wherein step S1 comprises the following sub-steps:
step S11, obtaining the three-dimensional CT tomographic data of the inspected article at a certain moment, and rendering the corresponding three-dimensional CT image with a ray casting algorithm;
and step S12, receiving the two-dimensional region frame-selected by the security inspector on the screen under one viewing angle of the three-dimensional CT image, and acquiring the dangerous-goods category of the target object as judged by the security inspector.
3. The target object marking method according to claim 2, wherein step S2 comprises the following sub-steps:
step S21, for the two-dimensional region frame-selected by the security inspector on the screen, classifying the three-dimensional CT tomographic data according to the coordinate transformation result of the next refresh-and-redraw operation of the three-dimensional CT image;
and step S22, marking each point that falls in the two-dimensional region during drawing as a pending point, all pending points forming the three-dimensional region.
4. The target object marking method according to claim 2, wherein step S11 further comprises the following steps:
taking a preselected point on the screen as the center and a set length as the radius, forming a circular area as the candidate point region of the preselected point;
for the candidate point region of the preselected point, when the corresponding three-dimensional CT image is rendered with the ray casting algorithm, taking all points that fall in the candidate point region after the coordinate transformation as candidate points;
calculating a coordinate mean value based on the on-screen position coordinates of all candidate points in the candidate point region;
and performing position compensation on the on-screen position coordinates of the preselected point with the coordinate mean value, wherein the two-dimensional region is enclosed by a plurality of preselected points.
5. The target object marking method according to claim 4, wherein step S3 comprises the following sub-steps:
step S31, taking the projection line emitted from a preselected point at the preset projection angle and direction as an edge projection line of the three-dimensional region;
step S32, acquiring the incident intersection point where the edge projection line meets the inspected article as it enters the article;
step S33, acquiring the exit intersection point where the edge projection line meets the inspected article as it leaves the article;
and repeating steps S31 to S33 until the incident and exit intersection points of all edge projection lines with the inspected article have been acquired.
6. The target object marking method according to claim 5, wherein step S4 comprises the following sub-steps:
connecting the incident intersection point and the exit intersection point of the same edge projection line with the inspected article;
connecting the incident intersection points where two adjacent edge projection lines respectively intersect the inspected article;
and connecting the exit intersection points where two adjacent edge projection lines respectively intersect the inspected article, thereby drawing the wireframe marker.
7. The target object marking method according to claim 5, wherein:
for the candidate point region of each preselected point, an incident intersection candidate region and an exit intersection candidate region corresponding to the inspected article are obtained in the three-dimensional region;
the position coordinates of the incident intersection point in the three-dimensional region are determined from the coordinate mean value of the incident intersection candidate region;
and the position coordinates of the exit intersection point in the three-dimensional region are determined from the coordinate mean value of the exit intersection candidate region.
8. The target object marking method according to claim 7, wherein:
the coordinate mean value of the incident intersection candidate region is replaced by a weighted mean value or an extreme value of the incident intersection candidate region;
and the coordinate mean value of the exit intersection candidate region is replaced by a weighted mean value or an extreme value of the exit intersection candidate region.
9. The target object marking method according to claim 1, wherein step S2 is replaced by:
acquiring the three-dimensional region corresponding to the two-dimensional region according to the orthographic projection principle, based on a preset projection angle and direction.
10. A target object marking system for three-dimensional CT images, comprising a processor and a memory, the processor reading a computer program or instructions in the memory to perform the following operations:
step S1, according to a three-dimensional CT image rendered from the three-dimensional CT tomographic data of an inspected article, acquiring the two-dimensional region frame-selected by a security inspector under one viewing angle of the three-dimensional CT image and the dangerous-goods category of the target object;
step S2, acquiring the three-dimensional region corresponding to the two-dimensional region according to the perspective projection principle, based on a preset projection angle and direction;
step S3, acquiring a plurality of intersection points of the three-dimensional region and the inspected article;
step S4, drawing a wireframe marker according to the plurality of intersection points to indicate the region where the target object is located;
and step S5, displaying a corresponding category prompt on the wireframe marker according to the dangerous-goods category.
CN202210406748.6A 2022-04-18 2022-04-18 Target object marking method and system of three-dimensional CT image Pending CN114612467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210406748.6A CN114612467A (en) 2022-04-18 2022-04-18 Target object marking method and system of three-dimensional CT image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210406748.6A CN114612467A (en) 2022-04-18 2022-04-18 Target object marking method and system of three-dimensional CT image

Publications (1)

Publication Number Publication Date
CN114612467A true CN114612467A (en) 2022-06-10

Family

ID=81868857

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210406748.6A Pending CN114612467A (en) 2022-04-18 2022-04-18 Target object marking method and system of three-dimensional CT image

Country Status (1)

Country Link
CN (1) CN114612467A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116433476A (en) * 2023-06-09 2023-07-14 有方(合肥)医疗科技有限公司 CT image processing method and device
CN116433476B (en) * 2023-06-09 2023-09-08 有方(合肥)医疗科技有限公司 CT image processing method and device

Similar Documents

Publication Publication Date Title
US20090231327A1 (en) Method for visualization of point cloud data
CN105223212B (en) Safety check CT system and its method
Heinzel et al. Exploring full-waveform LiDAR parameters for tree species classification
Wang et al. Edge extraction by merging 3D point cloud and 2D image data
CN101943761B (en) Detection method of X-ray
CN107016373A (en) The detection method and device that a kind of safety cap is worn
EP3772722B1 (en) X-ray image processing system and method, and program therefor
US5424823A (en) System for identifying flat orthogonal objects using reflected energy signals
CN105678737B (en) A kind of digital picture angular-point detection method based on Radon transformation
CN106932414A (en) Inspection and quarantine inspection system and its method
CN109767431A (en) Accessory appearance defect inspection method, device, equipment and readable storage medium storing program for executing
CN114612467A (en) Target object marking method and system of three-dimensional CT image
CN112288888A (en) Method and device for labeling target object in three-dimensional CT image
CN104658034A (en) Fusion rendering method for CT (Computed Tomography) image data
Pont et al. Calibrated tree counting on remotely sensed images of planted forests
Korpela et al. The performance of a local maxima method for detecting individual tree tops in aerial photographs
CN112598682B (en) Three-dimensional CT image sectioning method and device based on any angle
CN111192246A (en) Automatic detection method of welding spot
US11972593B2 (en) System and methods for quantifying uncertainty of segmentation masks produced by machine learning models
US10782441B2 (en) Multiple three-dimensional (3-D) inspection renderings
US11605173B2 (en) Three-dimensional point cloud labeling using distance field data
JP6829778B2 (en) Object identification device and object identification method
CN103593667A (en) Rapid image foreign matter identification method based on set connectivity principle
Czúni et al. Color based clustering for trunk segmentation
US20150015576A1 (en) Object recognition and visualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination