CN115719358A - Method for extracting straight line segment in X-ray security inspection image - Google Patents


Info

Publication number: CN115719358A
Application number: CN202211476286.1A
Authority: CN (China)
Prior art keywords: edge, point, image, current, points
Legal status: Pending
Original language: Chinese (zh)
Inventors: 毛林, 任世龙, 孔维武, 李宏伟
Current assignee: First Research Institute of Ministry of Public Security; Beijing Zhongdun Anmin Analysis Technology Co Ltd
Original assignee: First Research Institute of Ministry of Public Security; Beijing Zhongdun Anmin Analysis Technology Co Ltd
Priority date / Filing date: 2022-11-23
Publication date: 2023-02-28
Application filed by First Research Institute of Ministry of Public Security and Beijing Zhongdun Anmin Analysis Technology Co Ltd; priority to CN202211476286.1A

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a method for extracting straight line segments from an X-ray security inspection image. The X-ray image is first filtered with an anisotropic filtering method based on local image entropy, which removes noise while preserving weak edges. A single-pixel edge-thinning method and a curve-extraction method are then proposed; compared with conventional methods, they extract longer curves and reduce the probability of curve breakage. Straight line segments are then extracted with the Douglas-Peucker algorithm, which preserves the intersection points between segments while extracting them. Finally, salient interfering line segments, such as those produced by inorganic objects and luggage edges, are removed using prior knowledge of X-ray security inspection images. The method addresses the unclear edges of sheet-like objects in baggage images and the discontinuity of extracted edge line segments, detects the edge line segments of sheet-like objects efficiently and accurately, and provides an important basis for the subsequent segmentation and identification of sheet explosives in X-ray security inspection images.

Description

Method for extracting straight line segment in X-ray security inspection image
Technical Field
The invention relates to the technical field of image processing, and in particular to a method for extracting straight line segments in an X-ray security inspection image.
Background
With the rapid growth of China's economy and the rapid development of public transportation, public transportation safety problems have become increasingly prominent even as travel has become faster and more convenient. The ability of security inspection equipment to accurately detect contraband in luggage is therefore increasingly important for ensuring national security, social stability, and the safety of people's lives and property.
AT security inspection equipment, with its relatively high detection capability and moderate cost, is widely used in crowded public places such as airports and railway stations. The detection of thin sheet objects (such as sheet explosives) has long been a functional bottleneck and technical difficulty of AT security inspection equipment: sheet objects are thin and produce weak X-ray projection features, and once the items in a suitcase are cluttered and heavily overlapped or occluded, the detection capability drops further. The most valuable projection feature of a sheet object is the straight line segment along its edge, which is the basis for its subsequent segmentation and identification. However, the projection of a sheet object in an X-ray baggage image often has low gray-level contrast and unclear edges and is easily occluded by other objects, so it is difficult to extract the relevant features with conventional line-detection algorithms.
Disclosure of Invention
In view of the deficiencies of the prior art, the object of the invention is to provide a method for extracting straight line segments in an X-ray security inspection image.
To achieve this object, the invention adopts the following technical solution:
a method for extracting straight line segments in an X-ray security inspection image comprises the following steps:
s1, acquiring an original X-ray security inspection image I (X, y);
step S2, performing base on the image I (x, y)Anisotropic filtering on the local entropy of the image to obtain a filtered image I 1 (x,y);
Step S3, for the image I 1 (x, y) carrying out Canny edge detection to obtain an edge image I 2 (x,y);
Step S4, for the image I 2 (x, y) carrying out edge thinning treatment to obtain a single-pixel edge image I 3 (x, y); the specific process is as follows:
s4.1, obtaining an image I 2 Neighborhood distribution of each edge point in (x, y); representing the current edge pixel point to be processed by X, and representing the current edge pixel point to be processed by P 0 -P 7 Eight neighborhood points representing the current pixel point, by P 8 -P 23 Representing eight out-of-neighborhood points;
s4.2, when no edge point exists in the eight neighborhood points of the point X, eliminating the current edge pixel point X;
when 1 edge point P exists in the eight neighborhood points of the point X, judging the eight neighborhood points of the edge point P. If the eight neighborhood points of the edge point P only have 1 edge point except the multiplied point, keeping the current edge pixel point multiplied; if eight neighborhood points of the edge point P have 2 edge points except the multiplied point, if the 2 edge points are adjacent, and the direction formed by the 1 point and the edge point P is the same as the direction formed by the multiplied point and the edge point P, the current edge pixel point x is reserved, and the current edge pixel point x is eliminated under other conditions;
when 2 edge points exist in the eight neighborhood points of the point X, if the 2 edge points are not communicated, eliminating the current edge pixel point X; if these 2 edge points are connected:
1) When the 2 edge points are in a diagonal relationship, if any edge point in the 2 edge points has other edge points besides the x point in the eight neighborhood points, eliminating the x of the current edge point;
2) When the 2 edge points are in a four-connected relation, if the 2 edge points all have edge points in eight neighborhood points and are public neighborhood edge points of the 2 edge points, eliminating the current edge pixel point x, and keeping the current edge pixel point x under the other conditions.
When there are 3 edge points in the eight neighborhood points of point x, if the 3 edge points are connected and there is at least one four-connection, eliminating the current edge pixel point x, otherwise, keeping the current edge pixel point x;
when more than 4 edge points exist in the eight neighborhood points of the point X, if the edge points have a communication relation, eliminating the current edge pixel point X, and otherwise, keeping the current edge pixel point X;
finally obtaining a single-pixel edge image I 3 (x,y);
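For illustration only, the following Python sketch covers the two simplest removal rules of step S4.2 (no edge neighbors, and four or more mutually touching edge neighbors); the rules for one, two and three neighbors and the outer ring P8-P23 are omitted. The neighborhood ordering, the 0/255 edge encoding and the function names are assumptions, not part of the filing.

```python
import numpy as np

# Assumed P0-P7 ordering: counter-clockwise starting east (the filing fixes the
# labels only through FIG. 2, which is not reproduced here).
NEIGH8 = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def edge_neighbours(edge, y, x):
    """Indices 0..7 of the eight-neighborhood positions that contain edge points."""
    h, w = edge.shape
    return [i for i, (dy, dx) in enumerate(NEIGH8)
            if 0 <= y + dy < h and 0 <= x + dx < w and edge[y + dy, x + dx] > 0]

def ring_touching(indices):
    """True when every listed neighborhood position touches another listed position
    on the eight-point ring (a loose stand-in for the 'connected' test of S4.2)."""
    s = set(indices)
    return all((i + 1) % 8 in s or (i - 1) % 8 in s for i in indices)

def thin_trivial_cases(edge):
    """Apply only the 0-neighbor and >=4-connected-neighbor rules of step S4.2."""
    out = edge.copy()
    for y, x in zip(*np.nonzero(edge)):
        nb = edge_neighbours(edge, y, x)
        if len(nb) == 0:
            out[y, x] = 0          # isolated edge pixel: eliminate
        elif len(nb) >= 4 and ring_touching(nb):
            out[y, x] = 0          # redundant pixel inside a thick edge: eliminate
    return out
```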
Step S5, extracting curves from the image I3(x, y) by an edge tracking method to obtain a curve set C; the specific process is as follows:
S5.1, establishing a flag matrix of the same size as the image I3(x, y) to indicate whether each pixel has been used; the flag matrix is initialized to 0;
S5.2, traversing the image I3(x, y); if an edge point has only 1 edge point among its eight neighborhood points, that edge point is taken as a curve starting point, and tracking starts with the starting point as the current point;
S5.3, setting the value of the current point to 1 in the flag matrix and traversing the eight neighborhood points of the current point; let N be the number of unmarked edge points among them;
S5.4, if N = 0, ending the tracking of the current curve, returning to step S5.2 and continuing to traverse the image I3(x, y) in search of a new curve starting point;
if N = 1, the unmarked edge point is taken as the next edge point; recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; the chain code value is the direction value within the eight-neighborhood, takes values 0-7 and increases counter-clockwise;
if N ≥ 2, first determining the main direction of the curve: if the current curve has fewer than 10 pixels, the main direction is defined as the mode of the chain code values in the chain code table of the current curve, otherwise as the mode of the last eight chain code values in the table; if the current point has an edge point in the main direction of the curve, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; if the current point has no edge point in the main direction of the curve, checking whether there is an edge point at the position whose chain code value is the main direction minus one; if so, recording the chain code value from the current point to that edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; if there is no edge point at the position of the main direction minus one, checking whether there is an edge point at the position of the main direction plus one; if so, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; otherwise, ending the tracking of the current curve and returning to step S5.2;
a curve set C is finally obtained;
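A small Python sketch of the chain-code bookkeeping used in step S5 is given below for reference: the eight direction codes (0-7, increasing counter-clockwise), the main direction taken as the mode of the recorded codes, and the search order of main direction, main direction minus one, main direction plus one. The mapping of code 0 to east and the function names are assumptions; the filing fixes only that the codes increase counter-clockwise.

```python
from collections import Counter

# Assumed mapping: chain code 0 = east, codes increase counter-clockwise.
CHAIN_OFFSETS = {
    0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1),
    4: (0, -1), 5: (1, -1), 6: (1, 0), 7: (1, 1),
}

def main_direction(chain_codes):
    """Mode of the chain codes; once the curve has 10 or more pixels (i.e. the table
    holds 9 or more codes), only the last eight codes are considered (step S5.4)."""
    pixel_count = len(chain_codes) + 1
    codes = chain_codes if pixel_count < 10 else chain_codes[-8:]
    return Counter(codes).most_common(1)[0][0]

def candidate_codes(chain_codes):
    """Directions examined when N >= 2: main direction, then one code below,
    then one code above (modulo 8)."""
    d = main_direction(chain_codes)
    return [d, (d - 1) % 8, (d + 1) % 8]

# Example: for a curve whose recorded codes are mostly 0 (east), the tracker first
# looks east, then code 7 (south-east), then code 1 (north-east).
print(candidate_codes([0, 0, 1, 0, 7, 0]))   # -> [0, 7, 1]
```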
s6, compressing each curve in the curve set C according to a Douglas-Peucker algorithm to obtain end points of all straight line segments;
s7, defining the distance between the end points of the straight line segment as the length of the straight line, if the length of the straight line meets the requirement of a set length threshold, keeping the straight line, otherwise, discarding the straight line;
s8, calculating all pixel points on the straight line segment according to the end points of the straight line, calculating the equation and the angle of each straight line segment, and extracting the straight line segment;
s9, removing the significant interference straight line segments in the image according to the priori knowledge of the X-ray security inspection image;
and S10, outputting the rest straight line segments, and ending the process.
Further, the specific process of step S2 is:
s2.1, utilizing image local entropy formula
Figure BDA0003960041020000051
Calculating the local entropy of each pixel point of the image I (x, y), wherein omega k Is a k × k neighborhood centered on the current pixel point (x, y), L is the image gray level,
Figure BDA0003960041020000052
for gray level l in the neighborhood omega k Probability of occurrence of, n l The number of pixels with the gray level of l;
s2.2, P-M model having a diffusion coefficient function of
Figure BDA0003960041020000053
Wherein
Figure BDA0003960041020000054
The gradient of the image at the pixel point (x, y) is shown, and K is a gradient threshold; defining diffusion coefficients incorporating local entropy
Figure BDA0003960041020000055
H (x, y) represents the local entropy of the image at the pixel point (x, y);
s2.3, discretization in eight directions is adopted, and an iterative formula is as follows:
Figure BDA0003960041020000056
wherein t is the number of iterations, λ is a parameter for controlling smoothing,
Figure BDA0003960041020000057
the difference of eight directions of east, west, south, north, southeast, northeast, southwest and northwest;
thus obtaining an image I after anisotropic filtering processing based on the local entropy of the image 1 (x,y)。
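For illustration, a minimal Python/NumPy sketch of step S2 follows. Because the diffusion-coefficient formulas of S2.2 are reproduced only as images in the filing, the coefficient used below (a Perona-Malik-type coefficient whose gradient term is weighted by the normalized local entropy) is an assumption, as are the function names; the default parameter values follow embodiment 2 below (7 × 7 neighborhood, K = 5, 8 iterations, λ = 0.25). The sketch is not the patented formula itself.

```python
import numpy as np

def local_entropy(img, k=7, levels=256):
    """Local entropy H(x, y) over a k x k neighborhood (edge-padded borders)."""
    pad = k // 2
    padded = np.pad(img.astype(np.int64), pad, mode='edge')
    h, w = img.shape
    H = np.zeros((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            counts = np.bincount(padded[y:y + k, x:x + k].ravel(), minlength=levels)
            p = counts[counts > 0] / float(k * k)
            H[y, x] = -np.sum(p * np.log(p))
    return H

def entropy_guided_diffusion(img, iters=8, lam=0.25, K=5.0, k=7):
    """Eight-direction Perona-Malik-style diffusion guided by local entropy.
    The entropy weighting (1 - normalized H) inside the coefficient is assumed."""
    I = img.astype(np.float64)
    Hn = local_entropy(img, k=k)
    Hn = Hn / (Hn.max() + 1e-12)                         # normalize entropy to [0, 1]
    # E, W, S, N, SE, NE, SW, NW neighbor offsets
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0),
               (1, 1), (-1, 1), (1, -1), (-1, -1)]
    for _ in range(iters):
        update = np.zeros_like(I)
        for dy, dx in offsets:
            grad = np.roll(I, (-dy, -dx), axis=(0, 1)) - I   # directional difference (borders wrap; adequate for a sketch)
            c = 1.0 / (1.0 + (grad * (1.0 - Hn) / K) ** 2)   # assumed entropy-weighted coefficient
            update += c * grad
        I += lam * update                                    # lam controls the amount of smoothing per iteration
    return I
```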
Further, the specific process of step S9 is:
s9.1, establishing a full 0 matrix with the same size as the original X-ray security inspection image as a background edge image;
s9.2, calculating the image I by using a Prewitt operator 1 (x, y) the gradient of each pixel point in the (x, y) is marked with 255 for pixel points with gradient larger than 150 in the background edge image;
s9.3, for image I 1 Traversing four neighborhoods of the upper, lower, left and right of each pixel point in (x, y), if the four neighborhoods haveAnd only one pixel point value is greater than 235, the pixel point is marked with 255 in the background edge image;
s9.4, processing a background edge image by using morphological expansion;
and S9.5, traversing all the straight-line segments extracted in the step S8, and removing the straight-line segment if more than half of the pixels on the straight-line segment are located at the marking position in the background edge image.
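A Python/OpenCV sketch of step S9 is given below for reference. The Prewitt kernels, the gradient-magnitude combination, the 3 × 3 dilation structuring element and the function names are assumptions; the thresholds of 150 and 235 and the marked-pixel criterion follow the text. Note that the Disclosure states the neighborhood test as "one and only one neighbor exceeds 235" while the embodiments and claims read "one of the four neighbors exceeds 235"; the sketch implements the latter.

```python
import numpy as np
import cv2

def background_edge_mask(img):
    """Background edge image of steps S9.1-S9.4 (0/255 mask)."""
    f = img.astype(np.float64)
    mask = np.zeros(img.shape, dtype=np.uint8)                       # S9.1: all-zero matrix
    kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], np.float64)  # Prewitt kernels (assumed)
    ky = kx.T
    grad = np.hypot(cv2.filter2D(f, -1, kx), cv2.filter2D(f, -1, ky))
    mask[grad > 150] = 255                                           # S9.2: strong gradients
    up, down = np.roll(img, 1, axis=0), np.roll(img, -1, axis=0)
    left, right = np.roll(img, 1, axis=1), np.roll(img, -1, axis=1)
    near_bright = (up > 235) | (down > 235) | (left > 235) | (right > 235)
    mask[near_bright] = 255                                          # S9.3: next to near-white background
    return cv2.dilate(mask, np.ones((3, 3), np.uint8))               # S9.4: morphological dilation

def keep_segment(segment_pixels, mask):
    """Step S9.5: keep a segment only if fewer than half of its pixels are marked."""
    marked = sum(1 for y, x in segment_pixels if mask[y, x] > 0)
    return 2 * marked < len(segment_pixels)
```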
The beneficial effects of the invention are as follows: the method first filters the X-ray security inspection image with an anisotropic filtering method based on local image entropy, which removes noise while preserving weak edges in the image; it then proposes a single-pixel edge-thinning method and a curve-extraction method which, compared with conventional methods, extract longer curves and reduce the probability of curve breakage; straight line segments are then extracted with the Douglas-Peucker algorithm, preserving the intersection points between segments; finally, salient interfering line segments such as those from inorganic objects and luggage edges are removed using prior knowledge of X-ray security inspection images. The method addresses the unclear edges of sheet-like objects in baggage images and the discontinuity of extracted edge line segments, detects the edge line segments of sheet-like objects efficiently and accurately, and provides an important basis for the subsequent segmentation and identification of sheet explosives in X-ray security inspection images.
Drawings
FIG. 1 is a schematic flow chart of a method in example 1 of the present invention;
FIG. 2 is a schematic diagram of a neighborhood template in embodiment 1 of the present invention;
FIG. 3 is an original X-ray security inspection image acquired in embodiment 2 of the present invention;
FIG. 4 is an image after anisotropic filtering in example 2 of the present invention;
fig. 5 is a single-pixel edge image processed in step S4 in embodiment 2 of the present invention;
FIG. 6 is a schematic diagram showing the result of extracting straight-line segments in step S8 in example 2 of the present invention;
fig. 7 is a schematic diagram of the remaining straight-line segments output in step S10 in embodiment 2 of the present invention.
Detailed Description
The present invention is further described below with reference to the accompanying drawings. It should be noted that the embodiments are based on the technical solution described above and provide detailed implementations and specific operation processes, but the scope of protection of the present invention is not limited to these embodiments.
Example 1
The embodiment provides a method for extracting straight line segments in an X-ray security inspection image, which generally comprises anisotropic filtering, Canny edge extraction, single-pixel processing of the image edges, curve extraction, straight line segment extraction based on the Douglas-Peucker algorithm, straight line length screening, and removal of salient interfering straight line segments based on prior knowledge.
Specifically, as shown in FIG. 1, the method for extracting straight line segments in an X-ray security inspection image includes the following steps:
s1, acquiring an original X-ray security inspection image I (X, y);
s2, anisotropic filtering based on image local entropy is carried out on the image I (x, y), and the filtered image I is obtained 1 (x, y); the specific process is as follows:
s2.1, utilizing image local entropy formula
Figure BDA0003960041020000081
Calculating the local entropy of each pixel point of the image I (x, y), wherein omega k Is a k × k neighborhood centered on the current pixel point (x, y), L is the image gray level,
Figure BDA0003960041020000082
in the neighborhood Ω for the gray level l k Probability of occurrence of, n l The number of pixels with the gray level of l;
s2.2, P-M model having a diffusion coefficient function of
Figure BDA0003960041020000083
Wherein
Figure BDA0003960041020000084
The gradient of the image at the pixel point (x, y) is shown, and K is a gradient threshold; defining diffusion coefficients incorporating local entropy
Figure BDA0003960041020000085
H (x, y) represents the local entropy of the image at pixel point (x, y).
S2.3, adopting discretization in eight directions, with the iterative formula
[iterative diffusion formula, reproduced only as an image in the original filing]
where t is the number of iterations, λ is the parameter controlling the degree of smoothing, and the differences are taken in the eight directions east, west, south, north, southeast, northeast, southwest and northwest.
The image I1(x, y) after anisotropic filtering based on local image entropy is thus obtained.
Step S3, performing Canny edge detection on the image I1(x, y) to obtain an edge image I2(x, y);
Step S4, performing edge thinning on the image I2(x, y) to obtain a single-pixel edge image I3(x, y); the specific process is as follows:
S4.1, obtaining the neighborhood distribution of each edge point in the image I2(x, y); the neighborhood template is shown in FIG. 2, where X represents the current edge pixel to be processed, P0-P7 represent the eight neighborhood points of the current pixel, and P8-P23 represent the sixteen points outside the eight-neighborhood.
S4.2, when there is no edge point among the eight neighborhood points of point X, eliminating the current edge pixel X;
when there is exactly 1 edge point P among the eight neighborhood points of X, examining the eight neighborhood points of P: if, apart from X, P has only 1 edge point in its eight-neighborhood, keeping the current edge pixel X; if, apart from X, P has 2 edge points in its eight-neighborhood, and these 2 edge points are adjacent and the direction formed by one of them and P is the same as the direction formed by X and P, keeping the current edge pixel X; eliminating X in all other cases;
when there are 2 edge points among the eight neighborhood points of X: if the 2 edge points are not connected, eliminating the current edge pixel X; if the 2 edge points are connected:
1) when the 2 edge points are in a diagonal relationship, e.g. P4 and P6 in FIG. 2: if either of them has edge points other than X in its eight-neighborhood, eliminating the current edge pixel X;
2) when the 2 edge points are in a four-connected relationship, e.g. P5 and P6 in FIG. 2: if both of them have edge points in their eight-neighborhoods that are common neighborhood edge points of the 2 edge points (such as P19, P20, or both P19 and P20 in FIG. 2), eliminating the current edge pixel X; otherwise keeping X.
when there are 3 edge points among the eight neighborhood points of X: if the 3 edge points are connected and at least one pair is four-connected, eliminating the current edge pixel X; otherwise keeping X;
when there are 4 or more edge points among the eight neighborhood points of X: if the edge points are connected with one another, eliminating the current edge pixel X; otherwise keeping X;
a single-pixel edge image I3(x, y) is finally obtained;
Step S5, extracting curves from the image I3(x, y) by an edge tracking method to obtain a curve set C; the specific process is as follows:
S5.1, establishing a flag matrix of the same size as the image I3(x, y) to indicate whether each pixel has been used; the flag matrix is initialized to 0;
S5.2, traversing the image I3(x, y); if an edge point has only 1 edge point among its eight neighborhood points, that edge point is taken as a curve starting point and tracking starts with the starting point as the current point; a chain code table, represented as a vector, is established for the current curve;
S5.3, setting the value of the current point to 1 in the flag matrix and traversing the eight neighborhood points of the current point; let N be the number of unmarked edge points among them;
S5.4, if N = 0, ending the tracking of the current curve, returning to step S5.2 and continuing to traverse the image I3(x, y) in search of a new curve starting point;
if N = 1, the unmarked edge point is taken as the next edge point; recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; the chain code value is the direction value within the eight-neighborhood, takes values 0-7 and increases counter-clockwise;
if N ≥ 2, first determining the main direction of the curve: if the current curve has fewer than 10 pixels, the main direction is defined as the mode of the chain code values in the chain code table of the current curve, otherwise as the mode of the last eight chain code values in the table; if the current point has an edge point in the main direction of the curve, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; if the current point has no edge point in the main direction of the curve, checking whether there is an edge point at the position whose chain code value is the main direction minus one; if so, recording the chain code value from the current point to that edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; if there is no edge point at the position of the main direction minus one, checking whether there is an edge point at the position of the main direction plus one; if so, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; otherwise, ending the tracking of the current curve and returning to step S5.2;
a curve set C is finally obtained;
Step S6, compressing each curve in the curve set C with the Douglas-Peucker algorithm to obtain the end points of all straight line segments. The Douglas-Peucker algorithm is well known in the art, and its implementation is not described in detail here.
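For completeness, a standard recursive Douglas-Peucker implementation is sketched below in Python; it is a textbook version supplied for reference, and the tolerance eps is a free parameter that the filing does not specify.

```python
import numpy as np

def douglas_peucker(points, eps):
    """Compress an ordered list of (y, x) curve points, returning the retained vertices."""
    if len(points) < 3:
        return list(points)
    pts = np.asarray(points, dtype=np.float64)
    start, end = pts[0], pts[-1]
    d = end - start
    chord = np.hypot(d[0], d[1])
    if chord == 0.0:                       # closed or degenerate curve: fall back to point-to-start distance
        dists = np.hypot(pts[:, 0] - start[0], pts[:, 1] - start[1])
    else:                                  # perpendicular distance of each point to the chord
        dists = np.abs(d[1] * (pts[:, 0] - start[0]) - d[0] * (pts[:, 1] - start[1])) / chord
    i = int(np.argmax(dists))
    if dists[i] > eps:                     # split at the farthest point and recurse
        left = douglas_peucker(points[:i + 1], eps)
        right = douglas_peucker(points[i:], eps)
        return left[:-1] + right           # drop the duplicated split point
    return [points[0], points[-1]]         # the whole run is close enough to a straight segment

# Example: a noisy L-shaped curve collapses to its three corner points.
curve = [(0, 0), (0, 1), (1, 2), (0, 3), (0, 4), (1, 4), (2, 4), (3, 4)]
print(douglas_peucker(curve, eps=1.0))     # -> [(0, 0), (0, 4), (3, 4)]
```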
Step S7, defining the distance between the end points of a straight line segment as the line length; if the line length meets the set length threshold, keeping the line, otherwise discarding it;
Step S8, computing all pixel points on each straight line segment from its end points, calculating the equation and angle of each segment, and extracting the straight line segments;
Step S9, removing salient interfering straight line segments in the image, such as those produced by inorganic objects, tray edges and luggage edges, according to prior knowledge of X-ray security inspection images; the specific process is as follows:
S9.1, establishing an all-zero matrix of the same size as the original X-ray security inspection image as the background edge image;
S9.2, calculating the gradient of each pixel of the image I1(x, y) with the Prewitt operator, and marking pixels whose gradient is greater than 150 with 255 in the background edge image;
S9.3, traversing the four neighborhoods (up, down, left and right) of each pixel of the image I1(x, y); if one of the four neighborhood pixel values is greater than 235, marking that pixel with 255 in the background edge image;
S9.4, processing the background edge image with morphological dilation;
S9.5, traversing all the straight line segments extracted in step S8; if more than half of the pixels on a segment lie at marked positions in the background edge image, the segment is removed.
Step S10, outputting the remaining straight line segments; the process ends.
Example 2
This example provides an application example of the method described in example 1.
Step S1, acquiring an X-ray security inspection image I(x, y) with AT security inspection equipment; the image is a typical example containing a sheet explosive and is shown in FIG. 3;
Step S2, performing anisotropic filtering based on local image entropy on the image I(x, y) obtained in step S1:
s2.1, utilizing image local entropy formula
Figure BDA0003960041020000121
Calculating the local entropy of each pixel point of the image I (x, y), wherein omega k The k × k neighborhood centered on the current pixel point, in this embodiment, the neighborhood size is 7 × 7, l is the image gray level,
Figure BDA0003960041020000131
in the neighborhood Ω for the gray level l k Probability of occurrence of, n l The number of pixels with the gray level of l;
s2.2, P-M model having a diffusion coefficient function of
Figure BDA0003960041020000132
Wherein
Figure BDA0003960041020000133
The gradient of the image at (x, y) is K, which is the gradient threshold, and K =5 in this embodiment.
Defining diffusion coefficients incorporating local entropy
Figure BDA0003960041020000134
H (x, y) represents the local entropy of the image at the pixel point (x, y);
s2.3, discretization in eight directions is adopted, and an iterative formula is as follows:
Figure BDA0003960041020000135
wherein t is the number of iterations, λ is a parameter for controlling smoothing,
Figure BDA0003960041020000136
the difference between the east, west, south, north, southeast, northeast, southwest and northwest directions is the sum of the iteration times in the embodimentThe smoothing parameters were controlled to be 8 and 0.25, respectively.
Thus, an image I subjected to anisotropic filtering processing based on the local entropy of the image is obtained 1 (x, y) as shown in FIG. 4.
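For reference, a hypothetical invocation of the diffusion sketch given after step S2 of the Disclosure, using the parameters of this embodiment (7 × 7 entropy neighborhood, K = 5, 8 iterations, λ = 0.25); the module name, the function name entropy_guided_diffusion, the input file name and the Canny thresholds are all assumptions.

```python
import cv2
import numpy as np
from entropy_diffusion import entropy_guided_diffusion  # the sketch shown earlier; module name assumed

img = cv2.imread("xray_baggage.png", cv2.IMREAD_GRAYSCALE)              # original image I(x, y)
filtered = entropy_guided_diffusion(img, iters=8, lam=0.25, K=5.0, k=7)  # step S2
edges = cv2.Canny(np.clip(filtered, 0, 255).astype(np.uint8), 50, 150)   # step S3; thresholds assumed
```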
Step S3, performing Canny edge detection on the image I1(x, y) to obtain an edge image I2(x, y);
Step S4, performing edge thinning on the image I2(x, y) to obtain a single-pixel edge image I3(x, y):
S4.1, obtaining the neighborhood distribution of each edge point in the image I2(x, y); the neighborhood template is shown in FIG. 2;
S4.2, when there is no edge point among the eight neighborhood points of the current point X, eliminating the current edge pixel X;
when there is exactly 1 edge point P among the eight neighborhood points of X, examining the eight neighborhood points of P: if, apart from X, P has only 1 edge point in its eight-neighborhood, keeping the current edge pixel X; if, apart from X, P has 2 edge points in its eight-neighborhood, and these 2 edge points are adjacent and the direction formed by one of them and P is the same as the direction formed by X and P, keeping the current edge pixel X; eliminating X in all other cases;
when there are 2 edge points among the eight neighborhood points of X: if the 2 edge points are not connected, eliminating the current edge pixel X; if the 2 edge points are connected:
1) when the 2 edge points are in a diagonal relationship, e.g. P4 and P6 in FIG. 2: if either of them has edge points other than X in its eight-neighborhood, eliminating the current edge pixel X;
2) when the 2 edge points are in a four-connected relationship, e.g. P5 and P6 in FIG. 2: if both of them have edge points in their eight-neighborhoods that are common neighborhood edge points of the 2 edge points (such as P19, P20, or both P19 and P20 in FIG. 2), eliminating the current edge pixel X; otherwise keeping X.
when there are 3 edge points among the eight neighborhood points of X: if the 3 edge points are connected and at least one pair is four-connected, eliminating the current edge pixel X; otherwise keeping X;
when there are 4 or more edge points among the eight neighborhood points of X: if the edge points are connected with one another, eliminating the current edge pixel X; otherwise keeping X;
After the processing of step S4, the edge image I2(x, y) yields the single-pixel edge image I3(x, y) shown in FIG. 5.
Step S5, extracting curves from the image I3(x, y) by an edge tracking method to obtain a curve set C; the specific steps are as follows:
S5.1, establishing a flag matrix of the same size as the image I3(x, y) to indicate whether each pixel has been used; the flag matrix is initialized to 0;
S5.2, traversing the image I3(x, y); if an edge point has only 1 edge point among its eight neighborhood points, that edge point is taken as a curve starting point, and tracking starts with the starting point as the current point;
S5.3, setting the value of the current point to 1 in the flag matrix and traversing the eight neighborhood points of the current point; let N be the number of unmarked edge points among them;
S5.4, if N = 0, ending the tracking of the current curve, returning to step S5.2 and continuing to traverse the image I3(x, y) in search of a new curve starting point;
if N = 1, the unmarked edge point is taken as the next edge point; recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; the chain code value is the direction value within the eight-neighborhood, takes values 0-7 and increases counter-clockwise;
if N ≥ 2, first determining the main direction of the curve: if the current curve has fewer than 10 pixels, the main direction is defined as the mode of the chain code values in the chain code table of the current curve, otherwise as the mode of the last eight chain code values in the table; if the current point has an edge point in the main direction of the curve, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; if the current point has no edge point in the main direction of the curve, checking whether there is an edge point at the position whose chain code value is the main direction minus one; if so, recording the chain code value from the current point to that edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; if there is no edge point at the position of the main direction minus one, checking whether there is an edge point at the position of the main direction plus one; if so, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; otherwise, ending the tracking of the current curve and returning to step S5.2;
a curve set C is finally obtained;
s6, compressing each curve in the curve set C according to a Douglas-Peucker algorithm to obtain end points of all straight lines;
and S7, calculating the length of the straight line according to the end points of the straight line, if the length of the straight line meets the requirement of the set length threshold, keeping the straight line, and otherwise, discarding the straight line. An alternative to the linear length threshold in the present invention is 15;
s8, obtaining all pixel points on the straight line segment according to the straight line end points, calculating the angle of the equation of each straight line segment, and extracting the straight line segment, as shown in FIG. 6;
s9, removing the significant interference straight-line segments such as inorganic matters, tray edges, luggage edges and the like in the image according to the priori knowledge of the X-ray security inspection image, wherein the specific implementation method comprises the following steps:
s9.1, establishing a full 0 matrix with the same size as the original image as a background edge image;
s9.2, calculating the image I by using a Prewitt operator 1 The gradient of each pixel point in (x, y), and the pixel points with the gradient larger than 150 are marked by 255 in the background edge image;
s9.3, for image I 1 Traversing four upper, lower, left and right neighborhoods of each pixel point in the (x, y), and if one pixel point value in the four neighborhoods is more than 235, marking the pixel point by 255 in the background edge image;
s9.4, processing a background edge image by using morphological expansion;
and S9.5, traversing all pixel points on each straight-line segment for all the straight-line segments extracted in the step S8, and removing the straight-line segment if half or more of the pixel points on the straight-line segment are positioned at the mark positions in the background edge image.
And step S10, outputting the rest straight line segments, and ending the algorithm as shown in FIG. 7.
Various corresponding changes and modifications can be made by those skilled in the art based on the above technical solutions and concepts, and all such changes and modifications shall fall within the scope of protection of the present invention.

Claims (3)

1. A method for extracting straight line segments in an X-ray security inspection image, characterized by comprising the following steps:
Step S1, acquiring an original X-ray security inspection image I(x, y);
Step S2, performing anisotropic filtering based on local image entropy on the image I(x, y) to obtain a filtered image I1(x, y);
Step S3, performing Canny edge detection on the image I1(x, y) to obtain an edge image I2(x, y);
Step S4, performing edge thinning on the image I2(x, y) to obtain a single-pixel edge image I3(x, y); the specific process is as follows:
S4.1, obtaining the neighborhood distribution of each edge point in the image I2(x, y); the current edge pixel to be processed is denoted by X, its eight neighborhood points are denoted by P0-P7, and the sixteen points outside the eight-neighborhood are denoted by P8-P23;
S4.2, when there is no edge point among the eight neighborhood points of the current point X, eliminating the current edge pixel X;
when there is exactly 1 edge point P among the eight neighborhood points of X, examining the eight neighborhood points of P: if, apart from X, P has only 1 edge point in its eight-neighborhood, keeping the current edge pixel X; if, apart from X, P has 2 edge points in its eight-neighborhood, and these 2 edge points are adjacent and the direction formed by one of them and P is the same as the direction formed by X and P, keeping the current edge pixel X; eliminating X in all other cases;
when there are 2 edge points among the eight neighborhood points of X: if the 2 edge points are not connected, eliminating the current edge pixel X; if the 2 edge points are connected:
1) when the 2 edge points are in a diagonal relationship, if either of them has edge points other than X in its eight-neighborhood, eliminating the current edge pixel X;
2) when the 2 edge points are in a four-connected relationship, if both of them have edge points in their eight-neighborhoods that are common neighborhood edge points of the 2 edge points, eliminating the current edge pixel X; otherwise keeping X;
when there are 3 edge points among the eight neighborhood points of X: if the 3 edge points are connected and at least one pair is four-connected, eliminating the current edge pixel X; otherwise keeping X;
when there are 4 or more edge points among the eight neighborhood points of X: if the edge points are connected with one another, eliminating the current edge pixel X; otherwise keeping X;
a single-pixel edge image I3(x, y) is finally obtained;
Step S5, extracting curves from the image I3(x, y) by an edge tracking method to obtain a curve set C; the specific process is as follows:
S5.1, establishing a flag matrix of the same size as the image I3(x, y) to indicate whether each pixel has been used; the flag matrix is initialized to 0;
S5.2, traversing the image I3(x, y); if an edge point has only 1 edge point among its eight neighborhood points, that edge point is taken as a curve starting point, and tracking starts with the starting point as the current point;
S5.3, setting the value of the current point to 1 in the flag matrix and traversing the eight neighborhood points of the current point; let N be the number of unmarked edge points among them;
S5.4, if N = 0, ending the tracking of the current curve, returning to step S5.2 and continuing to traverse the image I3(x, y) in search of a new curve starting point;
if N = 1, the unmarked edge point is taken as the next edge point; recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; the chain code value is the direction value within the eight-neighborhood, takes values 0-7 and increases counter-clockwise;
if N ≥ 2, first determining the main direction of the curve: if the current curve has fewer than 10 pixels, the main direction is defined as the mode of the chain code values in the chain code table of the current curve, otherwise as the mode of the last eight chain code values in the table; if the current point has an edge point in the main direction of the curve, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting the next edge point as the current point, and returning to step S5.3; if the current point has no edge point in the main direction of the curve, checking whether there is an edge point at the position whose chain code value is the main direction minus one; if so, recording the chain code value from the current point to that edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; if there is no edge point at the position of the main direction minus one, checking whether there is an edge point at the position of the main direction plus one; if so, taking it as the next edge point, recording the chain code value from the current point to the next edge point in the chain code table of the current curve, setting it as the current point, and returning to step S5.3; otherwise, ending the tracking of the current curve and returning to step S5.2;
a curve set C is finally obtained;
s6, compressing each curve in the curve set C according to a Douglas-Peucker algorithm to obtain end points of all straight line segments;
s7, defining the distance between the end points of the straight line segment as the length of the straight line, if the length of the straight line meets the requirement of a set length threshold, keeping the straight line, otherwise, discarding the straight line;
s8, calculating all pixel points on the straight line segment according to the end points of the straight line, calculating the equation and the angle of each straight line segment, and extracting the straight line segment;
s9, removing the significant interference straight line segments in the image according to the priori knowledge of the X-ray security inspection image;
and S10, outputting the remaining straight line segments, and ending the process.
2. The method according to claim 1, wherein the specific process of step S2 is:
S2.1, calculating the local entropy of each pixel of the image I(x, y) with the local entropy formula
H(x, y) = -∑_{l=0}^{L-1} p_l · log(p_l),
where Ω_k is the k × k neighborhood centered on the current pixel (x, y), L is the number of image gray levels,
p_l = n_l / k²
is the probability that gray level l occurs in the neighborhood Ω_k, and n_l is the number of pixels with gray level l;
s2.2, P-M model diffusion coefficient function is
Figure FDA0003960041010000043
Wherein | (x, y) | is the gradient of the image at the pixel point (x, y), and K is a gradient threshold; defining diffusion coefficients incorporating local entropy
Figure FDA0003960041010000044
H (x, y) represents the local entropy of the image at the pixel point (x, y);
s2.3, discretization in eight directions is adopted, and an iterative formula is as follows:
Figure FDA0003960041010000045
wherein t is the number of iterations, λ is a parameter for controlling smoothing,
Figure FDA0003960041010000046
the difference of eight directions of east, west, south, north, southeast, northeast, southwest and northwest;
thus obtaining an image I after anisotropic filtering processing based on the local entropy of the image 1 (x,y)。
3. The method according to claim 1, wherein the specific process of step S9 is:
S9.1, establishing an all-zero matrix of the same size as the original X-ray security inspection image as the background edge image;
S9.2, calculating the gradient of each pixel of the image I1(x, y) with the Prewitt operator, and marking pixels whose gradient is greater than 150 with 255 in the background edge image;
S9.3, traversing the four neighborhoods (up, down, left and right) of each pixel of the image I1(x, y); if one of the four neighborhood pixel values is greater than 235, marking that pixel with 255 in the background edge image;
S9.4, processing the background edge image with morphological dilation;
S9.5, traversing all the straight line segments extracted in step S8; if more than half of the pixels on a segment lie at marked positions in the background edge image, the segment is removed.
CN202211476286.1A (filed 2022-11-23, priority date 2022-11-23) Method for extracting straight line segment in X-ray security inspection image. Status: Pending. Publication: CN115719358A.

Priority Applications (1)

CN202211476286.1A, priority date 2022-11-23, filing date 2022-11-23: Method for extracting straight line segment in X-ray security inspection image

Publications (1)

CN115719358A, publication date 2023-02-28

Family ID: 85256072

Country Status (1)

CN: CN115719358A (en)


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116503912A (en) * 2023-06-25 2023-07-28 山东艾克斯智能科技有限公司 Security check early warning method based on electronic graph bag
CN116503912B (en) * 2023-06-25 2023-08-25 山东艾克斯智能科技有限公司 Security check early warning method based on electronic graph bag

Similar Documents

Publication Publication Date Title
CN104751142B (en) A kind of natural scene Method for text detection based on stroke feature
CN110349207B (en) Visual positioning method in complex environment
CN109785285B (en) Insulator damage detection method based on ellipse characteristic fitting
CN107045634B (en) Text positioning method based on maximum stable extremum region and stroke width
CN112488046B (en) Lane line extraction method based on high-resolution images of unmanned aerial vehicle
CN111489337B (en) Automatic optical detection pseudo defect removal method and system
CN105447489B (en) A kind of character of picture OCR identifying system and background adhesion noise cancellation method
CN106709499A (en) SIFT image feature point extraction method based on Canny operator and Hilbert-Huang transform
WO2015066984A1 (en) Complex background-oriented optical character recognition method and device
CN109712147A (en) A kind of interference fringe center line approximating method extracted based on Zhang-Suen image framework
CN112308872B (en) Image edge detection method based on multi-scale Gabor first derivative
CN115719358A (en) Method for extracting straight line segment in X-ray security inspection image
CN110687122A (en) Method and system for detecting surface cracks of ceramic tile
CN110782385A (en) Image watermark removing method based on deep learning
CN111652844B (en) X-ray defect detection method and system based on digital image region growing
CN115205560A (en) Monocular camera-based prior map-assisted indoor positioning method
CN108460735A (en) Improved dark channel defogging method based on single image
CN114862889A (en) Road edge extraction method and device based on remote sensing image
CN112200769B (en) Fixed point monitoring new and old time phase image change detection method for illegal building detection
CN104102911A (en) Image processing for AOI (automated optical inspection)-based bullet appearance defect detection system
CN111127450A (en) Bridge crack detection method and system based on image
CN111583156B (en) Document image shading removing method and system
CN112258518B (en) Sea-sky-line extraction method and device
CN111815591B (en) Lung nodule detection method based on CT image
CN110363783B (en) Rock mass structural plane trace semi-automatic detection method based on Canny operator

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination