CN113269732A - Linear object detection method based on feature scanning image - Google Patents

Linear object detection method based on feature scanning image

Info

Publication number
CN113269732A
Authority
CN
China
Prior art keywords
linear
image
target
linear object
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110529857.2A
Other languages
Chinese (zh)
Other versions
CN113269732B (en)
Inventor
舒婷
官辉
张翔
乔晓飞
曲飞寰
毛博石
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Zhenshi Weidu Technology Co ltd
Original Assignee
Chengdu Zhenshi Weidu Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Zhenshi Weidu Technology Co ltd filed Critical Chengdu Zhenshi Weidu Technology Co ltd
Priority to CN202110529857.2A priority Critical patent/CN113269732B/en
Publication of CN113269732A publication Critical patent/CN113269732A/en
Application granted granted Critical
Publication of CN113269732B publication Critical patent/CN113269732B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/48Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • G06T2207/10081Computed x-ray tomography [CT]

Abstract

The invention belongs to the technical field of digital image processing, and discloses a linear object detection method based on a feature scanning image.

Description

Linear object detection method based on feature scanning image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a linear object detection method based on a feature scanning image.
Background
Image processing, which generally means digital image processing, uses a computer to analyze an image in order to obtain a desired result. A digital image is a large two-dimensional array captured with equipment such as an industrial camera, video camera or scanner; the elements of the array are called pixels, and each pixel contains coordinate data and a gray value. By processing the gray values in software, effects such as coding, compression, enhancement and restoration are achieved.
A feature scan image is a data image of the internal structure of a target object obtained by scanning it with special detection equipment (mainly two-dimensional images, including three-dimensional models built from several two-dimensional images); it solves the problem that the internal structure cannot be determined by external observation. Because detection devices differ, the characteristic value of each pixel point also differs, and the visual effect, or the indirect visual effect used as a basis for calculation, is derived from those characteristic values. Since the image data are parameters obtained according to different physical principles, certain errors may arise during detection, producing blurring or drift in the final image. This matters most for linear targets, which are rendered by only one or a few pixels across their width: once an error occurs, positional drift can cause a large deviation from the actual object and affect subsequent processing. A linear target may even fail to be identified at all, remaining invisible in the resulting visual image because some of its pixel points are displaced or their characteristic values are inaccurate; once ordinary equipment faults and the like are ruled out, such an inaccurately displayed object can only be corrected by image-repair processing so that accurate information is obtained.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a method for processing the point cloud data acquired from a feature scan image, aiming to denoise the linear target to be determined and to establish its exact pixel point data, so that its marking can be strengthened in the subsequent visualization processing.
The technical scheme adopted by the invention is as follows:
According to a first aspect, the invention provides a linear object detection method based on a feature scanning image. The method obtains the actual position of the corresponding linear object by processing the digital image obtained by feature scanning: a pixel point set A meeting a prior condition is obtained from the digital image; set A is screened with an edge detection algorithm to obtain a more accurate set B; set B is screened with a line detection algorithm so that pixel points belonging to the same linear object are screened out and classified together; a threshold A is then set according to the planned position of the linear target, and a classified linear object whose position parameter is smaller than the threshold A is judged to be the corresponding linear object.
First, the feature scan image in this invention refers to a scan of the internal structure of the target object by various detection devices: characteristic feedback information about the internal structure is obtained actively or passively, and characteristic identifiers of the internal structure are formed according to a determined rule applied to that feedback, yielding the feature scan image of the target object. The image is a digital image, i.e. a two-dimensional or three-dimensional point cloud; a point cloud is the set of pixel points within a certain range expressed in a determined coordinate system, a two-dimensional point cloud carrying two coordinates per point and a three-dimensional point cloud carrying three. Each pixel point also carries a characteristic value, such as a gray value, and the image is formed from the different gray values, so the internal structure of the target object can be recognized from the feature scan image.
The algorithm of the invention identifies linear targets in the feature scan image, where a linear target represents an actual object inside the target body and appears in the image as characteristic pixel points whose values differ markedly from those of the surrounding pixels. Because certain scanning errors exist, the relevant pixel information cannot be read from the feature scan image directly. The method therefore first filters out part of the pixel points in a first screening step by setting a suitable prior condition, forming a set A, and then accurately acquires the pixel points belonging to all linear targets through an edge detection algorithm, forming a set B. At this point set B contains the linear pixel points of the whole feature scan image; although its accuracy is high, the pixel points cannot yet be classified to confirm which linear object each belongs to. In a third screening step, the invention clusters the pixel points belonging to the same linear object by means of a line detection algorithm, obtaining a plurality of clusters. The result at this stage contains every linear object in the image, including many spurious ones that do not belong to the sought linear target and must be eliminated.
Since the rough predicted position of the linear target is known in advance of detection, the acquired clusters (i.e. linear objects) are screened against a threshold set from that information, and the linear objects that do not belong to the corresponding target are removed. The final, accurate linear target is thereby obtained, and the original image is modified or annotated so that the position of the target linear object is clearly shown in the resulting visual image.
It should be noted that the prior condition can be set in various ways depending on the situation: the data may be pre-processed by a noise-reduction algorithm inside the detection device, or no threshold may be set at all. The target object is a known object whose specific structure is understood before testing; the method only processes the actual scanned image before visualization, so that the relative position of the corresponding target can be reflected more accurately in the display, and subsequent processing can be carried out on the acquired coordinate data.
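As an illustration of this four-stage screening chain, the following Python sketch (using OpenCV and NumPy, neither of which the patent prescribes; the function name detect_linear_objects, the gray-value prior condition and all numeric thresholds are assumptions made for the example) strings together prior filtering, edge detection, line detection and screening against the planned direction vector L.

import cv2
import numpy as np

def detect_linear_objects(img, planned_dir, angle_thresh_deg=10.0, min_len=20.0):
    """Sketch of the screening chain: prior condition -> set A -> edge detection -> set B
    -> line detection (clusters) -> screening against the planned direction vector L."""
    # First screening: a simple prior condition (a gray-value range) forms set A.
    mask_a = cv2.inRange(img, 60, 255)
    # Second screening: edge detection refines set A into the more accurate set B.
    edges = cv2.Canny(cv2.bitwise_and(img, img, mask=mask_a), 50, 150)
    # Third screening: line detection groups pixels belonging to the same linear object.
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=min_len, maxLineGap=3)
    if lines is None:
        return []
    planned_dir = np.asarray(planned_dir, dtype=float)
    planned_dir = planned_dir / np.linalg.norm(planned_dir)
    kept = []
    for x1, y1, x2, y2 in lines[:, 0]:
        v = np.array([x2 - x1, y2 - y1], dtype=float)
        # Fourth screening: keep only lines whose angular deviation from the
        # planned direction vector L is below the threshold (threshold A).
        cos_a = abs(np.dot(v / np.linalg.norm(v), planned_dir))
        if np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))) < angle_thresh_deg:
            kept.append(((x1, y1), (x2, y2)))
    return kept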
With reference to the first aspect, the present invention provides a first implementation manner of the first aspect, wherein the linear target is a straight-line target, and the threshold A is set according to the direction vector L of the straight-line target.
It should be noted that linear targets come in various kinds, and different kinds are screened with different line detection algorithms; straight-line targets are the simplest and most convenient for which to set a threshold for the final screening.
With reference to the first implementation manner of the first aspect, the present invention provides a second implementation manner of the first aspect, wherein the threshold A comprises a threshold A1 set according to the direction vector L of the straight-line target and a threshold A2 set according to the length of the straight line; a candidate that simultaneously satisfies the threshold A1 and the threshold A2 is determined to be the straight-line target.
With reference to the first aspect and the first and second implementation manners thereof, the present invention provides a third implementation manner of the first aspect, where the feature scan image comprises a plurality of two-dimensional tomographic images obtained by scanning sequentially at equal intervals; each two-dimensional tomographic image is processed in turn to obtain the linear objects it contains, a threshold B is set, and linear objects in adjacent layers whose head and tail pixel points, once the layers are treated as coplanar, are separated by less than the threshold B are connected and regarded as belonging to the same spatial linear target.
It should be noted that the preceding implementation manners deal with a single two-dimensional or three-dimensional image, from which the linear target can be acquired with the first three screening steps; for data consisting of a plurality of two-dimensional tomographic images, however, a spatial linear target cannot be obtained directly by the screening process described above. The most common scanning device of this kind in the prior art is a CT tomography apparatus, which scans the whole or part of an object slice by slice and yields equally spaced sectional images. To obtain the position of a spatial target, a three-dimensional model is usually built from the tomographic images, but the spatial linear target can also be obtained quickly, without three-dimensional modelling, by computing layer by layer and synthesizing the results at the end.
Because adjacent tomographic layers are separated by the scanning interval, a continuous linear target cannot be obtained simply by joining segments end to end. Instead the computation proceeds layer by layer, associating the linear targets of adjacent layers: linear targets whose head and tail pixel points in the upper and lower layers are closer than a certain threshold are judged to be one continuous target, and that threshold can be set according to the interlayer spacing.
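A minimal Python sketch of this adjacent-layer linking rule; the per-layer segment representation (head/tail coordinate pairs) and the helper name link_layers are assumptions made for the example, not part of the patent.

import numpy as np

def link_layers(segments_per_layer, threshold_b):
    """segments_per_layer: one list per layer of ((x1, y1), (x2, y2)) segments already
    identified as linear targets. Adjacent layers are treated as coplanar, and a segment
    whose head lies within threshold B of the previous tail is joined to the same chain."""
    chains = [[seg] for seg in segments_per_layer[0]]
    for layer in segments_per_layer[1:]:
        for chain in chains:
            tail = np.asarray(chain[-1][1], dtype=float)  # tail pixel of the last linked segment
            best, best_d = None, threshold_b
            for seg in layer:
                d = np.linalg.norm(np.asarray(seg[0], dtype=float) - tail)
                if d < best_d:                            # head/tail distance below threshold B
                    best, best_d = seg, d
            if best is not None:
                chain.append(best)
    return chains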
With reference to the second implementation manner of the first aspect, the present invention provides a fourth implementation manner of the first aspect, where the feature scan image comprises a plurality of two-dimensional tomographic images obtained by scanning sequentially at equal intervals. After the straight-line target of each two-dimensional tomographic image is obtained, proceeding from the starting layer to the end layer, the end point of the straight-line target in the next layer whose line vector deviates from that of the adjacent layer by less than a set threshold B is used in turn as an update point; if no straight-line target satisfies the threshold B in a layer, the point obtained by connecting the update end point of the previous layer with the starting point of the starting layer and extending that line to the edge of the next layer is used as the end point. Finally, the starting point of the corresponding straight-line target in the starting layer is connected with the last update point to form the spatial straight-line target.
It should be noted that, since a spatial straight-line target is being determined, all the accurate straight-line targets in all two-dimensional tomographic images can first be determined by the three-step screening process combined with the threshold A and the threshold B, and the continuous, smooth spatial straight-line target is then computed and assembled layer by layer. Because line segments may be missing between adjacent slices during real processing, a virtual straight-line target for such a layer can only be inferred from the extension trend of the updated straight line of the previous layer, and the related straight-line target of the next layer is then computed and connected from it. In the end this yields a nearly straight curve with some curvature, reflecting the bending that a real straight-line target inside the object may undergo.
With reference to the fourth implementation manner of the first aspect, the present invention provides a fifth implementation manner of the first aspect, where all determined update points are connected to the starting point to form a plurality of spatial line segments, which serve to identify the change path.
With reference to the fourth implementation manner of the first aspect, the present invention provides a sixth implementation manner of the first aspect, and the line detection algorithm is an LSD line detection algorithm.
The LSD line detection algorithm first computes the gradient magnitude and direction at every point in the image, then groups adjacent points whose gradient direction changes little into connected regions, then decides according to the rectangularity of each region and certain rules whether the region should be split, producing a number of regions with higher rectangularity, and finally refines and screens all the generated regions, keeping those that satisfy the conditions as the final line detection result. The advantages of the algorithm are its high detection speed, the absence of parameter tuning, and the use of an error-control method to improve the accuracy of line detection.
The LSD algorithm obtains a set of candidate line pixels through local analysis of the image, then verifies it by testing hypothesized parameters, and combines the pixel set with error control so as to adaptively limit the number of false detections. In general, the most basic idea for detecting a straight line in an image is to detect the set of pixel points with large gradient changes.
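For reference, OpenCV exposes an LSD implementation through cv2.createLineSegmentDetector (its availability depends on the OpenCV build, as the detector was dropped from some releases for licensing reasons); the input file name below is chosen arbitrarily for the example.

import cv2

img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)   # hypothetical tomographic slice
lsd = cv2.createLineSegmentDetector()
lines, widths, precisions, nfas = lsd.detect(img)      # each detected line is (x1, y1, x2, y2)
vis = lsd.drawSegments(img.copy(), lines)
cv2.imwrite("lsd_result.png", vis)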
With reference to the fourth implementation manner of the first aspect, the present invention provides a seventh implementation manner of the first aspect, and the line detection algorithm is a hough transform line detection algorithm.
First, a tomographic image refers to image information obtained by precisely scanning a living body or a device with special equipment; this information consists of pixel point data covering the entire scanned area containing the living body or device. Each pixel point contains color data, used to decide whether the point is a valid point, and coordinate data, used to obtain the spatial position of the particle.
The coordinate data are acquired with reference to a single, fixed coordinate system. The position of each particle is represented by several pixel points in the tomographic image; the coordinate mean of these pixel points is computed to obtain the centroid coordinate, and the centroid coordinate is taken as the position information of the radioactive particle.
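A one-line NumPy illustration of this centroid rule; the sample coordinates are invented purely for the example.

import numpy as np

pixel_coords = np.array([[12, 40, 3], [13, 41, 3], [12, 41, 3]], dtype=float)  # (x, y, slice) samples
centroid = pixel_coords.mean(axis=0)  # coordinate mean taken as the particle position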
With reference to the fourth implementation manner of the first aspect, the present invention provides an eighth implementation manner of the first aspect, wherein, in the specific process of performing edge detection on a two-dimensional tomographic image, Gaussian smoothing is first applied to the feature scan image, the Gaussian smoothing operator being as follows:
(Gaussian smoothing kernel image)
The Sobel operator is then used to compute the gradient in the x direction and the y direction of the image respectively; the operators are as follows:
Sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
Sobel_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
and then calculating the gradient change of the pixels in each direction on the image, specifically as follows:
dx = f(x, y) * Sobel_x(x, y)
dy = f(x, y) * Sobel_y(x, y);
then calculating the gradient amplitude of each pixel point by the following formula;
M(x, y) = √(dx² + dy²)
and the gradient direction of each pixel is obtained:
θ_M = arctan(dy / dx);
and then carrying out non-maximum value inhibition according to the angle, and finally adopting a double-threshold algorithm to detect and connect edges.
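These steps can be sketched in Python with OpenCV as follows; the kernel size, sigma and the two thresholds are assumptions, and cv2.Canny is used as a stand-in for the non-maximum suppression and double-threshold stages rather than as a re-implementation of the procedure above.

import cv2
import numpy as np

def edge_map(img):
    smoothed = cv2.GaussianBlur(img, (5, 5), sigmaX=1.4)   # Gaussian smoothing
    dx = cv2.Sobel(smoothed, cv2.CV_64F, 1, 0, ksize=3)    # Sobel gradient in x
    dy = cv2.Sobel(smoothed, cv2.CV_64F, 0, 1, ksize=3)    # Sobel gradient in y
    magnitude = np.sqrt(dx ** 2 + dy ** 2)                 # gradient amplitude
    direction = np.arctan2(dy, dx)                         # gradient direction
    # Non-maximum suppression and double-threshold edge linking, delegated to cv2.Canny.
    edges = cv2.Canny(smoothed, 50, 150)
    return magnitude, direction, edges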
The invention has the beneficial effects that:
according to the invention, linear targets which cannot be clearly seen and positioned in subsequent visual display due to insufficient definition or offset in the feature scanning images can be processed through multiple steps, so that accurate pixel point position information contained in the linear targets can be determined, an image group formed by multiple two-dimensional tomography images can be processed, a space linear target can be obtained, and accurate target positioning data can be provided for subsequent visual display or other processes.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
The invention is further explained below with reference to the drawings and the specific embodiments.
The following detailed description of the embodiments of the present application, presented with reference to the figures, is not intended to limit the scope of the claimed application; it merely represents selected embodiments of the application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Example 1:
This embodiment discloses a method for processing a two-dimensional feature scan image acquired by tomography of a non-living target object. The linear targets in the image are screened out through multiple screening and noise-reduction steps and are highlighted in the subsequent visual display, making them easy to view and locate.
Specifically, as shown in fig. 1, a plurality of strip-shaped structures are inserted into the target object, and the strip-shaped structures are inserted inwards at fixed angles at fixed positions according to design requirements.
The scanning plane of the tomography equipment is made coplanar with the axis of the inserted bar-shaped structure, so that each two-dimensional feature scan image obtained contains at least one complete bar-shaped structure and carries the most informative cross-section.
The image is processed by a conventional image noise reduction algorithm preset in the scanning equipment, and pixels meeting the prior condition are screened out to be used as an initial processing object.
The first step is as follows: firstly, a pixel set A with edge characteristics is obtained through an edge detection algorithm, wherein the set A comprises a plurality of pixel domains, but each pixel domain is not only a linear object, but also comprises other objects with obvious boundary characteristics.
The second step: the non-linear objects in set A are eliminated by a line detection algorithm and all pixel points of the linear objects are retained, forming a set B. At this stage the linear objects in set B are not yet matched to actual objects and may still include interfering linear objects belonging to the target object itself.
The third step: the planned position and angle of the bar-shaped structure are used to compute the expected insertion length and the vector L in the corresponding two-dimensional image coordinates; a threshold A2 is then set according to the length and a threshold A1 according to the vector L. Objects in set B whose length is smaller than the threshold A2 and whose angular deviation from the vector L is less than the threshold A1 are retained and matched, thereby acquiring the actual position information of the bar-shaped structure.
Example 2:
This embodiment is likewise a method for processing a two-dimensional feature scan image acquired by tomography of a non-living target object: the linear targets in the image are screened out through multiple screening and noise-reduction steps and are highlighted in the subsequent visual display, making them easy to view and locate.
Implementation procedure
The first step: Canny edge detection
1. Gaussian smoothing
The purpose of Gaussian smoothing is to construct a convolution kernel that satisfies a two-dimensional Gaussian distribution. The coordinate of the centre element of the matrix is set to (0, 0), so that every other element has a corresponding coordinate; a standard deviation sigma is then chosen (the larger the value, the stronger the smoothing), the weight at each coordinate is computed with the two-dimensional Gaussian distribution formula, and the resulting kernel is finally applied as a mask over the whole image, i.e. each pixel is convolved with it, thereby smoothing the image.
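A short sketch of constructing such a kernel in Python with NumPy; the 5x5 size and the sigma value are assumptions, since the text only fixes the construction rule.

import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Build a normalized 2D Gaussian kernel with the matrix centre taken as (0, 0)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return kernel / kernel.sum()   # normalize so the weights sum to 1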
2. Sobel operator: gradient amplitude and direction
The Sobel operator is a discrete differentiation operator used to approximate the gradient of the image brightness function; evaluating the operator at any point of the image yields the gradient vector at that point.
The gradient change of each pixel in the x and y directions of the image is computed, where f(a, b) denotes the gray value at the point (a, b); the gradient amplitude of each pixel point is then calculated, from which the gradient direction of each pixel is obtained.
3. Non-maxima suppression by angle
Let X be a pixel of the image. Along the gradient direction computed in the previous step, the value of X is compared with its two neighbouring pixel values; if X is the maximum, its value is kept, otherwise it is set to 0. Non-maximum values are thus suppressed, only the points with the largest local gradient are retained, and the edges are thinned.
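A simplified Python sketch of this suppression step; it quantizes the gradient direction into four sectors and skips the one-pixel border, which are implementation shortcuts rather than anything required by the text above.

import numpy as np

def non_max_suppression(magnitude, direction):
    """Keep a pixel only if it is the maximum along its (quantized) gradient direction."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = np.rad2deg(direction) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:        # roughly horizontal gradient
                n1, n2 = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                    # roughly 45-degree gradient
                n1, n2 = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                   # roughly vertical gradient
                n1, n2 = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                             # roughly 135-degree gradient
                n1, n2 = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= n1 and magnitude[i, j] >= n2:
                out[i, j] = magnitude[i, j]   # X is a local maximum, keep it
    return out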
4. Double-threshold detection and edge linking
Two thresholds are chosen, a high threshold TH and a low threshold TL, generally with TH:TL = 2:1 or 3:1. Points below the low threshold are discarded and set to 0; points above the high threshold are classified as edge points and set to 1 or 255; the remaining points become edge points (set to 1 or 255) only if they are connected to a pixel above TH.
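A compact sketch of this double-threshold linking using SciPy's connected-component labelling; the use of scipy.ndimage is an implementation choice for the example, not something the text specifies.

import numpy as np
from scipy import ndimage

def hysteresis_threshold(suppressed, tl, th):
    """Strong pixels (>= TH) are edges; weak pixels (>= TL) survive only if their
    connected component also contains a strong pixel."""
    strong = suppressed >= th
    weak = suppressed >= tl
    labels, n = ndimage.label(weak)              # connected components of candidate pixels
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True       # components touching a strong pixel
    keep[0] = False                              # background label
    return keep[labels].astype(np.uint8) * 255   # edge points set to 255, others to 0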
The second step: Hough line detection
The edges of the image to be examined have been extracted in the previous step and a binary image generated; the object is now detected on it by Hough line detection. Straight lines meeting the prior condition are screened out layer by layer with the Hough line detection method: the end point of the line detected in the first layer is taken as the starting point for line detection in the next layer, all lines detected in the next layer are screened within a certain angular range, and this step is repeated until all lines in the whole three-dimensional region spanned by the needle track have been detected. The lines detected within the angular range are then transformed onto a vertical plane, and needle clustering within that common vertical plane is performed over the range.
The principle of Hough detection:
two points in image space define a straight line which is mapped to the parameter space (i.e. the coordinate system with r and theta as coordinate axes) in the following way. Points in image space correspond to sinusoids in parameter space, and the intersections of sinusoids in parameter space correspond to lines in image space. It is possible to infer that the more intersections in a region in the parameter space, the more likely the points in the image space corresponding to the sinusoids in the region are on a straight line.
Accordingly, θ is divided into four intervals over its angular range and r is divided into a number of intervals of a chosen size (the maximum of r being the length of the image diagonal). Each point of the previously generated binary image is then traversed; whenever an edge point is met, i.e. a point with value 1 or 255, an r is computed for each θ in every range, the interval that r falls into is found and 1 is added to the corresponding cell, and the next θ is processed. The traversal continues until every case of every edge point has been computed and voted, finally producing a table; the points falling in the highest-scoring cell are counted and can be considered to belong to one straight line in image space.
By this method, all lines in the image that approximate straight lines are obtained.
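The voting procedure just described can be sketched in Python as follows; the number of θ bins and the one-pixel r resolution are assumptions for the example.

import numpy as np

def hough_accumulate(edge_img, n_theta=180):
    """Vote every edge pixel into an (r, theta) accumulator; peaks correspond to lines."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))          # r spans up to the image diagonal
    thetas = np.deg2rad(np.arange(n_theta))
    accumulator = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                # edge points, i.e. values 1 or 255
    for x, y in zip(xs, ys):
        r = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        accumulator[r, np.arange(n_theta)] += 1  # one vote per theta bin
    return accumulator, thetas, diag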
The third step: sector area filter line
The approximate planned position of the bar-shaped structure is acquired first, so that the detection range can be restricted and other noise lines filtered out.
Let the starting point of the prior straight-line object be A(x1, y1) and its end point be B(x2, y2); the direction vector is then AB = (x2 − x1, y2 − y1).
1. The direction vector L of each straight-line object detected in all the images is computed, together with the angle between AB and L and the length of the straight-line object; if the angle is less than 10 degrees and the length is greater than a certain threshold, the object is regarded as a straight-line object meeting the requirement (a code sketch of steps 1-3 follows this list);
2. The first layer in which a qualifying straight line is detected is recorded as layer S and the last such layer as layer E, and the sector-area filtering is carried out between layer S and layer E.
3. The starting point found in layer S is first taken as the starting point of the entire needle. Then, if the angle between the line vector of layer S+1 and that of layer S is within 5 degrees, the bar-shaped structure is considered to appear on both layers; the end point of the qualifying line in layer S+1 is updated to be the end point of the whole bar-shaped structure, after which layer S+1 is used to filter layer S+2, and so on until layer E is reached.
4. Finally, the starting point is connected with all the candidate end points, forming an in-plane curve that is close to a straight line but carries some curvature, which accounts for the bending deformation the bar-shaped structure may undergo during insertion.
5. Meanwhile, the region around the final end point is searched for its brightest value to determine the position of the tip, and the constructed space vector can be used to judge the deviation from the planned insertion direction.
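A Python sketch of steps 1-3 above, simplified so that the first qualifying layer is matched against AB with the 10-degree tolerance and each following layer against the previous layer's vector with the 5-degree tolerance; the per-layer data layout and the function names are assumptions made for the example.

import numpy as np

def angle_deg(u, v):
    cos_a = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def filter_needle(layers, ab, min_len, first_tol=10.0, follow_tol=5.0):
    """layers: per-slice lists of segments ((x1, y1), (x2, y2)); ab is the prior
    direction vector AB of the planned bar-shaped structure."""
    ab = np.asarray(ab, dtype=float)
    start, end_points, prev_v = None, [], None
    for segs in layers:
        for p1, p2 in segs:
            v = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
            if np.linalg.norm(v) < min_len:
                continue                         # length below the required threshold
            tol = first_tol if start is None else follow_tol
            ref = ab if start is None else prev_v
            if angle_deg(v, ref) <= tol:
                if start is None:
                    start = p1                   # starting point of the whole needle (layer S)
                end_points.append(p2)            # candidate end / update point
                prev_v = v
                break                            # at most one matching segment per layer
    return start, end_points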
The present invention is not limited to the above-described alternative embodiments, and various other forms of products can be obtained by anyone in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the claims; the description is to be interpreted accordingly.

Claims (9)

1. A linear object detection method based on feature scanning images, which obtains the actual position of a corresponding linear object by processing the corresponding digital image obtained by feature scanning, characterized in that: the method comprises obtaining a pixel point set A meeting a prior condition in the digital image, screening the set A through an edge detection algorithm to obtain a more accurate set B, screening the set B through a line detection algorithm so that pixel points belonging to the same linear object are screened out and classified, setting a threshold A according to the planned position of the linear object, and judging a classified linear object whose position parameter is smaller than the threshold A to be the corresponding linear object.
2. The linear object detection method based on the feature scan image as claimed in claim 1, wherein: the linear target is a straight target, and the threshold value A is set according to the direction vector L of the linear target.
3. The linear object detection method based on the feature scan image as claimed in claim 2, wherein: the threshold A comprises a threshold A1 set according to the direction vector L of the straight-line target and a threshold A2 set according to the length of the straight line, and a candidate that simultaneously satisfies the threshold A1 and the threshold A2 is determined to be the straight-line target.
4. The linear object detection method based on the feature scan image according to any one of claims 1-3, wherein: the feature scan image comprises a plurality of two-dimensional tomographic images obtained by scanning sequentially at equal intervals, each two-dimensional tomographic image is processed in turn to obtain its linear objects, a threshold B is set, and linear objects in adjacent layers whose head and tail pixel points, when the layers are treated as coplanar, are separated by less than the threshold B are connected and belong to the same spatial linear target.
5. The linear object detection method based on the feature scan image according to claim 3, wherein: the feature scan image comprises a plurality of two-dimensional tomographic images obtained by scanning sequentially at equal intervals; after the straight-line target of each two-dimensional tomographic image is obtained, proceeding from the starting layer to the end layer, the end point of the straight-line target in the next layer whose line vector deviates from that of the adjacent layer by less than a set threshold B is used in turn as an update point, and if no straight-line target satisfies the threshold B, the point obtained by connecting the update end point of the previous layer with the starting point of the starting layer and extending to the edge of the next layer is used as the end point; finally, the starting point of the corresponding straight-line target in the starting layer is connected with the last update point to form the spatial straight-line target.
6. The linear object detection method based on the feature scan image as claimed in claim 5, wherein: and connecting all the determined update points with the starting point to form a plurality of space line segments as the change path identification.
7. The linear object detection method based on the feature scan image as claimed in claim 5, wherein: the line detection algorithm is an LSD line detection algorithm.
8. The linear object detection method based on the feature scan image as claimed in claim 5, wherein: the straight line detection algorithm is a Hough transform straight line detection algorithm.
9. The linear object detection method based on the feature scan image as claimed in claim 5, wherein the specific process of edge detection on the two-dimensional tomographic image is as follows: first, Gaussian smoothing is applied to the feature scan image, the Gaussian smoothing operator being as follows:
(Gaussian smoothing kernel image)
the Sobel operator is then used to compute the gradient in the x direction and the y direction of the image respectively; the operators are as follows:
Sobel_x = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
Sobel_y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
and then calculating the gradient change of the pixels in each direction on the image, specifically as follows:
dx = f(x, y) * Sobel_x(x, y)
dy = f(x, y) * Sobel_y(x, y);
then calculating the gradient amplitude of each pixel point by the following formula;
M(x, y) = √(dx² + dy²)
and the gradient direction of each pixel is obtained:
θ_M = arctan(dy / dx);
and then carrying out non-maximum value inhibition according to the angle, and finally adopting a double-threshold algorithm to detect and connect edges.
CN202110529857.2A 2021-05-14 2021-05-14 Linear object detection method based on characteristic scanning image Active CN113269732B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110529857.2A CN113269732B (en) 2021-05-14 2021-05-14 Linear object detection method based on characteristic scanning image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110529857.2A CN113269732B (en) 2021-05-14 2021-05-14 Linear object detection method based on characteristic scanning image

Publications (2)

Publication Number Publication Date
CN113269732A 2021-08-17
CN113269732B CN113269732B (en) 2024-03-29

Family

ID=77231076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110529857.2A Active CN113269732B (en) 2021-05-14 2021-05-14 Linear object detection method based on characteristic scanning image

Country Status (1)

Country Link
CN (1) CN113269732B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6259809B1 (en) * 1997-08-29 2001-07-10 Advantest Corporation System and method for recognition of image information
CN102156884A (en) * 2011-04-25 2011-08-17 中国科学院自动化研究所 Straight segment detecting and extracting method
US20190197340A1 (en) * 2016-01-15 2019-06-27 Wuhan Wuda Zoyon Science And Technology Co., Ltd. Object surface deformation feature extraction method based on line scanning three-dimensional point cloud
CN106127778A (en) * 2016-06-27 2016-11-16 安徽慧视金瞳科技有限公司 A kind of line detection method for projecting interactive system
US20200294258A1 (en) * 2019-03-13 2020-09-17 Fujitsu Limited Image processing apparatus and image processing method
CN110428433A (en) * 2019-07-02 2019-11-08 西华师范大学 A kind of Canny edge detection algorithm based on local threshold
CN111241911A (en) * 2019-12-11 2020-06-05 华侨大学 Self-adaptive lane line detection method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113724248A (en) * 2021-09-16 2021-11-30 北京航空航天大学 Medical image needle track detection method and device
CN113724248B (en) * 2021-09-16 2024-03-26 北京航空航天大学 Medical image needle track detection method and device

Also Published As

Publication number Publication date
CN113269732B (en) 2024-03-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant