CN113673515A - Computer vision target detection algorithm - Google Patents

Computer vision target detection algorithm

Info

Publication number
CN113673515A
CN113673515A
Authority
CN
China
Prior art keywords
edge
image
target image
points
target
Prior art date
Legal status
Pending
Application number
CN202110959705.6A
Other languages
Chinese (zh)
Inventor
罗潇
丁雷青
李晓莉
彭勇
王建军
高敬贝
吴奕锴
於锋
Current Assignee
Nantong University
State Grid Shanghai Electric Power Co Ltd
Original Assignee
Nantong University
State Grid Shanghai Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by Nantong University and State Grid Shanghai Electric Power Co Ltd
Priority to CN202110959705.6A
Publication of CN113673515A

Abstract

The invention relates to a computer vision target detection algorithm for detecting a target object in a target image, comprising the following steps: S1, graying the source image and the target image to generate a grayscale source image and a grayscale target image; S2, extracting edge information from the grayscale source image to generate an edge source image; S3, extracting edge information from the grayscale target image to generate an edge target image; S4, suppressing noise in the edge target image to generate a filtered edge target image; S5, extracting feature points from the edge source image and the filtered edge target image and matching them to obtain matching points; S6, eliminating erroneous points among the matching points to obtain accurate matching points and the position of the target object in the target image. The method improves on the feature-point matching of the prior art, raises recognition accuracy, offers stronger robustness and higher running speed, and is suitable for defect detection on images of transmission-line U-shaped suspension loops.

Description

Computer vision target detection algorithm
Technical Field
The invention relates to the field of image processing and computer vision, and in particular to a computer vision target detection algorithm applied to unmanned aerial vehicle inspection of power systems.
Background
The U-shaped suspension loop in a power transmission line is an important component of the power system, used for changing angles and lengthening tension insulator strings. It is usually exposed to a complex outdoor environment where many uncertain factors can damage it, so regular inspection is essential. The traditional approach is manual inspection, but this is strongly affected by weather, terrain, human factors, and other variables, and its cost is high. In recent years unmanned aerial vehicle technology has developed rapidly, and unmanned aerial vehicle inspection has become one of the standard means of power transmission line inspection.
In unmanned aerial vehicle inspection, a small high-definition camera is mounted on the unmanned aerial vehicle, which flies along the power transmission line and captures high-definition pictures; the images are then processed and analyzed to find equipment defects. Subsequent defect detection is possible only if the target object and its position in the image are accurately identified, so a key step is to accurately recognize the U-shaped suspension loop in the image and locate its specific position.
At present, a mainstream algorithm for target detection is the Speeded-Up Robust Features (SURF) algorithm, which offers high recognition speed and strong real-time performance but recognizes poorly and loses accuracy when many noise points are present. Transmission line inspection takes place in a complex outdoor environment with frequent weather and natural-background interference, so the SURF algorithm produces many erroneous matches, its recognition rate is low, and it cannot meet the requirements of unmanned aerial vehicle inspection.
Disclosure of Invention
The invention aims to provide a computer vision target detection algorithm that addresses the problems of the currently adopted SURF algorithm in feature point matching, improves the recognition rate, and realizes fast and accurate recognition and detection.
In order to achieve the purpose, the invention is realized by the following technical scheme:
A computer vision target detection algorithm for detecting a target object in a target image, comprising the steps of:
S1, graying both a source image containing only the target object and the target image to be detected, generating a grayscale source image and a grayscale target image;
S2, extracting edge information from the grayscale source image to generate an edge source image;
S3, extracting edge information from the grayscale target image to generate an edge target image;
S4, suppressing noise in the edge target image to generate a filtered edge target image;
S5, extracting feature points from the edge source image and the filtered edge target image and matching them to obtain matching points;
S6, eliminating erroneous points among the matching points of the edge source image and the filtered edge target image to obtain accurate matching points and the position of the target object in the target image.
Preferably, step S2 is implemented based on the Canny edge detector (a multi-stage edge detector), and includes the steps of:
S21, smoothing the grayscale source image with a Gaussian filter to generate a smoothed grayscale source image;
S22, calculating the image gradient amplitude and direction of the smoothed grayscale source image using finite differences of first-order partial derivatives;
S23, performing non-maximum suppression on the image gradient amplitude of the smoothed grayscale source image: the gradient amplitude at each edge point is computed, local maximum gradient values are retained and the others suppressed, thereby removing non-edge points and thinning multi-pixel-wide edges to single-pixel width;
S24, performing image edge detection and connection on the smoothed grayscale source image with a double-threshold algorithm and hysteresis boundary tracking, converting it into the edge source image.
Preferably, step S3 is implemented based on the Roberts edge detection operator (an operator that detects edges using local differences). It uses a 2 × 2 pixel template and approximates the gradient amplitude by the differences between diagonally adjacent pixels to detect edge lines, with the calculation formula:

$$g(x,y)=\sqrt{\left[s(x,y)-s(x+1,y+1)\right]^{2}+\left[s(x+1,y)-s(x,y+1)\right]^{2}}$$

where x and y are pixel coordinates, s(x, y) is the unprocessed image pixel, and g(x, y) is the processed image pixel.
Preferably, step S4 is implemented based on a median filtering algorithm as follows: the gray value of each pixel of the edge target image is replaced by the median of the gray values of all pixels in its 5 × 5 pixel neighborhood, bringing the gray value close to the true value; every pixel in the edge target image is traversed, thereby eliminating isolated noise points.
Preferably, step S5 is implemented based on the SURF algorithm and includes the steps of:
S51, taking the edge source image and the filtered edge target image respectively as input images and constructing their Hessian matrices (a Hessian matrix is a square matrix of second-order partial derivatives of a multivariate function, describing the function's local curvature);
S52, taking the edge source image and the filtered edge target image respectively as input images and constructing their scale spaces;
S53, taking the edge source image and the filtered edge target image respectively as input images and determining their feature points;
S54, taking the edge source image and the filtered edge target image respectively as input images and determining the main direction of each feature point;
S55, taking the edge source image and the filtered edge target image respectively as input images and generating their feature point descriptors;
S56, taking the edge source image and the filtered edge target image as input images and determining the matching points of the two images by evaluating the Euclidean distance and the trace of the Hessian matrix.
Preferably, step S6 is implemented based on the RANSAC algorithm (random sample consensus, a classical data filtering and parameter fitting algorithm) and includes the steps of:
S61, initializing: setting the iteration count d to 0, the iteration upper limit k to 0, the optimal set of non-erroneous points to U_best, the temporary set of non-erroneous points to U, and the initial values of U_best and U to empty;
S62, randomly selecting four groups of data from the matching points and establishing a data model M;
S63, calculating the projection error of every matching point against M, and if the error is smaller than a threshold t, adding the matching point to the inlier set U;
S64, comparing the number of matching points in U and U_best, and if U contains more matching points than U_best, updating U_best to U and updating the value of the iteration upper limit k;
S65, comparing the iteration count d with the iteration upper limit k:
if d ≤ k, incrementing d (d = d + 1) and returning to step S62;
otherwise, exiting the iteration;
S66, U_best being the set of accurate matching points with erroneous points removed; connecting the corresponding accurate matching points of the edge source image and the filtered edge target image in U_best yields the position of the target object in the filtered edge target image, and hence the position of the target object in the target image.
In summary, compared with the prior art, the computer vision target detection algorithm provided by the invention has the following beneficial effects:
1. processing steps of edge detection and noise suppression by filtering are added before the feature points are extracted, so noise is better eliminated in the grayscale domain, interference with later identification of the target object is reduced, and high-accuracy matching is achieved;
2. after the feature point extraction step, erroneous points are eliminated with the RANSAC algorithm, so the position of the target object can be detected quickly and accurately;
3. the method achieves a high recognition rate with a simple principle, low computational cost, higher accuracy, stronger robustness, and high running speed, and is suitable for real-time detection.
Drawings
FIG. 1 is a flow chart of a computer vision target detection algorithm of the present invention;
FIG. 2 is a schematic diagram of the SURF algorithm of the present invention for determining feature points in a scale space;
FIG. 3 is a schematic diagram of the SURF algorithm for determining the principal directions of feature points according to the present invention;
FIG. 4 is a schematic diagram of the SURF algorithm generating feature point descriptors in accordance with the present invention.
Detailed Description
The computer vision target detection algorithm provided by the invention is further explained in detail below with reference to the drawings and the detailed description; its advantages and features will become clearer from the following description. It should be noted that the drawings are simplified and not to precise scale, and serve only to conveniently and clearly assist in describing the embodiments of the present invention, not to limit them; any structural modification, change in proportional relationship, or adjustment in size that does not affect the function and achievable purpose of the invention falls within the scope of its technical content.
It is to be noted that, in the present invention, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
With reference to FIGS. 1 to 4, the present embodiment provides a computer vision target detection algorithm used during unmanned aerial vehicle inspection of power transmission lines to identify a target object (in this embodiment, a U-shaped suspension loop) and locate its position in a target image. The algorithm operates on a source image and a target image: the source image is a single image containing only the target object to be recognized, the target image is an image captured by the unmanned aerial vehicle, and through computation the target object of the source image is recognized and located in the target image.
The specific steps of the algorithm provided by this embodiment are shown in FIG. 1:
1. Reading in the source image and the target image and graying both, expressing each pixel of the original color image with a gray value in the range 0-255; this step prepares for subsequent operations such as image edge extraction and feature point extraction. The calculation formula is:
Gray=R×0.299+G×0.587+B×0.114;
where Gray is the gray value of a pixel after conversion, and R, G, B are the brightness values of the pixel's red, green, and blue channels before conversion.
The grayscale source image and the grayscale target image are generated by this operation.
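As a non-authoritative illustration, the graying step can be sketched in Python with OpenCV and NumPy; the file names source.png and target.png are placeholder assumptions, not taken from the patent.

```python
import cv2
import numpy as np

def to_gray(bgr):
    # Gray = R*0.299 + G*0.587 + B*0.114; OpenCV loads images in
    # B, G, R channel order, hence the reversed weight vector.
    weights = np.array([0.114, 0.587, 0.299])
    return np.clip(bgr @ weights, 0, 255).astype(np.uint8)

source_gray = to_gray(cv2.imread("source.png"))  # image containing only the target object
target_gray = to_gray(cv2.imread("target.png"))  # image captured by the unmanned aerial vehicle
```

cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) applies the same weights and could be used instead.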
2. Extracting edge information from the grayscale source image based on the Canny edge detection operator to generate the edge source image:
Edge information refers to locally discontinuous image features, i.e., parts of a local image region where the brightness changes markedly; for a grayscale image, these are parts where the gray value changes rapidly within a small neighborhood. The calculation steps are as follows:
(1) Smoothing the grayscale source image with a Gaussian filter to generate a smoothed grayscale source image;
(2) calculating the image gradient amplitude and direction of the smoothed grayscale source image using finite differences of first-order partial derivatives;
(3) performing non-maximum suppression on the image gradient amplitude of the smoothed grayscale source image: the gradient amplitude at each edge point is computed, local maximum gradient values are retained and the others suppressed, thereby removing non-edge points and thinning multi-pixel-wide edges to single-pixel width;
(4) performing image edge detection and connection on the smoothed grayscale source image with a double-threshold algorithm and hysteresis boundary tracking, converting it into the edge source image.
The edge source image is generated by this operation.
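The four sub-steps above correspond closely to OpenCV's built-in pipeline; a minimal sketch follows, where the 5 × 5 Gaussian kernel and the 50/150 thresholds are assumed values the patent does not fix.

```python
import cv2

def canny_edges(gray, low=50, high=150):
    # (1) Gaussian smoothing of the grayscale image.
    smoothed = cv2.GaussianBlur(gray, (5, 5), 0)
    # (2)-(4) cv2.Canny computes first-order gradients, applies
    # non-maximum suppression, then double-threshold hysteresis tracking.
    return cv2.Canny(smoothed, low, high)

edge_source = canny_edges(source_gray)
```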
3. Extracting edge information from the grayscale target image based on the Roberts edge detection operator to generate the edge target image:
The main purpose of this step is to simplify the image information by representing the content of the grayscale target image with edge lines, preparing for the subsequent feature point extraction. The calculation method is as follows:
Using a 2 × 2 pixel template, the gradient amplitude is approximated by the differences between diagonally adjacent pixels to detect and connect edge lines, according to the formula:

$$g(x,y)=\sqrt{\left[s(x,y)-s(x+1,y+1)\right]^{2}+\left[s(x+1,y)-s(x,y+1)\right]^{2}}$$

where x and y are pixel coordinates, s(x, y) is the unprocessed image pixel, and g(x, y) is the processed image pixel.
The edge target image is generated by this operation.
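OpenCV provides no built-in Roberts operator, so the formula above can be sketched directly in NumPy; this is an illustrative implementation, not the patent's reference code.

```python
import cv2
import numpy as np

def roberts_edges(gray):
    f = gray.astype(np.float64)
    # The two diagonal differences of the 2 x 2 Roberts cross template.
    kx = np.array([[1, 0], [0, -1]], dtype=np.float64)  # s(x,y) - s(x+1,y+1)
    ky = np.array([[0, 1], [-1, 0]], dtype=np.float64)  # s(x+1,y) - s(x,y+1)
    gx = cv2.filter2D(f, -1, kx)
    gy = cv2.filter2D(f, -1, ky)
    # g(x,y) = sqrt(gx^2 + gy^2), clipped back to the 8-bit range.
    return np.clip(np.sqrt(gx ** 2 + gy ** 2), 0, 255).astype(np.uint8)

edge_target = roberts_edges(target_gray)
```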
4. Suppressing the noise of the edge target image based on a median filtering algorithm to generate the filtered edge target image:
The gray value of each pixel of the edge target image is replaced by the median of the gray values of all pixels in its 5 × 5 pixel neighborhood, bringing the gray value close to the true value; every pixel of the edge target image is traversed, thereby eliminating isolated noise points.
The filtered edge target image is generated by this operation.
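In OpenCV, the described 5 × 5 median filtering is a single call; a minimal sketch:

```python
import cv2

# Replace each pixel by the median of its 5 x 5 neighborhood,
# suppressing isolated noise points in the edge target image.
filtered_edge_target = cv2.medianBlur(edge_target, 5)
```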
5. Extracting feature points from the edge source image and the filtered edge target image based on the SURF algorithm and matching them to obtain the matching points of the two images. The calculation steps are as follows:
(1) Taking the edge source image and the filtered edge target image respectively as input images and constructing their Hessian matrices:
The Hessian matrix is a square matrix of second-order partial derivatives of a multivariate function and describes the function's local curvature; a Hessian matrix can be constructed at every pixel of an input image. To give the extracted feature points scale invariance, the input image is Gaussian-filtered before the Hessian matrices are constructed, and to increase speed a box filter is used to approximate the Gaussian filter. After filtering, the Hessian matrix of an arbitrary pixel (x, y) of the input image is:
$$H(x,\sigma)=\begin{bmatrix}L_{xx}(x,\sigma) & L_{xy}(x,\sigma)\\ L_{xy}(x,\sigma) & L_{yy}(x,\sigma)\end{bmatrix}$$

where σ denotes the filter scale, L_xx(x, σ) is the second derivative of the filtered image in the x direction, L_yy(x, σ) is the second derivative in the y direction, and L_xy(x, σ) is the mixed derivative obtained by differentiating once in the x direction and once in the y direction;
The matrix discriminant det(H) is then obtained as:
det(H) = L_xx(x, σ) × L_yy(x, σ) − (0.9 × L_xy(x, σ))²
where 0.9 is a weighting coefficient that balances the error introduced by approximating the Gaussian filter with a box filter.
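For intuition, the determinant response can be sketched with true Gaussian second derivatives, as below; SURF itself approximates them with box filters over integral images for speed, and the use of scipy.ndimage here is an assumption for illustration.

```python
import numpy as np
from scipy import ndimage

def hessian_det(gray, sigma):
    # Gaussian second derivatives; axis 0 is y (rows), axis 1 is x (columns).
    f = gray.astype(np.float64)
    Lxx = ndimage.gaussian_filter(f, sigma, order=(0, 2))
    Lyy = ndimage.gaussian_filter(f, sigma, order=(2, 0))
    Lxy = ndimage.gaussian_filter(f, sigma, order=(1, 1))
    # det(H) with the 0.9 weight balancing the box-filter approximation error.
    return Lxx * Lyy - (0.9 * Lxy) ** 2
```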
(2) Taking the edge source image and the filtered edge target image respectively as input images and constructing their scale spaces: a pyramid-shaped scale space is constructed by changing the size of the box filter, which performs approximate Gaussian second-order derivative filtering. The image size remains the same across octaves while the filter template size gradually increases from layer to layer, finally yielding a scale space of 3 octaves and 12 layers for each input image.
(3) Taking the edge source image and the filtered edge target image respectively as input images and determining their feature points: as shown in FIG. 2, in the scale space of the input image, the Hessian discriminant det(H) of each pixel is compared with those of its 26 neighbors, i.e., the 3 × 3 × 3 neighborhood spanning its own layer and the adjacent layers above and below. When det(H) of the current pixel is a local maximum, the pixel is judged to be brighter or darker than the other pixels in its neighborhood, which locates a candidate feature point; feature points with weak responses or wrong localization are then deleted, and the final stable feature points are screened out.
(4) Taking the edge source image and the filtered edge target image respectively as input images and determining the main direction of each feature point: as shown in FIG. 3, the Haar wavelet (an orthonormal, decaying waveform) responses are accumulated within a sector of the circular neighborhood around the feature point; the sector is rotated in steps of 0.2 radian, the Haar wavelet responses inside it are re-accumulated, and the direction of the sector with the largest value is taken as the main direction of the feature point.
(5) Taking the edge source image and the filtered edge target image respectively as input images and generating their feature point descriptors: a rectangular block is taken around each feature point, with side length 20s (s being the scale at which the feature point was detected) and oriented along the feature point's main direction. The block is divided into 4 × 4 = 16 sub-regions, and for each sub-region the Haar wavelet responses of 25 sample pixels are counted in the horizontal and vertical directions relative to the block orientation; as shown in FIG. 4, each sub-region yields 4 values: the sum of the horizontal responses, the sum of the vertical responses, and the sums of their absolute values. Computing these for all 16 sub-regions and concatenating them generates a descriptor vector of 4 × 4 × 4 = 64 dimensions.
(6) Taking the edge source image and the filtered edge target image as input images, the matching points of the two images are determined by evaluating the Euclidean distance and the trace of the Hessian matrix: the Euclidean distance between the feature point descriptors of the two input images is calculated to measure the matching degree. At the same time, the traces of the Hessian matrices of the two feature points are compared: equal signs mean the two features have contrast changes in the same direction; different signs mean the contrast changes in opposite directions, and the pair is excluded even if the Euclidean distance is 0. The required matching points are finally screened out by checking whether the signs of the feature points' Hessian traces agree.
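A sketch of steps (1)-(6) using OpenCV's SURF implementation; SURF lives in the opencv-contrib package (cv2.xfeatures2d) and is disabled in some builds, and the Hessian threshold of 400 and the Lowe-style ratio test are assumed stand-ins for the distance and trace-sign checks described above.

```python
import cv2

# Steps (1)-(5): detect feature points and compute 64-D descriptors.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_src, desc_src = surf.detectAndCompute(edge_source, None)
kp_tgt, desc_tgt = surf.detectAndCompute(filtered_edge_target, None)

# Step (6): brute-force matching on Euclidean (L2) distance between
# descriptors, keeping only distinctly better-than-runner-up matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
candidates = matcher.knnMatch(desc_src, desc_tgt, k=2)
matches = [m for m, n in candidates if m.distance < 0.7 * n.distance]
```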
6. Taking the matching points of the edge source image and the filtered edge target image as input data, erroneous points among them are eliminated based on the RANSAC algorithm to obtain accurate matching points and the specific position of the target object in the target image:
(1) Initializing: setting the iteration count d to 0, the iteration upper limit k to 0, the optimal set of non-erroneous points to U_best, the temporary set of non-erroneous points to U, and the initial values of U_best and U to empty;
(2) Randomly selecting four pairs of matched feature points from the matching points, establishing an equation for each pair, and computing the transformation matrix H by matrix inversion:

$$s\begin{bmatrix}x'\\ y'\\ 1\end{bmatrix}=\begin{bmatrix}h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}$$

where (x, y) is a corner position in the source image, (x', y') is the corresponding corner position in the target image, h11 through h33 are the elements of the transformation matrix H, and s is a scale parameter;
The transformation matrix H is taken as the data model M, whose expression is:

$$M=\begin{bmatrix}h_{11} & h_{12} & h_{13}\\ h_{21} & h_{22} & h_{23}\\ h_{31} & h_{32} & h_{33}\end{bmatrix}$$
(3) Calculating the projection error of every matching point against M; if the error is smaller than the threshold t, the matching point is added to U. The threshold t is an empirical value, taken as 3 in this embodiment;
(4) Comparing the number of matching points in U and U_best; if U contains more matching points than U_best, U_best is updated to U and the iteration upper limit k is updated according to:

$$k=\frac{\log(1-p)}{\log(1-w^{m})}$$

where p is the confidence, generally taken as 0.995; w is the ratio of the number of matching points in U_best to the number of all matching points; and m is the minimum number of samples required by the model, taken as 4;
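The iteration upper limit can be computed directly from this formula; a minimal helper with the embodiment's p = 0.995 and m = 4 is sketched below (valid for 0 < w < 1):

```python
import math

def ransac_upper_limit(w, p=0.995, m=4):
    # k = log(1 - p) / log(1 - w^m); w is the current inlier ratio.
    return math.ceil(math.log(1 - p) / math.log(1 - w ** m))
```

For example, ransac_upper_limit(0.5) returns 83, so with half the matches being inliers about 83 sampling rounds suffice at 99.5% confidence.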
(5) Comparing the iteration count d with the iteration upper limit k:
if d ≤ k, increment d (d = d + 1) and return to step 6-(2);
otherwise, exit the iteration;
(6) At this point U_best is the set of accurate matching points with the erroneous points removed; connecting the corresponding accurate matching points of the edge source image and the filtered edge target image yields the position of the target object in the filtered edge target image, and hence the position of the target object in the target image.
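As a closing sketch, OpenCV's RANSAC-based homography estimation carries out the sampling and update scheme above internally; the reprojection threshold 3.0 mirrors the empirical t = 3, and projecting the source image corners through H is one assumed way to turn the accurate matches into the target object's position.

```python
import cv2
import numpy as np

# Requires at least 4 matches; queryIdx/trainIdx index the two keypoint lists.
src_pts = np.float32([kp_src[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
tgt_pts = np.float32([kp_tgt[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# Estimate the transformation matrix H while rejecting erroneous matches.
H, inlier_mask = cv2.findHomography(src_pts, tgt_pts, cv2.RANSAC,
                                    ransacReprojThreshold=3.0)
accurate_matches = [m for m, keep in zip(matches, inlier_mask.ravel()) if keep]

# Project the source image corners through H to outline the U-shaped
# suspension loop's position in the target image.
h, w = edge_source.shape
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
location = cv2.perspectiveTransform(corners, H)
```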
In summary, in the computer vision target detection algorithm provided by the invention, processing steps of edge detection and noise suppression by filtering are added before feature point extraction, so noise is better eliminated in the grayscale domain, interference with later identification of the target object is reduced, and high-accuracy matching is achieved; adding the RANSAC-based elimination of erroneous points after feature point extraction allows the position of the target object to be detected quickly and accurately. The invention achieves a high recognition rate with a simple principle, low computational cost, higher accuracy, stronger robustness, and high running speed, and is suitable for real-time detection.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (6)

1. A computer vision target detection algorithm for detecting a target object in a target image, comprising the steps of:
S1, graying both a source image containing only the target object and the target image to be detected, generating a grayscale source image and a grayscale target image;
S2, extracting edge information from the grayscale source image to generate an edge source image;
S3, extracting edge information from the grayscale target image to generate an edge target image;
S4, suppressing noise in the edge target image to generate a filtered edge target image;
S5, extracting feature points from the edge source image and the filtered edge target image and matching them to obtain matching points;
S6, eliminating erroneous points among the matching points of the edge source image and the filtered edge target image to obtain accurate matching points and the position of the target object in the target image.
2. The computer vision target detection algorithm of claim 1, wherein step S2 is implemented based on the Canny edge detection operator, comprising the steps of:
S21, smoothing the grayscale source image with a Gaussian filter to generate a smoothed grayscale source image;
S22, calculating the image gradient amplitude and direction of the smoothed grayscale source image using finite differences of first-order partial derivatives;
S23, performing non-maximum suppression on the image gradient amplitude of the smoothed grayscale source image: the gradient amplitude at each edge point is computed, local maximum gradient values are retained and the others suppressed, thereby removing non-edge points and thinning multi-pixel-wide edges to single-pixel width;
S24, performing image edge detection and connection on the smoothed grayscale source image with a double-threshold algorithm and hysteresis boundary tracking, converting it into the edge source image.
3. The computer vision target detection algorithm of claim 1, wherein step S3 is implemented based on the Roberts edge detection operator, using a 2 × 2 pixel template and approximating the gradient amplitude by the differences between diagonally adjacent pixels to detect edge lines, with the calculation formula:

$$g(x,y)=\sqrt{\left[s(x,y)-s(x+1,y+1)\right]^{2}+\left[s(x+1,y)-s(x,y+1)\right]^{2}}$$

where x and y are pixel coordinates, s(x, y) is the unprocessed image pixel, and g(x, y) is the processed image pixel.
4. The computer vision target detection algorithm of claim 1, wherein step S4 is implemented based on a median filtering algorithm as follows: the gray value of each pixel of the edge target image is replaced by the median of the gray values of all pixels in its 5 × 5 pixel neighborhood, bringing the gray value close to the true value; every pixel in the edge target image is traversed, thereby eliminating isolated noise points.
5. The computer vision target detection algorithm of claim 1, wherein step S5 is implemented based on the SURF algorithm, comprising the steps of:
S51, taking the edge source image and the filtered edge target image respectively as input images and constructing their Hessian matrices;
S52, taking the edge source image and the filtered edge target image respectively as input images and constructing their scale spaces;
S53, taking the edge source image and the filtered edge target image respectively as input images and determining their feature points;
S54, taking the edge source image and the filtered edge target image respectively as input images and determining the main direction of each feature point;
S55, taking the edge source image and the filtered edge target image respectively as input images and generating their feature point descriptors;
S56, taking the edge source image and the filtered edge target image as input images and determining the matching points of the two images by evaluating the Euclidean distance and the trace of the Hessian matrix.
6. The computer vision target detection algorithm of claim 1, wherein step S6 is implemented based on the RANSAC algorithm, comprising the steps of:
S61, initializing: setting the iteration count d to 0, the iteration upper limit k to 0, the optimal set of non-erroneous points to U_best, the temporary set of non-erroneous points to U, and the initial values of U_best and U to empty;
S62, randomly selecting four groups of data from the matching points and establishing a data model M;
S63, calculating the projection error of every matching point against M, and if the error is smaller than a threshold t, adding the matching point to the inlier set U;
S64, comparing the number of matching points in U and U_best, and if U contains more matching points than U_best, updating U_best to U and updating the value of the iteration upper limit k;
S65, comparing the iteration count d with the iteration upper limit k:
if d ≤ k, incrementing d (d = d + 1) and returning to step S62;
otherwise, exiting the iteration;
S66, U_best being the set of accurate matching points with erroneous points removed; connecting the corresponding accurate matching points of the edge source image and the filtered edge target image in U_best yields the position of the target object in the filtered edge target image, and hence the position of the target object in the target image.
CN202110959705.6A 2021-08-20 2021-08-20 Computer vision target detection algorithm Pending CN113673515A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110959705.6A 2021-08-20 2021-08-20 Computer vision target detection algorithm

Publications (1)

Publication Number Publication Date
CN113673515A 2021-11-19

Family

ID=78544293

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110959705.6A Computer vision target detection algorithm 2021-08-20 2021-08-20 (Pending)

Country Status (1)

Country Link
CN (1) CN113673515A (en)


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116152167A (en) * 2022-12-13 2023-05-23 珠海视熙科技有限公司 Sliding detection method, device, medium and equipment
CN116152167B (en) * 2022-12-13 2024-04-05 珠海视熙科技有限公司 Sliding detection method, device, medium and equipment
CN116612441A (en) * 2023-07-21 2023-08-18 山东科技大学 Drilling anti-seizing method, equipment and medium based on mine powder discharge image identification
CN116612441B (en) * 2023-07-21 2023-09-22 山东科技大学 Drilling anti-seizing method, equipment and medium based on mine powder discharge image identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination