CN115187849A - Self-adaptive camera offset identification method

Self-adaptive camera offset identification method

Info

Publication number
CN115187849A
Authority
CN
China
Prior art keywords
intersected
camera
period
offset
image
Prior art date
Legal status
Pending
Application number
CN202210808555.3A
Other languages
Chinese (zh)
Inventor
陈松
田文龙
Current Assignee
Guoke Xinzhi Chongqing Technology Co ltd
Chongqing Beimeting Technology Co ltd
Original Assignee
Guoke Xinzhi Chongqing Technology Co ltd
Chongqing Beimeting Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Guoke Xinzhi Chongqing Technology Co ltd, Chongqing Beimeting Technology Co ltd filed Critical Guoke Xinzhi Chongqing Technology Co ltd
Priority to CN202210808555.3A priority Critical patent/CN115187849A/en
Publication of CN115187849A publication Critical patent/CN115187849A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V10/40: Extraction of image or video features
    • G06V10/48: Extraction of image or video features by mapping characteristic values of the pattern into a parameter space, e.g. Hough transformation
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74: Image or video pattern matching; proximity measures in feature spaces
    • G06V10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; using context analysis; selection of dictionaries
    • G06V10/757: Matching configurations of points or features
    • G06V10/762: Recognition or understanding using clustering, e.g. of similar faces in social networks
    • G06V10/763: Non-hierarchical techniques, e.g. based on statistics of modelling distributions
    • G06V10/82: Recognition or understanding using neural networks
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection


Abstract

The invention discloses a self-adaptive camera offset identification method, which belongs to the field of image identification and comprises the following steps: S1: denoising with a Gaussian filter; S2: averaging the gray values of the pixel points to obtain a mean gray image; S3: extracting straight-line features with the Hough transform; S4: solving the intersecting straight-line features; S5: performing target detection to obtain the intersecting feature regions; S6: carrying out distribution statistics to obtain the distribution parameters of each region; S7: checking the distributions with a cumulative distribution method; S8: judging whether the intersecting feature regions contain intersecting straight-line features; S9: computing the slope and intercept of the intersecting straight-line features, and determining whether the camera has shifted, and by how much, from the relation between the slopes and intercepts, so as to give an early warning of camera offset. The invention can reduce the influence of a changing environment on identification and achieves accurate detection of camera offset in a complex, changing environment.

Description

Self-adaptive camera offset identification method
Technical Field
The invention relates to a self-adaptive camera offset identification method, belongs to the fields of image identification and edge computing, and is particularly suitable for camera offset detection in complex environments.
Background
Video monitoring systems are widely used in city safety, intelligent traffic, intelligent environmental protection, border security and other fields. For a monitoring system to play its due role, the monitored scene must first be correct: when an external force or other effect changes the shooting angle of the lens, the system needs to detect the abnormality and raise an alarm in time, so that maintenance personnel can promptly correct the deviated scene.
Methods for detecting scene shift fall into three main categories: the pixel-difference method, the histogram-matching method, and feature matching (feature-point or feature-line matching). The pixel-difference and histogram-matching methods, however, cannot accurately give the coordinate offset of the scene shift and are very sensitive to illumination changes.
Representative methods that identify camera offset by feature-point or feature-line matching include CN201910944546.5, "A scene change detection method, apparatus, device and readable storage medium", and CN202110413657.0, "An offset detection method, apparatus, camera module, terminal device and storage medium". Although these can overcome the influence of illumination change to some extent, they cannot effectively monitor scenes with little texture; moreover, in scenes with many moving objects, the image feature points change violently and monitoring again fails. In summary, image feature matching algorithms depend heavily on image content, which reduces their general applicability.
CN202110053803.3 proposes "A scene shift detection method and system based on phase correlation", which uses the Fourier transform to determine the cross power spectrum between an image and a reference image and thereby the degree of camera shift. It can reduce complex and variable environmental influences to a certain extent, but in a complex, changing environment the spectrum centers of the two images may drift, making the result inaccurate; in particular, occlusion of the camera leads to recognition errors.
Therefore, a method for detecting and identifying camera offset in a complex, changing environment is urgently needed.
Disclosure of Invention
To solve the technical problem of detecting and identifying camera offset in a complex, changing environment, the invention provides a self-adaptive camera offset identification method. It first averages the consecutive images within one period to reduce the influence of the changing environment on identification; it then combines the idea of histogram matching with image feature matching, so that the regions that change within the camera period can be identified more accurately, further reducing the influence of environmental change; finally, the camera offset is computed directly from the slope and intercept of the feature straight lines.
A self-adaptive camera offset identification method, characterized by comprising the following steps:
S1: using a Gaussian filter to denoise the HSV images acquired by the camera;
S2: converting the HSV images collected by the camera within one identification period into gray images, and averaging the gray values of the pixel points to obtain a mean gray image;
S3: extracting straight-line features from the mean gray image using the Hough transform to obtain the feature map of the identification period;
S4: intersecting the feature map of the identification period with that of the initial identification period to obtain a set of intersecting straight-line features;
S5: performing target detection on the intersecting straight-line features using a deep-learning technique, and delimiting detection areas to obtain the intersecting feature regions;
S6: extracting the corresponding intersecting feature regions from the mean gray images of the identification period and the initial identification period, and performing distribution statistics on each to obtain their distribution parameters;
S7: checking, with a cumulative distribution method, whether the distribution of each intersecting feature region in the current identification period is the same as in the initial identification period, and deleting the regions whose distributions differ;
S8: if no intersecting feature region contains intersecting straight-line features, returning to step S2 to process the next identification period; when the intersecting feature regions of several consecutive identification periods contain no intersecting straight-line features, raising a fault alarm; if the intersecting feature regions contain intersecting straight-line features, proceeding to step S9;
S9: establishing an image coordinate system, solving the slope and intercept of the intersecting straight-line features of the identification period and of the initial identification period, and determining whether the camera has shifted, and by how much, by comparing the slopes and intercepts, so as to give an early warning of camera offset.
Further, the length of the identification period can be adjusted to specific needs by an adaptive method. In particular, the period length may be determined from the number of intersecting feature regions calculated in step S6, which improves recognition accuracy while keeping the computational load under control.
Further, to reduce the amount of computation and extract the salient straight-line features in a targeted manner, the gray levels of the mean gray image in step S3 should be compressed before the Hough transform is applied.
Further, the Hough transform of step S3 is a method for extracting straight lines and circles; compared with other detection methods, it suppresses noise interference better and is well suited to extracting contour information. Implementing the Hough transform in Matlab requires three steps:
(1) apply the hough() function to obtain the Hough matrix;
(2) search for peak points in the Hough matrix with the houghpeaks() function;
(3) obtain the contour information of the original image from the results of the previous two steps with the houghlines() function.
In application, the Hough transform routine can be written in Matlab and the Matlab file then called from Python.
Further, step S4 specifically comprises:
S401: searching for and matching the feature points contained in the feature straight lines of the feature map of the identification period and of the initial identification period;
S402: extracting the successfully matched feature points of the feature straight lines to form a new set of feature straight lines, namely the set of intersecting straight-line features.
In particular, since some feature lines may match erroneously, the Hamming distance can be used for filtering.
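The Hamming-distance filtering suggested above can be sketched as follows; descriptors are modeled as binary strings, and the match format and the threshold value are illustrative assumptions, not details taken from the patent:

```python
def hamming_distance(a: str, b: str) -> int:
    """Number of differing bits between two equal-length binary descriptors."""
    return sum(x != y for x, y in zip(a, b))

def filter_matches(matches, max_distance=8):
    """Keep only point pairs whose descriptors are close in Hamming distance.

    `matches` is assumed to be a list of (point_a, point_b, desc_a, desc_b).
    """
    return [(p, q) for (p, q, da, db) in matches
            if hamming_distance(da, db) <= max_distance]
```

A pair whose descriptors differ in more bits than the threshold is treated as a mismatch and dropped from the intersecting-feature set.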
Further, the deep-learning technique of step S5 is one of the Fast R-CNN, SSD and YOLO target recognition models.
Further, to further reduce the interference of external environmental change on the intersecting straight-line features, the mean gray images of the current and initial identification periods in step S6 may be replaced by the mean HSV images of those periods, with three distributions, one per primary color, established for each pair of intersecting feature regions.
Further, the specific process of step S6 is: with gray level (or color level) on the abscissa and frequency on the ordinate, the plot of frequency against gray or color level is the histogram of an image, from which the distribution of the texture feature information of the feature map can be examined. On gray-level processing, see:
Liu Jianzhuang, Li Wenqing. Two-dimensional Otsu automatic threshold segmentation method for gray-level images [J]. Acta Automatica Sinica, 1993(01): 101-105.
Further, step S9 is:
S901: establishing an image coordinate system bounded by the maximum monitoring angle of the camera, with the pixel size as the minimum scale;
S902: fitting the coordinates of the feature points of each pair of intersecting straight-line features by least squares to obtain a slope and an intercept;
S903: calculating the slope difference and the intercept difference of each pair of intersecting straight-line features;
S904: calculating the mean of the slope differences and the mean of the intercept differences over all intersecting straight-line features, namely the image offset angle and offset distance;
S905: when the offset angle or the offset distance exceeds the set threshold, outputting a camera-offset alarm; otherwise, outputting that the camera has not shifted.
Furthermore, to better eliminate interference and improve identification precision, step S904 may use K-means two-center clustering: the intersecting straight-line features of the identification period and of the initial identification period are clustered separately, and the image offset angle and offset distance are computed directly from the cluster centers.
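The two-center clustering refinement can be sketched as a minimal K-means over (slope, intercept) pairs; numpy is assumed, and the random initialization and iteration count are illustrative choices, not the patent's specification:

```python
import numpy as np

def kmeans_2(points, iters=20, seed=0):
    """Minimal 2-centre k-means on (slope, intercept) pairs."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    # initialise centres from two distinct samples (illustrative choice)
    centres = pts[rng.choice(len(pts), 2, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre
        d = np.linalg.norm(pts[:, None] - centres[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its assigned points
        for k in range(2):
            if (labels == k).any():
                centres[k] = pts[labels == k].mean(axis=0)
    return centres, labels
```

Clustering the current-period and initial-period line parameters separately and differencing the matched centres gives a robust offset estimate even when a few line pairs are misjudged.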
The beneficial effects of the invention are: the invention provides a self-adaptive camera offset identification method that first averages the consecutive images within an identification period, reducing the influence of a changing environment on identification; it then combines straight-line feature matching with distribution statistics over the intersecting regions, so that the straight-line features shared across camera identification periods are recognized more accurately, further reducing the influence of environmental change; finally, accurate detection of camera offset in a complex, changing environment is achieved directly from the slope and intercept of the straight-line features.
Drawings
FIG. 1 is a block diagram of a method for adaptive camera offset recognition;
fig. 2 is a matching graph of intersecting characteristic lines in an example.
Detailed Description
To make the purpose and technical solution of the present invention clearer, the invention is described in detail below with reference to the accompanying drawings and an example.
Example: for a camera installed at an intersection in a smart traffic system, long-term external forces may cause the camera to shift, so automatic detection and early warning of camera offset are needed. Considering that traffic through the intersection is heavy and congestion is common, so that images at two moments may differ greatly or shared regions may be occluded, this embodiment provides a self-adaptive camera offset identification method.
With reference to fig. 1, the method comprises the following steps:
the method comprises the following steps: pretreatment: adopting a Gaussian filter to perform noise reduction treatment on the HSV image acquired by the camera; and uniformly adjusting the brightness of the image by using the histogram equalization of the cumulative distribution function so as to ensure that the recognition effect is more accurate.
Step two: set the length of the identification period, convert the HSV images collected by the camera within the identification period into gray images, and average the gray values of the pixel points to obtain the mean gray image.
The length of the identification period, i.e. the number of consecutive image frames, can be adjusted to specific needs by an adaptive method.
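The mean gray image of step two is simply a per-pixel average over all frames in the period; a minimal sketch (gray conversion is assumed to happen upstream):

```python
import numpy as np

def mean_gray_image(frames):
    """Average the per-pixel gray values of all frames in one
    identification period (the period length is the frame count)."""
    stack = np.stack([f.astype(float) for f in frames])
    return stack.mean(axis=0)
```

Averaging over the period suppresses transient content such as passing vehicles, which is what lets the later line extraction see only the static scene structure.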
Step three: after the gray levels are compressed and mapped to integer levels in [0, 255], extract the straight-line features of the mean gray image with the Hough transform to obtain the feature map of the identification period.
The Hough transform is a method for extracting straight lines and circles; compared with other detection methods it suppresses noise interference better and is well suited to extracting contour information. Implementing it in Matlab requires three steps:
(1) apply the hough() function to obtain the Hough matrix;
(2) search for peak points in the Hough matrix with the houghpeaks() function;
(3) obtain the contour information of the original image from the results of the previous two steps with the houghlines() function.
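The voting scheme behind Matlab's hough()/houghpeaks() can be sketched in numpy as follows; this is a minimal accumulator for illustration, not the toolbox implementation:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Minimal Hough line accumulator: each edge pixel votes for
    every (rho, theta) line that passes through it."""
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))      # 0..179 degrees
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        # rho = x*cos(theta) + y*sin(theta), shifted so indices are >= 0
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag
```

Peaks of the accumulator (the houghpeaks() analogue) correspond to the strongest straight lines; a horizontal line of n pixels produces an accumulator cell with n votes at theta = 90 degrees.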
Step four: intersect the feature map of the identification period with that of the initial identification period to obtain the set of intersecting straight-line features. Specifically:
1) search for and match the feature points contained in the feature straight lines of the two feature maps;
2) extract the successfully matched feature points of the feature straight lines to form a new set of feature straight lines, namely the set of intersecting straight-line features.
The feature-line format is: feature_lines = { line_1 = { point_11, point_12, ... }, line_2 = { point_21, point_22, ... }, ... }.
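Step four can be sketched as a set operation over matched feature points; representing each line as a set of pixel coordinates mirrors the feature-line format above, but the exact representation is an illustrative assumption:

```python
def intersect_features(period_lines, initial_lines):
    """Keep, per feature line of the current period, only the points that
    also appear in the initial period; drop lines with no surviving points."""
    initial_points = set().union(*initial_lines) if initial_lines else set()
    result = []
    for line in period_lines:
        common = line & initial_points   # points matched in both periods
        if common:
            result.append(common)
    return result
```

Lines made only of transient content (no point survives in both periods) are discarded, leaving the shared scene structure.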
Step five: perform target detection (classification and localization) on the intersecting straight-line features with a YOLOv3 network, and delimit the detection areas with anchor boxes, giving the intersecting feature regions.
Step six: extract the corresponding intersecting feature regions from the mean HSV images of the identification period and the initial identification period. With the level of each primary color on the abscissa and frequency on the ordinate, plot frequency against color level, i.e. the histogram of the image; examine the distribution of the texture feature information of the feature map from the histogram, and perform distribution statistics on each region to obtain its distribution parameters.
Step seven: check with a cumulative distribution method whether the distribution of each intersecting feature region in the current identification period is the same as in the initial identification period, and delete the regions whose distributions differ.
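A cumulative-distribution check of the kind step seven describes can be sketched as a Kolmogorov-Smirnov-style comparison of two region histograms; the tolerance value is an illustrative assumption:

```python
import numpy as np

def same_distribution(hist_a, hist_b, tol=0.1):
    """Compare two region histograms through their cumulative distributions:
    a maximum CDF gap below `tol` counts as 'same distribution'."""
    cdf_a = np.cumsum(hist_a) / np.sum(hist_a)
    cdf_b = np.cumsum(hist_b) / np.sum(hist_b)
    return bool(np.abs(cdf_a - cdf_b).max() <= tol)
```

Regions whose CDFs diverge, for example because a vehicle occludes the region in one period, fail the test and are deleted before the slope/intercept comparison.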
Step eight: if no intersecting feature region contains intersecting straight-line features, return to step two and process the next identification period; when the intersecting feature regions of 5 consecutive identification periods contain no intersecting straight-line features, raise a fault alarm; if the intersecting feature regions contain intersecting straight-line features, proceed to step nine.
Step nine: establish an image coordinate system, solve the slope and intercept of the intersecting straight-line features of the identification period and of the initial identification period, and determine whether the camera has shifted, and by how much, by comparing the slopes and intercepts, so as to give an early warning of camera offset. Specifically:
a) establish an image coordinate system bounded by the maximum monitoring angle of the camera, with the pixel size as the minimum scale;
b) fit the coordinates of the feature points of each pair of intersecting straight-line features by least squares to obtain a slope and an intercept;
c) calculate the slope difference and the intercept difference of each pair of intersecting straight-line features; if a difference is anomalous at this point, the length of the identification period can be adjusted, or several identification periods compared, to judge whether the corresponding feature line was misidentified;
d) calculate the mean of the slope differences and the mean of the intercept differences over all intersecting straight-line features, i.e. the image offset angle and offset distance;
e) when the offset angle or the offset distance exceeds its set threshold (here 5 degrees for the angle), output a camera-offset alarm; otherwise output that the camera has not shifted.
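Steps b) through e) can be sketched with numpy's least-squares fit; converting a slope difference to an angle via arctan, and the threshold values, are illustrative assumptions rather than the patent's exact formulation:

```python
import numpy as np

def fit_line(points):
    """Least-squares slope and intercept for one feature line's points."""
    xs, ys = np.array(points, dtype=float).T
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept

def offset_decision(pairs, angle_thresh_deg=5.0, dist_thresh=5.0):
    """Mean slope/intercept differences over all matched line pairs;
    alarm when either mean exceeds its threshold (thresholds illustrative)."""
    diffs = []
    for cur_pts, init_pts in pairs:
        (s1, b1), (s0, b0) = fit_line(cur_pts), fit_line(init_pts)
        # angle difference in degrees, intercept difference in pixels
        diffs.append((np.degrees(np.arctan(s1) - np.arctan(s0)), b1 - b0))
    angle = float(np.mean([d[0] for d in diffs]))
    dist = float(np.mean([d[1] for d in diffs]))
    alarm = abs(angle) > angle_thresh_deg or abs(dist) > dist_thresh
    return alarm, angle, dist
```

Identical line pairs yield zero offset and no alarm; a uniform vertical shift of the scene shows up as a pure intercept difference.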
The present invention is not limited to the embodiment described above. It will be apparent to a person skilled in the art that modifications or variations may be made without departing from the scope of protection of the appended claims; the embodiment is presented by way of example only, to aid understanding of the invention, and is in no way limiting.

Claims (8)

1. A self-adaptive camera offset identification method, characterized by comprising the following steps:
S1: using a Gaussian filter to denoise the HSV images acquired by the camera;
S2: converting the HSV images collected by the camera within one identification period into gray images, and averaging the gray values of the pixel points to obtain a mean gray image;
S3: extracting straight-line features from the mean gray image using the Hough transform to obtain the feature map of the identification period;
S4: intersecting the feature map of the identification period with that of the initial identification period to obtain a set of intersecting straight-line features;
S5: performing target detection on the intersecting straight-line features using a deep-learning technique, and delimiting detection areas to obtain the intersecting feature regions;
S6: extracting the corresponding intersecting feature regions from the mean gray images of the identification period and the initial identification period, and performing distribution statistics on each to obtain their distribution parameters;
S7: checking, with a cumulative distribution method, whether the distribution of each intersecting feature region in the current identification period is the same as in the initial identification period, and deleting the regions whose distributions differ;
S8: if no intersecting feature region contains intersecting straight-line features, returning to step S2 to process the next identification period; when the intersecting feature regions of several consecutive identification periods contain no intersecting straight-line features, raising a fault alarm; if the intersecting feature regions contain intersecting straight-line features, proceeding to step S9;
S9: establishing an image coordinate system, solving the slope and intercept of the intersecting straight-line features of the identification period and of the initial identification period, and determining whether the camera has shifted, and by how much, by comparing the slopes and intercepts, so as to give an early warning of camera offset.
2. The method as claimed in claim 1, wherein the length of the recognition period can be adjusted according to specific needs by using an adaptive method.
3. The method for recognizing the self-adaptive camera offset as claimed in claim 1, wherein, in order to reduce the amount of computation and extract the obvious straight-line features in a targeted manner, the mean gray image in the step S3 needs to be subjected to image gray level compression before hough transform.
4. The method for recognizing the adaptive camera offset according to claim 1, wherein the step S4 specifically comprises:
s401: searching and matching the feature points contained in the feature straight line in the feature graph in the identification period and the feature graph in the initial identification period;
s402: and extracting the successfully matched characteristic points in the characteristic straight lines to form a new characteristic straight line set, namely the set of the intersected straight line characteristics.
5. The method as claimed in claim 1, wherein the deep learning technique in step S5 is one of fast R-CNN, SSD and YOLO target recognition models.
6. The method as claimed in claim 1, wherein, in order to further reduce the interference of external environment variation on the intersecting straight line features, the mean gray image of the current recognition period and the initial recognition period in step S6 is an HSV image of the mean of the current recognition period and the initial recognition period, and three distributions corresponding to three primary colors are respectively established for each pair of intersecting feature regions.
7. The method for recognizing the adaptive camera offset according to claim 1, wherein the step S9 is:
s901: establishing an image coordinate system by taking the maximum monitoring angle of the camera as a boundary and the pixel point size as the minimum scale;
s902: fitting coordinates corresponding to the characteristic points of each pair of intersected linear characteristics by adopting a least square method to obtain a slope and an intercept;
s903: calculating the slope difference and the intercept difference of each pair of intersected linear features;
S904: calculating the mean of the slope differences and the mean of the intercept differences of all the intersecting straight-line features, namely the image offset angle and offset distance;
S905: when the offset angle or the offset distance exceeds the set threshold, outputting a camera-offset alarm; otherwise, outputting that the camera has not shifted.
8. The method for recognizing the offset of the adaptive camera as claimed in claim 7, wherein in order to better eliminate interference and improve recognition accuracy, step S904 may use K-means to perform two-center clustering, respectively cluster the intersecting linear features of the recognition period and the initial recognition period, and directly calculate the image offset angle and the offset distance through the clustering center.
CN202210808555.3A 2022-06-30 2022-06-30 Self-adaptive camera offset identification method Pending CN115187849A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210808555.3A CN115187849A (en) 2022-06-30 2022-06-30 Self-adaptive camera offset identification method


Publications (1)

Publication Number Publication Date
CN115187849A true CN115187849A (en) 2022-10-14

Family

ID=83518121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210808555.3A Pending CN115187849A (en) 2022-06-30 2022-06-30 Self-adaptive camera offset identification method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination