CN116682103A - Photoelectric azimuth instrument compass scale recognition method based on computer vision

Photoelectric azimuth instrument compass scale recognition method based on computer vision

Info

Publication number
CN116682103A
Authority
CN
China
Prior art keywords
image
photoelectric
area
polar
pointer
Prior art date
Legal status
Pending
Application number
CN202310695855.XA
Other languages
Chinese (zh)
Inventor
章琦
程虎
朱鸿泰
位门
张俊
Current Assignee
CETC 58 Research Institute
Original Assignee
CETC 58 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 58 Research Institute filed Critical CETC 58 Research Institute
Priority to CN202310695855.XA priority Critical patent/CN116682103A/en
Publication of CN116682103A publication Critical patent/CN116682103A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 90/00: Technologies having an indirect contribution to adaptation to climate change
    • Y02A 90/10: Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Abstract

The invention discloses a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument, belonging to the fields of artificial intelligence and optical character recognition. By combining traditional image processing algorithms with a deep learning algorithm, the method achieves fast localization and accurate recognition of the characters on the photoelectric azimuth instrument dial. In addition, all characters of indefinite length are recognized in a single pass by the CRNN algorithm, and training on a large volume of digit data effectively improves recognition accuracy and suppresses the influence of noise. In practical application, the chief value of automatic compass-scale recognition lies in surveillance: the azimuth of an enemy target can be measured quickly and accurately, providing important data support for subsequent combat decisions.

Description

Photoelectric azimuth instrument compass scale recognition method based on computer vision
Technical Field
The invention relates to the technical fields of artificial intelligence and optical character recognition, and in particular to a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument.
Background
The photoelectric azimuth instrument is an angle-measuring device based on optical and electronic technology: it measures the azimuth of a target and outputs an angle value. It is mainly applied in the military field, for example on unmanned aerial vehicles and ships.
The compass fitted to existing ships is often a traditional mechanical dial with a 360-degree scale around its rim; the dial rotates to the angle corresponding to the ship's heading. Reading the compass is done mainly by the operator by eye, which is inconvenient and cannot produce an azimuth result quickly.
Compass image scale recognition is a common approach: the azimuth of a target can be acquired automatically by computing the deflection angle of the pointer position in the compass image. In addition, the device can also perform target detection and tracking at night under low illumination.
Disclosure of Invention
The object of the invention is to provide a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument, so as to solve the problems described in the background above.
To solve the above technical problems, the invention provides a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument, comprising the following steps:
capturing partial images of the dial near the pointer with a short-focus optical lens and a camera, with the center of the field of view aligned to the pointer; the dial rotates through 360 degrees while the pointer stays stationary;
for images captured under different ambient-light conditions, evaluating the average brightness and variance of the image to determine whether the illumination of the current image is suitable, completing image preprocessing;
since the numbers and scale marks encircle the circular dial area, applying a polar-coordinate transformation to the whole dial area so that the arc-shaped character regions are remapped into a horizontal arrangement;
according to the illumination class determined during preprocessing, binarizing the image with a correspondingly chosen threshold to coarsely segment the dial character regions;
denoising and connecting the coarsely segmented character binary image using morphological dilation and erosion, removing dust around the characters and interference from the arc segments above and below them, while joining multiple digits together;
extracting the connected components of the denoised and connected character regions, sorting them by pixel area, taking the minimum bounding rectangle of the largest component, and cropping the character image from the original image according to its coordinates;
scaling the cropped text image proportionally to a height of 32 pixels and padding the right side with a fixed pixel value to an image width of 160 pixels; feeding it into the deep-learning character recognition network CRNN, which extracts features with a convolutional neural network and predicts the probability of each digit with a recurrent neural network, yielding the numeric reading of the dial scale near the pointer;
performing a preliminary rule check on the recognized reading, covering whether the result consists entirely of digits, whether it has no more than three digits, and whether the value lies within the valid numeric range; if all checks pass, the current recognition result is considered correct; if any check fails, selecting the second-largest region and recognizing again;
taking the correct character recognition result, computing the included angle between the center of the minimum bounding rectangle of its character region and the pointer center, and converting it into the precise scale reading indicated by the current pointer;
making a combined judgment over multi-frame recognition results and reporting to the control terminal once per second; the rotation of the photoelectric azimuth instrument dial is a slow, continuous process, so if the difference between the recognition results of two adjacent frames within one second is greater than 5, the current recognition is considered erroneous and the previous second's output is retained; otherwise the result is updated normally.
In one embodiment, completing the preprocessing of the image comprises:
converting the input color picture into a grayscale image, traversing every pixel of the grayscale image, and computing a 256-bin gray-level histogram with the gray value of each pixel as the abscissa and the frequency of that gray value over the whole picture as the ordinate;
computing the mean and standard deviation of the gray-level histogram and determining from their ratio whether the image is too dark or too bright; if the picture is too bright, setting the threshold according to the mean and returning 1; if the picture is too dark, setting the threshold according to the mean and returning -1; if the picture is neither too bright nor too dark, returning 0, indicating normal brightness.
In one embodiment, the arc-shaped text regions are remapped into a horizontal arrangement by the following steps:
selecting the center of the dial as the transformation center and calculating the polar radius and polar angle of each pixel relative to that center; the polar radius is the distance from the pixel to the transformation center, and the polar angle is the angle between the line joining the pixel to the transformation center and the reference axis;
converting polar coordinates to Cartesian coordinates: from the polar radius and polar angle of each pixel in the polar coordinate system, computing its coordinate values in the Cartesian coordinate system;
since the mapped points do not necessarily fall on integer coordinates, interpolating the converted Cartesian coordinates to obtain the final polar-transformed image.
In one embodiment, denoising and connecting the coarsely segmented character binary image using morphological dilation and erosion comprises:
performing an erosion operation on the thresholded binary image to remove noise near the characters and the thin arcs above and below them;
performing a dilation operation on the resulting image so that multiple digits join together into a single whole;
performing a further erosion on the resulting image, shrinking the over-dilated region horizontally and vertically so that it just encloses the text.
In one embodiment, the character recognition network CRNN is a deep-learning algorithm for text recognition composed of a convolutional neural network CNN and a recurrent neural network RNN; it comprises three main stages, namely convolutional feature extraction, sequence modeling, and transcription output, and automatically converts the characters in an image into readable text;
the CNN extracts spatial features from the image by sliding a convolution kernel over it and performing a convolution operation at every position, enabling it to capture local image features;
the convolutional features are then fed into an LSTM for sequence modeling; the LSTM is a special RNN with memory cells and gating mechanisms that can process variable-length sequence data; within the CRNN, the LSTM performs temporal modeling of the CNN-extracted features, updating its state according to the output of the previous time step;
the character recognition network CRNN uses a fully connected layer to map features in the sequence to text output.
In one embodiment, the scale reading corresponding to the current pointer is obtained as follows: when the recognition result is correct, the center coordinates (x1, y1) of the minimum bounding rectangle of the corresponding text region are obtained, and the included angle θ between that center and the pointer center (x2, y1) is computed from their horizontal offset and the dial radius r;
the recognized number corresponds exactly to an actual scale mark; θ is then added or subtracted according to whether the text region lies to the left or the right of the pointer, giving the final pointer reading.
The computer-vision-based method for recognizing the compass scale of the photoelectric azimuth instrument has the following beneficial effects:
(1) Compared with the traditional manual reading scheme, the invention monitors the azimuth reading of the photoelectric azimuth instrument in real time;
(2) By combining traditional image processing algorithms with a deep learning algorithm, the method achieves fast localization and accurate recognition of the characters on the instrument dial; unlike traditional optical character schemes that segment and recognize single characters one by one, the CRNN algorithm recognizes all characters of indefinite length in a single pass, and training on a large volume of digit data effectively improves recognition accuracy and suppresses the influence of noise;
(3) In practical application, the chief value of automatic compass-scale recognition lies in surveillance: the azimuth of an enemy target can be measured quickly and accurately, providing important data support for subsequent combat decisions.
Drawings
Fig. 1 is a flow chart of the computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument.
Fig. 2 is a schematic diagram of the structure of the character recognition network CRNN.
Detailed Description
The invention provides a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument, described in further detail below with reference to the accompanying drawings and specific embodiments. The advantages and features of the invention will become more apparent from this description. Note that the drawings are greatly simplified and not drawn to precise scale; they serve only to aid in describing the embodiments conveniently and clearly.
The invention provides a computer-vision-based method for recognizing the compass scale of a photoelectric azimuth instrument, which combines traditional image processing with deep learning to automatically recognize and report the reading of the instrument's mechanical dial. The specific flow, shown in Fig. 1, comprises the following steps:
step S1: and collecting and shooting partial images near the pointer of the dial by using a short-focus optical lens and a camera, aligning the center of a field of view with the pointer, and rotating the dial for 360 degrees without moving the pointer.
Step S2: for images captured under different ambient-light conditions, evaluate the average brightness and variance of the image and determine whether the illumination of the current image is suitable, classifying images into 3 classes: illumination too weak, illumination suitable, and illumination too strong. This completes image preprocessing.
Step S2 specifically comprises: (1) convert the input color picture into a grayscale image; (2) traverse every pixel of the grayscale image and compute a 256-bin gray-level histogram, with the gray value of each pixel as the abscissa and the frequency of that gray value over the whole image as the ordinate; (3) compute the mean and standard deviation of the gray-level histogram and determine from their ratio whether the image is too dark or too bright; (4) if the picture is too bright, set the threshold according to the mean and return 1; if the picture is too dark, set the threshold according to the mean and return -1; if neither, return 0, indicating normal brightness.
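As a concrete illustration, a minimal sketch of this brightness check follows, assuming OpenCV and NumPy; the ratio cutoffs bright_ratio and dark_ratio are hypothetical placeholders, since the patent does not state the actual threshold values.
    import cv2
    import numpy as np

    def evaluate_illumination(bgr_image, bright_ratio=2.5, dark_ratio=0.8):
        """Return 1 (too bright), -1 (too dark), or 0 (normal brightness)."""
        gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
        levels = np.arange(256, dtype=np.float64)
        total = hist.sum()
        mean = float((hist * levels).sum() / total)
        std = float(np.sqrt((hist * (levels - mean) ** 2).sum() / total))
        ratio = mean / (std + 1e-6)      # mean-to-std ratio, as in step (3)
        if ratio > bright_ratio:
            return 1                     # too bright
        if ratio < dark_ratio:
            return -1                    # too dark
        return 0                         # suitable illumination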
Step S3: the numbers and scale marks on the dial are arranged around the circular dial area, so the digits lie along an arc rather than horizontally; recognizing them directly would hurt accuracy. A polar-coordinate transformation is therefore applied to the whole dial area, remapping the arc-shaped character regions into a horizontal arrangement.
The polar transformation of the whole dial area converts the image from a rectangular coordinate system to a polar coordinate system through the following steps:
(1) Determine the transformation center; the center of the dial is generally chosen. (2) Compute the polar radius and polar angle of each pixel relative to the transformation center; the polar radius is the distance from the pixel to the center, and the polar angle is the angle between the line joining them and the reference axis. (3) Convert polar coordinates to Cartesian coordinates: from each pixel's polar radius and polar angle in the polar system, compute its coordinate values in the Cartesian system. (4) Because the mapped points do not necessarily fall on integer coordinates, interpolate the converted coordinates to obtain the final polar-transformed image. Through these steps the dial image is converted between the Cartesian and polar coordinate systems, presenting the dial's features and form more clearly.
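In OpenCV, this whole sequence (center selection, polar mapping, interpolation) can be sketched with warpPolar; the sample counts below are assumptions, not values from the patent.
    import cv2

    def unwrap_dial(gray, center, radius, angle_samples=1440, radius_samples=256):
        # dsize is (width, height): the width samples the polar radius and the
        # height samples the polar angle; bilinear interpolation handles the
        # non-integer coordinates mentioned above.
        polar = cv2.warpPolar(gray, (radius_samples, angle_samples), center,
                              radius, cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR)
        # Each output row corresponds to one angle; rotate so the angular
        # direction runs horizontally and the arc text reads left to right.
        return cv2.rotate(polar, cv2.ROTATE_90_COUNTERCLOCKWISE)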
Step S4: according to the illumination class determined by the Step S2 preprocessing, binarize the image with the correspondingly chosen threshold to coarsely segment the dial character regions.
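A sketch of this step under stated assumptions: the per-class threshold values are hypothetical (the text says only that different thresholds are used), and THRESH_BINARY assumes light characters on a darker dial.
    import cv2

    def binarize(gray, light_flag):
        # Hypothetical thresholds for the classes returned by Step S2:
        # 1 = too bright, 0 = normal, -1 = too dark.
        thresh = {1: 200, 0: 160, -1: 110}[light_flag]
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        return binary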
Step S5: denoise and connect the coarsely segmented character binary image using morphological dilation and erosion, removing dust around the characters and interference from the arc segments above and below them, while joining multiple digits together.
First, erode the thresholded binary image with a rectangular structuring element 7 pixels wide and 3 pixels high, removing noise near the characters and the thin arcs above and below; then dilate the result with a rectangular structuring element 23 wide and 3 high so that several digits join into a single whole; finally, erode again with a rectangular structuring element 15 wide and 9 high, shrinking the over-dilated region horizontally and vertically so that it just encloses the text.
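The same pipeline expressed with OpenCV morphology calls, using the kernel sizes stated above; cv2.getStructuringElement takes (width, height).
    import cv2

    def denoise_and_connect(binary):
        k1 = cv2.getStructuringElement(cv2.MORPH_RECT, (7, 3))
        k2 = cv2.getStructuringElement(cv2.MORPH_RECT, (23, 3))
        k3 = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 9))
        eroded = cv2.erode(binary, k1)    # strip specks and thin arc segments
        joined = cv2.dilate(eroded, k2)   # merge adjacent digits into one blob
        return cv2.erode(joined, k3)      # shrink the blob back around the text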
Step S6: extract the connected components of the denoised and connected text regions, sort them by pixel area, take the minimum bounding rectangle of the largest component, and crop the text image from the original image according to its coordinates.
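A sketch of the largest-component selection, assuming an axis-aligned bounding box (after polar unwrapping the text is horizontal, so the minimum bounding rectangle is effectively axis-aligned here).
    import cv2
    import numpy as np

    def crop_largest_region(binary, original):
        n, _, stats, _ = cv2.connectedComponentsWithStats(binary)
        if n < 2:                         # only the background was found
            return None, None
        idx = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
        x, y, w, h = (stats[idx, cv2.CC_STAT_LEFT], stats[idx, cv2.CC_STAT_TOP],
                      stats[idx, cv2.CC_STAT_WIDTH], stats[idx, cv2.CC_STAT_HEIGHT])
        return original[y:y + h, x:x + w], (x, y, w, h)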
Step S7: scale the cropped text image proportionally to a height of 32 pixels and pad the right side with a fixed pixel value to an image width of 160 pixels; feed it into the deep-learning character recognition network CRNN, which extracts features with a convolutional neural network and predicts the probability of each digit with a recurrent neural network, yielding the numeric reading of the dial scale near the pointer.
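The 32x160 input preparation might look as follows; pad_value is an assumption (the patent says only that a fixed pixel value is used), and a single-channel grayscale crop is assumed.
    import cv2
    import numpy as np

    def prepare_crnn_input(crop, pad_value=0):
        h, w = crop.shape[:2]
        new_w = min(160, max(1, int(round(w * 32.0 / h))))  # keep aspect ratio
        resized = cv2.resize(crop, (new_w, 32))
        canvas = np.full((32, 160), pad_value, dtype=resized.dtype)
        canvas[:, :new_w] = resized       # right side keeps the fill value
        return canvas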
In recent years, deep learning has achieved remarkable results across many fields, continually reshaping expectations of artificial intelligence. Fig. 2 shows the structure of the character recognition network CRNN (Convolutional Recurrent Neural Network), a deep-learning algorithm for text recognition. A CRNN is composed of a CNN (convolutional neural network) and an RNN (recurrent neural network) and comprises three main stages: convolutional feature extraction, sequence modeling, and transcription output. It automatically converts the characters in an image into readable text.
First, the CNN extracts spatial features from the image by sliding a convolution kernel over it and performing a convolution operation at every position, which lets it capture local image features. Next, the convolutional features are fed into an LSTM for sequence modeling. The LSTM is a special RNN with memory cells and gating mechanisms that can process variable-length sequence data; the gating mechanisms let it capture long-term dependencies. In the CRNN, the LSTM performs temporal modeling of the CNN-extracted features, updating its state according to the output of the previous time step. Finally, the CRNN maps the sequence features to text output through a fully connected layer.
CRNN works well for text recognition because it combines the local feature extraction of CNNs with the long-term dependency modeling of LSTMs. It is an end-to-end text recognition algorithm that learns the task directly from raw data without hand-designed features, is easily applied to a wide range of OCR (optical character recognition) scenarios, and is widely used in practice.
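A minimal CRNN sketch in PyTorch, purely illustrative: the patent names no framework, layer sizes, or class count, so the channel widths and the 11-class output (ten digits plus a CTC blank) are assumptions.
    import torch
    import torch.nn as nn

    class CRNN(nn.Module):
        def __init__(self, num_classes=11):      # 10 digits + CTC blank
            super().__init__()
            self.cnn = nn.Sequential(            # convolutional feature extraction
                nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
                nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((2, 1), (2, 1)),    # halve height only, keep width
                nn.Conv2d(256, 256, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d((4, 1), (4, 1)),    # collapse height to 1
            )
            self.rnn = nn.LSTM(256, 128, num_layers=2,
                               bidirectional=True, batch_first=True)
            self.fc = nn.Linear(256, num_classes)

        def forward(self, x):                    # x: (B, 1, 32, 160)
            f = self.cnn(x)                      # (B, 256, 1, W')
            f = f.squeeze(2).permute(0, 2, 1)    # (B, W', 256) feature sequence
            seq, _ = self.rnn(f)                 # LSTM sequence modeling
            return self.fc(seq)                  # per-time-step class scores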
Step S8: perform a preliminary rule check on the recognized reading, covering whether the result consists entirely of digits, whether it has no more than three digits, and whether the value lies within the valid numeric range; if all checks pass, the current recognition result is considered correct; if any check fails, repeat steps S6 and S7, selecting the second-largest region for recognition.
Step S9: take the correct character recognition result from the previous step, compute the included angle between the center of the minimum bounding rectangle of its character region and the pointer center, and convert it into the precise scale reading indicated by the current pointer.
The scale reading corresponding to the current pointer is obtained as follows: take the center coordinates (x1, y1) of the minimum bounding rectangle of the correctly recognized text region and compute the included angle θ between that center and the pointer center (x2, y1) from their horizontal offset and the dial radius r. The recognized number corresponds exactly to an actual scale mark; θ is then added or subtracted according to whether the text region lies to the left or the right of the pointer, giving the final pointer reading.
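Because the original formula is reproduced only as an image in the source, the sketch below uses one plausible reconstruction, treating θ as the angle subtended at the dial center by the horizontal offset between the two centers; the sign convention is likewise an assumption.
    import math

    def pointer_reading(label_value, x1, x2, r):
        # Assumed form: angle subtended by the horizontal pixel offset.
        theta = math.degrees(math.atan(abs(x1 - x2) / r))
        # Assumed sign convention: a text box left of the pointer means the
        # scale value increases toward the pointer.
        reading = label_value + theta if x1 < x2 else label_value - theta
        return reading % 360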
Step S10: make a combined judgment over multi-frame recognition results and report to the control terminal once per second. The rotation of the photoelectric azimuth instrument dial is a slow, continuous process, so if the difference between the recognition results of two adjacent frames within one second is greater than 5, the current recognition is considered erroneous and the previous second's output is retained; otherwise the result is updated normally.
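A sketch of this per-second consistency check; the class name and interface are illustrative only.
    class ReadingReporter:
        def __init__(self, max_jump=5.0):
            self.max_jump = max_jump
            self.last_reported = None    # output held from the previous second

        def update(self, frame_readings):
            # frame_readings: recognized values for the frames of one second.
            for a, b in zip(frame_readings, frame_readings[1:]):
                if abs(a - b) > self.max_jump:
                    return self.last_reported   # keep last second's output
            self.last_reported = frame_readings[-1]
            return self.last_reported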
The above description is only illustrative of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention, and any alterations and modifications made by those skilled in the art based on the above disclosure shall fall within the scope of the appended claims.

Claims (6)

1. A method for recognizing the compass scale of a photoelectric azimuth instrument based on computer vision, characterized by comprising the following steps:
capturing partial images of the dial near the pointer with a short-focus optical lens and a camera, with the center of the field of view aligned to the pointer; the dial rotates through 360 degrees while the pointer stays stationary;
for images captured under different ambient-light conditions, evaluating the average brightness and variance of the image to determine whether the illumination of the current image is suitable, completing image preprocessing;
since the numbers and scale marks encircle the circular dial area, applying a polar-coordinate transformation to the whole dial area so that the arc-shaped character regions are remapped into a horizontal arrangement;
according to the illumination class determined during preprocessing, binarizing the image with a correspondingly chosen threshold to coarsely segment the dial character regions;
denoising and connecting the coarsely segmented character binary image using morphological dilation and erosion, removing dust around the characters and interference from the arc segments above and below them, while joining multiple digits together;
extracting the connected components of the denoised and connected character regions, sorting them by pixel area, taking the minimum bounding rectangle of the largest component, and cropping the character image from the original image according to its coordinates;
scaling the cropped text image proportionally to a height of 32 pixels and padding the right side with a fixed pixel value to an image width of 160 pixels; feeding it into the deep-learning character recognition network CRNN, which extracts features with a convolutional neural network and predicts the probability of each digit with a recurrent neural network, yielding the numeric reading of the dial scale near the pointer;
performing a preliminary rule check on the recognized reading, covering whether the result consists entirely of digits, whether it has no more than three digits, and whether the value lies within the valid numeric range; if all checks pass, the current recognition result is considered correct; if any check fails, selecting the second-largest region and recognizing again;
taking the correct character recognition result, computing the included angle between the center of the minimum bounding rectangle of its character region and the pointer center, and converting it into the precise scale reading indicated by the current pointer;
making a combined judgment over multi-frame recognition results and reporting to the control terminal once per second; the rotation of the photoelectric azimuth instrument dial is a slow, continuous process, so if the difference between the recognition results of two adjacent frames within one second is greater than 5, the current recognition is considered erroneous and the previous second's output is retained; otherwise the result is updated normally.
2. The method for recognizing the compass scale of the photoelectric azimuth instrument based on computer vision according to claim 1, wherein completing the preprocessing of the image comprises:
converting the input color picture into a grayscale image, traversing every pixel of the grayscale image, and computing a 256-bin gray-level histogram with the gray value of each pixel as the abscissa and the frequency of that gray value over the whole picture as the ordinate;
computing the mean and standard deviation of the gray-level histogram and determining from their ratio whether the image is too dark or too bright; if the picture is too bright, setting the threshold according to the mean and returning 1; if the picture is too dark, setting the threshold according to the mean and returning -1; if the picture is neither too bright nor too dark, returning 0, indicating normal brightness.
3. The method for recognizing the compass scale of the photoelectric azimuth instrument based on computer vision according to claim 1, wherein the arc-shaped character regions are remapped into a horizontal arrangement by the following steps:
selecting the center of the dial as the transformation center and calculating the polar radius and polar angle of each pixel relative to that center; the polar radius is the distance from the pixel to the transformation center, and the polar angle is the angle between the line joining the pixel to the transformation center and the reference axis;
converting polar coordinates to Cartesian coordinates: from the polar radius and polar angle of each pixel in the polar coordinate system, computing its coordinate values in the Cartesian coordinate system;
since the mapped points do not necessarily fall on integer coordinates, interpolating the converted Cartesian coordinates to obtain the final polar-transformed image.
4. The method for recognizing the compass scale of the photoelectric azimuth instrument based on computer vision according to claim 1, wherein denoising and connecting the coarsely segmented character binary image using morphological dilation and erosion comprises:
performing an erosion operation on the thresholded binary image to remove noise near the characters and the thin arcs above and below them;
performing a dilation operation on the resulting image so that multiple digits join together into a single whole;
performing a further erosion on the resulting image, shrinking the over-dilated region horizontally and vertically so that it just encloses the text.
5. The method for recognizing the compass scale of the photoelectric azimuth instrument based on computer vision according to claim 1, wherein the character recognition network CRNN is a deep-learning algorithm for text recognition composed of a convolutional neural network CNN and a recurrent neural network RNN; it comprises three main stages, namely convolutional feature extraction, sequence modeling, and transcription output, and automatically converts the characters in an image into readable text;
the CNN extracts spatial features from the image by sliding a convolution kernel over it and performing a convolution operation at every position, enabling it to capture local image features;
the convolutional features are then fed into an LSTM for sequence modeling; the LSTM is a special RNN with memory cells and gating mechanisms that can process variable-length sequence data; within the CRNN, the LSTM performs temporal modeling of the CNN-extracted features, updating its state according to the output of the previous time step;
the character recognition network CRNN uses a fully connected layer to map features in the sequence to text output.
6. The method for recognizing the compass scale of the photoelectric azimuth instrument based on computer vision according to claim 1, wherein the scale reading corresponding to the current pointer is obtained as follows: when the recognition result is correct, the center coordinates (x1, y1) of the minimum bounding rectangle of the corresponding text region are obtained, and the included angle θ between that center and the pointer center (x2, y1) is computed from their horizontal offset and the dial radius r;
and the recognized number corresponds exactly to an actual scale mark; θ is then added or subtracted according to whether the text region lies to the left or the right of the pointer, giving the final pointer reading.
CN202310695855.XA 2023-06-13 2023-06-13 Photoelectric azimuth instrument compass scale recognition method based on computer vision Pending CN116682103A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310695855.XA CN116682103A (en) 2023-06-13 2023-06-13 Photoelectric azimuth instrument compass scale recognition method based on computer vision

Publications (1)

Publication Number Publication Date
CN116682103A true CN116682103A (en) 2023-09-01

Family

ID=87788795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310695855.XA Pending CN116682103A (en) 2023-06-13 2023-06-13 Photoelectric azimuth instrument compass scale recognition method based on computer vision

Country Status (1)

Country Link
CN (1) CN116682103A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination