CN115311447A - Pointer instrument indicating number identification method based on deep convolutional neural network - Google Patents


Info

Publication number
CN115311447A
CN115311447A
Authority
CN
China
Prior art keywords
picture
neural network
training
pointer
instrument
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210922982.4A
Other languages
Chinese (zh)
Inventor
贾鹏
王宗尧
匡海波
魏慧康
唐霄
杨彦博
刘芳名
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dalian Maritime University
Original Assignee
Dalian Maritime University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dalian Maritime University filed Critical Dalian Maritime University
Priority to CN202210922982.4A
Publication of CN115311447A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/22 Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/247 Aligning, centring, orientation detection or correction of the image by affine transforms, e.g. correction due to perspective effects; Quadrilaterals, e.g. trapezoids
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a pointer instrument reading identification method based on a deep convolutional neural network. The method comprises: obtaining a picture training set and a picture test set; inputting the picture training set into a deep convolutional neural network for training to obtain an instrument positioning neural network; analyzing and processing the training pictures to obtain optimized target pictures that share the same image size and the same image rotation center; marking the optimized target pictures to obtain final marked pictures; training on the final marked pictures with a deep convolutional neural network to obtain an instrument recognition neural network; and importing the test pictures of the picture test set into the instrument recognition neural network for verification. The invention addresses the low numerical accuracy of traditional image-processing methods for pointer instrument identification in complex environments, and the sharp drop in the reading mode and image-processing precision and efficiency of traditional pointer instruments when the construction environment is poor and image quality is low.

Description

Pointer instrument indicating number identification method based on deep convolutional neural network
Technical Field
The invention relates to the field of automatic identification of reading of pointer instruments in industrial production, in particular to a pointer instrument reading identification method based on a deep convolutional neural network.
Background
In drilling operations, a weight indicator (a pointer instrument) is an important instrument for monitoring the drilling state and ensuring drilling safety. The weight indicator is mainly used to indicate and record changes in the hanging weight and bit pressure of the drilling tool during petroleum and geological exploration, surveying and drilling, helping the driller track working-parameter changes in drilling and well-repair operations in real time and judge the working state of the drilling tool. Through the weight indicator, the downhole working condition of the drilling tool can be known and the tool operated correctly. Although digital display instruments for data monitoring and data transmission are relatively mature, in traditional industrial production such as the petrochemical, nuclear and electric power fields, and in complex environments such as high temperature, high pressure, extreme cold and strong magnetic fields, the pointer instrument, with its simple structure, low cost, strong anti-interference capability and durability, remains a main tool for data measurement and retains a degree of irreplaceability. Automatic identification of pointer instrument readings therefore has important significance and great value in traditional industrial production and related fields.
At present, the acquisition and entry of the data monitored by pointer instruments still relies on human observation and manual recording. First, this mode of data acquisition and entry consumes a great deal of manpower, material and financial resources; the heavy workload reduces working efficiency, easily produces recording and reading errors, lowers the accuracy of data acquisition, leaves data quality strongly subject to subjective influence, and can delay a project's progress with irrecoverable consequences. Second, most instruments work under severe environmental conditions such as high temperature, high pressure, high radiation and even toxicity, so collecting and recording data manually seriously threatens the life safety of the data collector. Finally, manual data acquisition and entry takes too long, seriously affecting the efficiency of industrial production, and the upgrading and replacement of pointer instruments also suffers from excessive cost, relatively complicated operating procedures and ineffective resource utilization.
With the development of artificial intelligence technology, image recognition has been applied to pointer instrument reading identification. The instrument panel area is imaged by a camera for recognition and data collection, and the pointer position and displayed number are recognized by image recognition technology, making pointer instrument panel recognition more efficient and accurate.
Over the last few years, much work has been done on pointer instrument identification. In general, existing pointer instrument recognition algorithms can be classified into conventional algorithms based on digital image processing and modern algorithms based on machine learning and deep learning. Conventional algorithms perform pointer identification and read the pointer meter image using template matching and meter lookup methods; they also extract the pointer with image subtraction and detect the circular area with the Hough transform to complete pointer meter reading identification. Although these algorithms work well in some cases and can achieve high reading accuracy, they adapt poorly to natural environments because of the stringent lighting conditions the image processing requires. In recent years, some scholars have also proposed novel modern algorithms for meter identification, such as using SVM (support vector machine) machine learning algorithms to locate and separate meters, or using Faster R-CNN object detection to locate and extract meters. These methods largely solve the problems of the traditional algorithms, such as differing scales, complex backgrounds and difficult instrument positioning. However, none of these algorithms can handle uneven lighting, a large lighting variation range, or instrument tilt.
With the rapid development of high-performance computing and mobile communication technologies, automatic pointer instrument reading recognition has become a new hotspot in machine vision and pattern recognition. In particular, the explosive development of artificial intelligence (AI) and deep learning algorithms in recent years has produced major breakthroughs and attracted scholars to solve the problems encountered in pointer meter identification with deep learning methods. Some researchers have proposed automated methods using computer vision processing techniques, i.e. acquiring the image with a camera and then processing the dial with digital image processing techniques for reading. However, most such algorithms can only run in a specific environment or fixed location and lack high reliability, stability and long-term availability. When workers in an engineering team read the numbers from videos of pointer-type weight indicators, the reading ability behind each worker's professional skill differs, workers grow fatigued, and the working environment can be severe (extreme cold, high radiation, toxic areas), so the instrument panel reading error becomes extremely large, lowering the driller's working efficiency and risking shutdown. The invention mainly solves the problems that the traditional pointer instrument method cannot accurately locate and crop the image area under complex environmental conditions and image backgrounds, and that, under severe oblique distortion of the image, it cannot accurately locate the pointer position and identifies the meter reading inaccurately.
Disclosure of Invention
The invention provides a pointer instrument reading identification method based on a deep convolutional neural network which, following the development of artificial intelligence technology, applies image recognition to pointer instrument reading identification. It solves the problems that traditional image processing methods for pointer instrument identification achieve low numerical accuracy in complex background environments, and that the reading mode and the image-processing precision and efficiency of traditional pointer instruments fall greatly when the construction site environment is poor and image quality is low.
In order to achieve the purpose, the technical scheme of the invention is as follows:
A pointer instrument indicating number identification method based on a deep convolutional neural network comprises the following steps:
Step 1: acquiring a picture training set and a picture test set of pointer instrument readings, and inputting the picture training set into a first deep convolutional neural network for positioning training to obtain an instrument positioning neural network, wherein the picture training set of pointer instrument readings comprises training pictures of pointer instrument readings, each training picture being an original picture with a manually marked area, and the picture test set of pointer instrument readings comprises test pictures of pointer instrument readings, each test picture being an unmarked original picture;
Step 2: analyzing and processing the training pictures of the pointer instrument readings to obtain optimized target pictures, wherein the optimized target pictures share the same image size and the same image rotation center;
Step 3: marking the optimized target pictures to obtain final marked pictures, and inputting the final marked pictures into a second deep convolutional neural network for recognition training to obtain an instrument recognition neural network;
Step 4: importing the test pictures of the pointer instrument readings into the instrument recognition neural network for verification, and obtaining the pointer instrument reading recognition result.
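The four steps above can be sketched as a simple pipeline. Everything below is an illustrative stub — the function names, bounding box, and needle angles are invented placeholders, not values from the patent:

```python
def locate_dial(image):
    # Step 1 stand-in: the instrument positioning network would return
    # the dial's bounding box; here it is hard-coded.
    return {"bbox": (10, 10, 200, 200)}

def rectify(image, bbox):
    # Step 2 stand-in: affine correction to a common size and rotation center.
    return "rectified_" + str(image)

def recognize_pointer(dial_image):
    # Step 3 stand-in: the instrument recognition network would return
    # the needle positions/angles.
    return {"long_needle_angle": 135.0, "short_needle_angle": 40.0}

def read_meter(image):
    # Step 4: chain the stages on one test picture.
    bbox = locate_dial(image)["bbox"]
    dial = rectify(image, bbox)
    return recognize_pointer(dial)

result = read_meter("raw_photo")
```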
Further, the picture training set and picture test set of pointer instrument readings in step 1 are obtained specifically as follows:
Step 1.1: the original pictures are pictures shot in the actual application scene; the required characteristic information area in each original picture is marked with a picture marking tool to obtain the initial training pictures, the required characteristic information area comprising the area where the dial of the pointer instrument is located;
Step 1.2: inputting the training pictures of the initial training picture set into the first deep convolutional neural network, which uses a CNN (convolutional neural network) model to receive each whole initial training picture and obtain a feature map, the feature data information of the feature map comprising the color, size and shape of the dial pointer of the pointer instrument; then obtaining candidate areas on the feature map with several sliding windows of preset size, and mapping each candidate area to obtain a low-dimensional feature of the feature map, the low-dimensional feature comprising at least the area where the dial is located, the background area where the dial is located, and a non-feature area, the non-feature area being the remaining area outside the dial area and its background area;
Step 1.3: sending the low-dimensional features to a first fully connected layer and a second fully connected layer respectively, both belonging to the first deep convolutional neural network, wherein the first fully connected layer classifies and predicts the area where the dial is located and the background area where the dial is located and outputs a corresponding probability value, the output probability value being the output positioning result, and the second fully connected layer regresses the output positioning result to obtain the vertex coordinate values of the rectangular area where the dial is located;
Step 1.4: marking according to the vertex coordinate values of the rectangular area to obtain the training pictures, and inputting the picture training set into the first deep convolutional neural network for positioning training to obtain the instrument positioning neural network.
Further, analyzing and processing the training pictures in step 2 to obtain the optimized target pictures specifically comprises:
Step 2.1: obtaining an affine transformation matrix from the vertex coordinate values of the training picture using the OpenCV matrix function getAffineTransform();
Step 2.2: performing affine transformation on the training picture based on the affine transformation matrix to set the rotation center, rotation angle and output image size of the training image, the affine transformation using the warpAffine() function;
Step 2.3: applying the affine transformation to the rotation center, rotation angle and output image size of the training image based on the affine transformation matrix to obtain the optimized target picture.
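As a concrete illustration of these steps, the 2x3 matrix that OpenCV's getAffineTransform() returns can be reproduced in plain NumPy by solving for the matrix mapping three source points onto three destination points; warpAffine() then applies that matrix to every pixel. The corner coordinates below are made-up example values, not from the patent:

```python
import numpy as np

def get_affine_transform(src, dst):
    # Solve [x, y, 1] @ X = [x', y'] for the three point pairs; M = X.T
    # is the 2x3 matrix cv2.getAffineTransform() would return.
    src = np.asarray(src, dtype=float)      # (3, 2) source corners
    dst = np.asarray(dst, dtype=float)      # (3, 2) target corners
    A = np.hstack([src, np.ones((3, 1))])   # (3, 3) homogeneous source
    return np.linalg.solve(A, dst).T        # (2, 3) affine matrix

def apply_affine(M, point):
    # What cv2.warpAffine() does to each pixel coordinate.
    x, y = point
    return tuple(M @ np.array([x, y, 1.0]))

# Three corners of a tilted dial mapped onto an upright 256x256 target.
src = [(30.0, 40.0), (220.0, 60.0), (50.0, 230.0)]
dst = [(0.0, 0.0), (256.0, 0.0), (0.0, 256.0)]
M = get_affine_transform(src, dst)
```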
Further, the method for obtaining the instrument recognition neural network in step 3 specifically comprises:
Step 3.1: marking the pointer position and pointer shape of the pointer instrument on the optimized target picture with a marking tool;
Step 3.2: inputting the marked optimized target picture into the second deep convolutional neural network for several rounds of pointer position recognition training, thereby obtaining the instrument recognition neural network.
Further, verifying the instrument recognition neural network in step 4 means importing the test pictures of the test set into the instrument recognition neural network for testing; the instrument recognition neural network recognizes the positions of the long and short needles of the instrument pointer in each test picture, and the pointer position pictures produced with the instrument positioning neural network and the instrument recognition neural network are checked against the test pictures, thereby verifying the accuracy of both the instrument positioning neural network and the instrument recognition neural network.
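The patent does not specify a metric for this verification step; a common convention for localization tasks is to count a prediction as correct when the intersection-over-union (IoU) between the predicted needle box and the hand-labelled box exceeds 0.5. A minimal sketch under that assumption (boxes below are invented examples):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# One predicted pointer box per test picture vs. the hand-labelled box.
predicted = [(10, 10, 50, 50), (60, 60, 100, 100)]
labelled = [(12, 12, 48, 52), (0, 0, 20, 20)]
accuracy = sum(iou(p, t) >= 0.5 for p, t in zip(predicted, labelled)) / len(predicted)
```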
The invention has the beneficial effects that:
the invention discloses a pointer instrument registration identification method based on a depth convolution neural network, which is based on the development of an artificial intelligence technology, applies an image identification technology to the registration identification of a pointer instrument based on the visual identification technology of an artificial intelligence in a computer, adopts the depth convolution neural network to position and identify an image of a dial area of the pointer instrument, and does not need to observe, read and record a dial pointer manually; the traditional pointer instrument identification method is characterized in that a Hough transformation method and an improved post-replacement transformation method are used for extracting a pointer position, and then a reading is calculated according to an angle, a first deep convolution neural network is adopted to directly carry out rough extraction and positioning on a dial plate area of a pointer instrument, opencv affine transformation is used for rotating and correcting a picture, a second deep convolution neural network is adopted to carry out fine positioning on the pointer characteristics of the dial plate of the instrument, the fine positioning is directly used for identifying the pointer position, the angle and the reading of the pointer instrument, and the traditional method is omitted for identifying characters and numbers; the position of the pointer in the dial plate area is identified, the dial plate readings are read according to the relative position of the pointer, and then the shot meter pointer position picture is used for checking, so that the accuracy of pointer type meter identification is checked, and the problems that the reading mode of the traditional pointer type meter and the precision and the efficiency of image processing are greatly reduced are solved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic diagram of a pointer instrument indicating number identification method based on a deep convolutional neural network of the present invention;
FIG. 2 is a flow chart diagram of a pointer instrument indicating number identification method based on a deep convolutional neural network of the present invention;
FIG. 3 is a block diagram of a convolutional neural network of a pointer instrument indicating number identification method based on a deep convolutional neural network of the present invention;
FIG. 4 is a labeling diagram of the convolutional neural network annotated with VIA in the pointer instrument indicating number identification method based on the deep convolutional neural network of the present invention;
FIG. 5 is a diagram of a meter pointer positioning area identified by using a training picture in a pointer meter reading identification method based on a deep convolutional neural network of the present invention;
FIG. 6 is a region diagram corrected by affine transformation in the pointer instrument indicating number identification method based on the deep convolutional neural network;
FIG. 7 is a labeling diagram of an area diagram after affine transformation correction in the pointer instrument indicating number identification method based on the deep convolutional neural network;
FIG. 8 is a diagram of a first meter pointer identification area identified by using a training picture in a pointer type meter reading identification method based on a deep convolutional neural network of the present invention;
FIG. 9 is a diagram of a second meter pointer identification area identified by using a training picture in the pointer type meter reading identification method based on the deep convolutional neural network of the present invention;
FIG. 10 is a diagram of a third meter pointer identification area identified by using a training picture in the pointer type meter reading identification method based on the deep convolutional neural network of the present invention;
FIG. 11 is a diagram of a fourth meter pointer identification area identified by using a training picture in the pointer type meter reading identification method based on the deep convolutional neural network.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The embodiment provides a pointer instrument indicating number identification method based on a deep convolutional neural network, as shown in FIG. 1 and FIG. 2, including:
Step 1: acquiring a picture training set and a picture test set of pointer instrument readings, and inputting the picture training set into a first deep convolutional neural network for positioning training to obtain an instrument positioning neural network, wherein the picture training set of pointer instrument readings comprises training pictures of pointer instrument readings, each training picture being an original picture with a manually marked area, and the picture test set of pointer instrument readings comprises test pictures of pointer instrument readings, each test picture being an unmarked original picture;
Step 2: analyzing and processing the training pictures of the pointer instrument readings to obtain optimized target pictures, wherein the optimized target pictures share the same image size and the same image rotation center;
Step 3: marking the optimized target pictures to obtain final marked pictures, and inputting the final marked pictures into a second deep convolutional neural network for recognition training to obtain an instrument recognition neural network;
Step 4: importing the test pictures of the pointer instrument readings into the instrument recognition neural network for verification, and obtaining the pointer instrument reading recognition result.
The method aims at identifying the pointer instrument panel and the pointer position and uses two deep convolutional neural networks. The instrument panel position and the pointer position of the pointer instrument are identified and located on the basis of a CNN (convolutional neural network) to obtain candidate box pictures, which are input into an RPN (region proposal network); the RPN distinguishes the background and foreground of the image in each input candidate box picture and detects and corrects the image position, improving the accuracy and speed of image recognition within the candidate box. Before recognition with the second deep convolutional neural network, affine transformation is used to rotate and correct the image. The deep convolutional neural network is constructed by simulating the visual and perceptual system of living beings: neurons in the convolutional layer correspond to the simple cells of the Hubel-Wiesel model, neurons in the down-sampling layer simulate complex cells, and neurons on a feature map share the same convolution kernel and correspond to simple cells with a specific orientation. The deep convolutional neural network can therefore learn from pixels and audio with stable results, without additional feature-engineering requirements on the data.
The deep convolutional neural network used in this patent is Mask R-CNN, which adopts two convolutional neural networks as its main structure and exploits image segmentation. Mask R-CNN is a powerful and flexible general framework for object instance segmentation that can accurately represent the exact shape and position of the dial; if the dial has a special shape, such as a circle or a rhombus, Mask R-CNN can still accurately delineate the area where it is located. If the image is severely distorted, the region identified by Mask R-CNN can be restored with a distortion-correction algorithm. The method can not only detect the targets in the image but also provide a high-quality segmentation result for each target, supporting tasks such as target detection, target contour recognition and target classification.
The invention obtains the deep convolutional neural network structure using the TensorFlow neural network construction tools. As shown in FIG. 3, the first/second deep convolutional neural network is composed of the following parts.
the first part is a CNN (convolutional neural network) which mainly functions to perform preprocessing operations on the original picture, picture object detection, image contour recognition and extraction of feature data information. The preprocessing operation comprises the steps of carrying out image scaling on an original picture to obtain a picture with a preset size, wherein a configuration file config.py of a mask rcnn can set correlation coefficients of the image scaling, and the correlation coefficients are respectively as follows: min _ dim (shorter edge scaling length); ma _ dim (longer edge scaling length); min _ scale (minimum scale); mode (picture adjustment mode). And then inputting the preset-size picture into a pre-trained convolutional neural network to obtain a corresponding feature map (feature map), wherein the feature data information comprises the color, size, shape and the like of a dial plate pointer of the pointer instrument.
The second part is the RPN network: candidate ROIs (regions of interest) are generated for all points in the feature map, yielding a number of ROIs to be selected; these are input into the RPN for binary classification (foreground or background) and bounding-box regression, and part of the candidate ROIs are filtered out to obtain the position information of the features in the feature map.
The third part is the ROIAlign (region-of-interest alignment) network, which removes the quantization operations on the feature map: the feature map is processed with bilinear interpolation to obtain image values at pixel points with floating-point coordinates, turning the whole feature-aggregation process into a continuous operation and improving the accuracy of the detection model. The coordinate points on the region boundaries of the pictures within the candidate box are pooled and normalized to obtain a fixed-size feature map that can be input into a deep convolutional neural network.
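The bilinear sampling step that ROIAlign uses in place of coordinate rounding can be shown in a few lines (a sketch of the sampling operation only, not the full ROIAlign pooling):

```python
import numpy as np

def bilinear(feature, y, x):
    # Sample a 2-D feature map at a floating-point coordinate by weighting
    # the four surrounding integer-grid pixels.
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = y0 + 1, x0 + 1
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feature[y0, x0] +
            (1 - wy) * wx * feature[y0, x1] +
            wy * (1 - wx) * feature[y1, x0] +
            wy * wx * feature[y1, x1])

fmap = np.array([[0.0, 1.0],
                 [2.0, 3.0]])
centre_value = bilinear(fmap, 0.5, 0.5)  # centre of the 2x2 grid
```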
In an embodiment, acquiring the picture training set and the picture test set of pointer instrument readings in step 1 specifically comprises:
Step 1.1: the original picture is a picture shot in the actual application scene; the required feature information area in the original picture is marked with the picture marking tool VGG Image Annotator (VIA) to obtain the initial training picture, where the required feature information area includes the area where the dial of the pointer instrument is located;
step 1.2: inputting training pictures in the training initial picture set into a first deep convolutional neural network, wherein the first deep convolutional neural network receives the whole initial training picture by adopting a CNN (convolutional neural network) model to obtain a characteristic diagram, and the characteristic data information of the characteristic diagram comprises the color, the size and the shape of a dial plate pointer of a pointer instrument; then, obtaining candidate areas by adopting a plurality of 3-by-3 sliding windows on the feature map, mapping each candidate area to obtain a low latitude feature of the feature map, wherein the low latitude feature at least comprises an area where the dial plate is located, a background area where the dial plate is located and a non-feature area, and the non-feature area is other areas except the area where the dial plate is located and the background area where the dial plate is located;
step 1.3: respectively sending the low-dimensional features to a first full-connection layer and a second full-connection layer, wherein the first full-connection layer and the second full-connection layer belong to a first deep convolutional neural network, the first full-connection layer is used for classifying and predicting an area where the dial plate is located and a background area where the dial plate is located and outputting a corresponding probability value, the output probability value is an output positioning result, and the second full-connection layer is used for regressing the output positioning result to obtain a vertex coordinate value of a rectangular area where the dial plate is located;
step 1.4: and marking according to the vertex coordinate value of the rectangular area to obtain the training picture, and inputting the picture training set into a first deep convolution neural network for positioning training to obtain an instrument positioning neural network. The essence of the step 1.1 is that after an original picture is input into a built deep convolutional neural network, firstly, CNN (convolutional neural network) carries out preprocessing operation on the original picture, and extracts a part containing feature data information in the original picture to obtain a feature map; the essence of step 1.2 is to set a preselected ROI for all points in the feature map by using an RPN network and perform filtering processing, so as to realize the acquisition of position information data of the features of the original picture. Then, canceling quantization through ROIAlign, keeping floating point number and performing pooling to the maximum, and generating a corresponding Fixed Size Feature Map (Fixed Size Feature Map); step 1.3 is essentially to process the fixed-size feature map generated by roilign by a full connected layer to obtain classification information of the feature map (for example, the position of the area where the dial is located, and the relative position area of the pointer); in addition, the Fixed Size Feature Map generated by ROIAlign is processed by a mask component to obtain the profile information of the important Feature, wherein the profile information of the important Feature comprises the area where the position of the instrument pointer is located.
In the picture marking process, 200 pointer instrument pictures are marked manually. The picture marking tool used is VGG Image Annotator, developed by the Visual Geometry Group; it is open-source image marking software that can be used both offline and online, and supports manual marking with rectangles, polygons, ellipses, circles, points, lines and so on. The marked pictures are exported as csv and json files. Fig. 4 shows an example result of labeling with VIA. After image marking is finished, the exported json file and the 200 marked images are input into the constructed first deep convolutional neural network for training. During TensorFlow training, the weight parameters of each neuron in the neural network and of the connections between neurons are gradually adjusted by gradient descent, until the region predicted by the neural network and the manually marked region tend to be consistent. After training is completed, the structure and parameters of the first deep convolutional neural network are saved as a file in HDF5 format; the generated HDF5 file is the instrument positioning neural network for the pointer instrument panel. Finally, 200 unmarked test pictures are input into the trained pointer instrument panel positioning neural network to identify them and confirm the accuracy.
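The gradient-descent adjustment described above can be illustrated on a single linear neuron. This toy pure-Python sketch stands in for TensorFlow's optimizer; the learning rate and data are illustrative:

```python
def train_neuron(xs, ys, lr=0.1, epochs=200):
    """Fit pred = w*x + b to labeled targets by per-sample gradient
    descent on a squared loss -- the same update rule applied, at scale,
    to every weight in the network during training."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in zip(xs, ys):
            err = (w * x + b) - t   # gradient of 0.5*(pred - t)**2 w.r.t. pred
            w -= lr * err * x       # chain rule: d pred / d w = x
            b -= lr * err           # chain rule: d pred / d b = 1
    return w, b
```

On exactly linear labeled data (here y = 2x + 1), the weights converge to the labels' generating parameters, mirroring how the predicted region gradually approaches the manually marked region.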
In a specific embodiment, as shown in figs. 5 to 6, analyzing and processing the training picture in step 2 to obtain an optimized target picture specifically comprises:
step 2.1: the step 2 of analyzing and processing the training picture to obtain an optimized target picture specifically includes:
step 2.1: obtaining a matrix function based on OpenCV (open source video coding) by the vertex coordinate value of the training picture, wherein the matrix function is getAffine Transform (), and obtaining an affine transformation matrix according to the matrix function;
step 2.2: performing affine transformation on the training picture based on an affine transformation matrix to set a rotation center, a rotation angle and a size of an output image of the training image, wherein the affine transformation adopts a wrapAffine function;
step 2.3: and carrying out affine transformation on the rotation center and the rotation angle of the training image and the size of an output image based on the affine transformation matrix to obtain an optimized target picture.
Further, the method for obtaining the instrument recognition neural network in the step 3 specifically includes:
step 3.1: marking the pointer position and the pointer shape of the pointer instrument by the optimized target picture by using a marking tool;
step 3.2: and inputting the marked optimization target picture into a second deep convolution neural network for carrying out pointer position recognition training for a plurality of times, wherein the HDF5 file obtained by the last pointer position recognition training is the pointer instrument recognition neural network.
This step acquires the pointer instrument recognition neural network. The pointer position, direction and shape in the 200 pictures corrected by affine transformation are manually marked a second time, again using VGG Image Annotator, as shown in fig. 7. The 200 pictures marked the second time are stored in json format and, together with the json file, input into the constructed second deep convolutional neural network for training; the parameters of each neuron in the neural network are gradually adjusted by gradient descent until the error between the regions and categories predicted by the neural network and the manually marked regions and categories is minimized. After training is finished, the structure and parameters of the second deep convolutional neural network are saved as a file in HDF5 format; the generated HDF5 file is the pointer instrument recognition neural network.
In a specific embodiment, as shown in figs. 8 to 11, verifying the instrument recognition neural network in step 4 means importing the test pictures in the test set into the instrument recognition neural network for testing. The instrument recognition neural network recognizes the relative positions and relative angles of the long and short needles of the instrument pointer in each test picture, and then calculates the dial reading from the pointer positions; the instrument positioning neural network and the instrument recognition neural network are verified on the test pictures of instrument pointer positions, so as to check the accuracy of both networks. First, a pointer instrument image is randomly selected and input into the constructed deep convolutional neural network, finally achieving accurate identification of the instrument pointer position and reading of the instrument indication. In the end, 1000 pictures were used to test the system and only about 30 were identified wrongly, giving a final accuracy of 97%; the system passed the test operation of the supervision department, which finally determined the accuracy of the recognition system to be 97%.
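The final step, converting a recognized pointer position into a dial reading, can be sketched as a linear interpolation over the scale arc. The 270-degree sweep and the 0–100 value range below are illustrative assumptions, not values fixed by the method above:

```python
import math

def dial_reading(cx, cy, tip_x, tip_y,
                 min_angle=-135.0, max_angle=135.0,
                 min_value=0.0, max_value=100.0):
    """Convert a detected pointer-tip position into a dial reading.
    (cx, cy) is the dial center and (tip_x, tip_y) the pointer tip, in
    image coordinates (y grows downward). The pointer angle is measured
    clockwise from 12 o'clock, then mapped linearly onto the scale."""
    angle = math.degrees(math.atan2(tip_x - cx, cy - tip_y))
    frac = (angle - min_angle) / (max_angle - min_angle)
    return min_value + frac * (max_value - min_value)
```

For example, a pointer straight up on such a gauge sits exactly mid-scale, while pointers at the lower-left and lower-right ends of the arc give the minimum and maximum readings.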
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (5)

1. A pointer instrument reading identification method based on a deep convolutional neural network is characterized by comprising the following steps:
step 1: acquiring a picture training set and a picture test set of the indicating number of the pointer instrument, inputting the picture training set to a first depth convolution neural network for positioning training to obtain an instrument positioning neural network, wherein the picture training set of the indicating number of the pointer instrument comprises a training picture of the indicating number of the pointer instrument, the training picture of the indicating number of the pointer instrument is an original picture with a manually marked area, the picture test set of the indicating number of the pointer instrument comprises a test picture of the indicating number of the pointer instrument, and the test picture of the indicating number of the pointer instrument is an unmarked original picture;
Step 2: analyze and process the training pictures of pointer instrument readings to obtain optimized target pictures, wherein the optimized target pictures have the same image size and the same image rotation center;
Step 3: mark the optimized target pictures to obtain final marked pictures, and input the final marked pictures into a second deep convolutional neural network for recognition training to obtain an instrument recognition neural network;
Step 4: import the test pictures of pointer instrument readings into the instrument recognition neural network for verification, and obtain the pointer instrument reading recognition result.
2. The pointer instrument reading identification method based on a deep convolutional neural network as claimed in claim 1, wherein acquiring the picture training set and the picture test set of pointer instrument readings in step 1 specifically comprises:
Step 1.1: the original picture is a picture shot in an actual application scene, a required characteristic information area in the original picture is marked based on a picture marking tool to obtain the training initial picture, and the required characteristic information area comprises an area where a dial plate of a pointer instrument is located;
step 1.2: inputting training pictures in the training initial picture set into a first deep convolutional neural network, wherein the first deep convolutional neural network receives the whole initial training picture by adopting a CNN (convolutional neural network) model to obtain a characteristic diagram, and the characteristic data information of the characteristic diagram comprises the color, the size and the shape of a dial plate pointer of a pointer instrument; secondly, obtaining candidate areas by adopting a plurality of sliding windows with preset sizes on the feature map, mapping each candidate area to obtain a low latitude feature of the feature map, wherein the low latitude feature at least comprises an area where a dial plate is located, a background area where the dial plate is located and a non-feature area, and the non-feature area is other areas except the area where the dial plate is located and the background area where the dial plate is located;
step 1.3: respectively sending the low-dimensional features to a first full connection layer and a second full connection layer, wherein the first full connection layer and the second full connection layer belong to a first deep convolutional neural network, the first full connection layer is used for classifying and predicting an area where the dial plate is located and a background area where the dial plate is located and outputting a corresponding probability value, the output probability value is an output positioning result, and the second full connection layer is used for regressing the output positioning result to obtain a vertex coordinate value of a rectangular area where the dial plate is located;
step 1.4: and marking according to the vertex coordinate value of the rectangular area to obtain the training picture, and inputting the picture training set into a first deep convolution neural network for positioning training to obtain an instrument positioning neural network.
3. The pointer instrument reading identification method based on a deep convolutional neural network as claimed in claim 2, wherein analyzing and processing the training picture in step 2 to obtain the optimized target picture specifically comprises:
step 2.1: obtaining a matrix function based on OpenCV (open source video coding) by the vertex coordinate value of the training picture, wherein the matrix function is getAffine Transform (), and obtaining an affine transformation matrix according to the matrix function;
step 2.2: performing affine transformation on the training picture based on an affine transformation matrix to set a rotation center, a rotation angle and a size of an output image of the training image, wherein the affine transformation adopts a wrapAffine function;
step 2.3: and carrying out affine transformation on the rotation center and the rotation angle of the training image and the size of an output image based on the affine transformation matrix to obtain an optimized target picture.
4. The pointer instrument reading identification method based on a deep convolutional neural network as claimed in claim 1, wherein the method for obtaining the instrument recognition neural network in step 3 specifically comprises:
step 3.1: marking the pointer position and the pointer shape of the pointer instrument by the optimized target picture by adopting a marking tool;
step 3.2: and inputting the marked optimized target picture into a second deep convolution neural network for carrying out pointer position recognition training for a plurality of times, thereby obtaining an instrument recognition neural network.
5. The pointer instrument reading identification method based on a deep convolutional neural network as claimed in claim 1, wherein verifying the instrument recognition neural network in step 4 means importing the test pictures in the test set into the instrument recognition neural network for testing; the instrument recognition neural network recognizes the positions of the long and short needles of the instrument pointer in each test picture, and the instrument positioning neural network and the instrument recognition neural network are verified on the test pictures of instrument pointer positions, so as to check the accuracy of the obtained instrument positioning neural network and instrument recognition neural network.
CN202210922982.4A 2022-08-02 2022-08-02 Pointer instrument indicating number identification method based on deep convolutional neural network Pending CN115311447A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210922982.4A CN115311447A (en) 2022-08-02 2022-08-02 Pointer instrument indicating number identification method based on deep convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210922982.4A CN115311447A (en) 2022-08-02 2022-08-02 Pointer instrument indicating number identification method based on deep convolutional neural network

Publications (1)

Publication Number Publication Date
CN115311447A true CN115311447A (en) 2022-11-08

Family

ID=83859119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210922982.4A Pending CN115311447A (en) 2022-08-02 2022-08-02 Pointer instrument indicating number identification method based on deep convolutional neural network

Country Status (1)

Country Link
CN (1) CN115311447A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115980116A (en) * 2022-11-22 2023-04-18 宁波博信电器有限公司 High-temperature-resistant detection method and system for instrument panel, storage medium and intelligent terminal

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111160256A (en) * 2019-12-30 2020-05-15 武汉科技大学 Automatic identification method and system for transformer substation pointer instrument
CN111950330A (en) * 2019-05-16 2020-11-17 杭州测质成科技有限公司 Pointer instrument indicating number detection method based on target detection
CN113283419A (en) * 2021-04-29 2021-08-20 国网浙江省电力有限公司湖州供电公司 Convolutional neural network pointer instrument image reading identification method based on attention

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111950330A (en) * 2019-05-16 2020-11-17 杭州测质成科技有限公司 Pointer instrument indicating number detection method based on target detection
CN111160256A (en) * 2019-12-30 2020-05-15 武汉科技大学 Automatic identification method and system for transformer substation pointer instrument
CN113283419A (en) * 2021-04-29 2021-08-20 国网浙江省电力有限公司湖州供电公司 Convolutional neural network pointer instrument image reading identification method based on attention

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wu Xiaowei: "Research on Intelligent Recognition of Pointer Instruments in Mines" *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115980116A (en) * 2022-11-22 2023-04-18 宁波博信电器有限公司 High-temperature-resistant detection method and system for instrument panel, storage medium and intelligent terminal
CN115980116B (en) * 2022-11-22 2023-07-14 宁波博信电器有限公司 High-temperature-resistant detection method and system for instrument panel, storage medium and intelligent terminal

Similar Documents

Publication Publication Date Title
CN112949564B (en) Pointer type instrument automatic reading method based on deep learning
CN110659636B (en) Pointer instrument reading identification method based on deep learning
CN106875381B (en) Mobile phone shell defect detection method based on deep learning
CN111950330B (en) Pointer instrument indication detection method based on target detection
CN103324937B (en) The method and apparatus of label target
CN109635806B (en) Ammeter value identification method based on residual error network
CN108564085B (en) Method for automatically reading of pointer type instrument
CN108596221B (en) Image recognition method and device for scale reading
CN110992349A (en) Underground pipeline abnormity automatic positioning and identification method based on deep learning
CN111368906B (en) Pointer type oil level meter reading identification method based on deep learning
CN108446588A (en) A kind of double phase remote sensing image variation detection methods and system
CN111753877B (en) Product quality detection method based on deep neural network migration learning
CN114241469A (en) Information identification method and device for electricity meter rotation process
CN116844147A (en) Pointer instrument identification and abnormal alarm method based on deep learning
CN115019294A (en) Pointer instrument reading identification method and system
CN115311447A (en) Pointer instrument indicating number identification method based on deep convolutional neural network
CN114627461A (en) Method and system for high-precision identification of water gauge data based on artificial intelligence
CN112561885B (en) YOLOv 4-tiny-based gate valve opening detection method
CN112529003A (en) Instrument panel digital identification method based on fast-RCNN
CN116310263A (en) Pointer type aviation horizon instrument indication automatic reading implementation method
CN116403223A (en) Pointer instrument reading identification method and system based on machine learning
CN116612461A (en) Target detection-based pointer instrument whole-process automatic reading method
Zhang et al. A YOLOv3‐Based Industrial Instrument Classification and Reading Recognition Method
CN114821044B (en) Square pointer instrument indication recognition method based on gradient transformation
CN114494778B (en) Image acquisition processing system for remote monitoring of power equipment and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination