CN113837159A - Instrument reading identification method and device based on machine vision - Google Patents

Instrument reading identification method and device based on machine vision

Info

Publication number
CN113837159A
Authority
CN
China
Prior art keywords
image
identified
instrument
indicating
pointer
Prior art date
Legal status
Pending
Application number
CN202111422893.5A
Other languages
Chinese (zh)
Inventor
申永利
周岐文
李新刚
Current Assignee
China National Chemical Communications Construction Group Co., Ltd.
Original Assignee
China National Chemical Communications Construction Group Co., Ltd.
Priority date
Filing date
Publication date
Application filed by China National Chemical Communications Construction Group Co., Ltd.
Priority to CN202111422893.5A
Publication of CN113837159A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides an instrument reading identification method and device based on machine vision, and relates to the technical field of construction engineering. The scheme is as follows: image data in a specific range are acquired, the image data including a reading image to be identified of at least one instrument; the image data are input into a trained real-time target detection framework Yolov5 network model to obtain the reading image to be identified of the instrument and its type; and the reading image to be identified is input into an image detection model determined according to the type to obtain the reading of the instrument. The reading of the instrument can therefore be read automatically and uploaded through an interface connected to the business system, which guarantees the accuracy of the reading and is highly efficient.

Description

Instrument reading identification method and device based on machine vision
Technical Field
The disclosure relates to the technical field of construction engineering, and in particular to an instrument reading identification method and device based on machine vision.
Background
At present, construction sites and laboratories are equipped with instruments and meters that detect various indices and display the measured values.
In the related art, instrument readings are taken manually and entered into the business system. Manual reading, however, is prone to misreadings, missed readings and entry errors, and manual entry is inefficient and costly.
Disclosure of Invention
The disclosure provides an instrument reading identification method and device based on machine vision.
According to a first aspect of the present disclosure, there is provided a machine vision-based instrument reading identification method, including: acquiring image data in a specific range, the image data including a reading image to be identified of at least one instrument; inputting the image data into a trained real-time target detection framework Yolov5 network model to obtain the reading image to be identified of the instrument and its type; and inputting the reading image to be identified into an image detection model determined according to the type to obtain the reading of the instrument. In this way, the reading image to be identified and its type are obtained by processing the image data, an image detection model suited to that type is determined, and the reading image is input into that model to obtain the reading of the instrument. The reading can thus be read automatically and uploaded through an interface connected to the business system, which guarantees accuracy and is highly efficient.
According to a second aspect of the present disclosure, there is provided a machine vision-based instrument reading identification apparatus, including an image acquisition unit, a detection unit and a processing unit.
The image acquisition unit is used for acquiring image data in a specific range, the image data including a reading image to be identified of at least one instrument. The detection unit is used for inputting the image data into a trained real-time target detection framework Yolov5 network model and obtaining the reading image to be identified of the instrument and its type. The processing unit is used for inputting the reading image to be identified into an image detection model determined according to the type and obtaining the reading of the instrument. In this way, the reading can be read automatically and uploaded through an interface connected to the business system, which guarantees accuracy and is highly efficient.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a flowchart according to a first embodiment of the present disclosure;
FIG. 2 is a flowchart according to a second embodiment of the present disclosure;
FIG. 3 is a flowchart of step S40 according to the second embodiment of the present disclosure;
FIG. 4 is another flowchart of step S40 according to the second embodiment of the present disclosure;
FIG. 5 is a block diagram according to a third embodiment of the present disclosure;
FIG. 6 is another block diagram according to the third embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
The present disclosure provides a machine vision-based instrument reading identification method. FIG. 1 is a flowchart according to a first embodiment of the present disclosure.
As shown in FIG. 1, the method includes, but is not limited to, the following steps:
S1: acquiring image data in a specific range; wherein the image data comprises a reading image to be identified of at least one meter.
In the embodiment of the present disclosure, an image acquisition device may be installed at a position from which the instrument can be photographed and used to acquire image data in a specific range; alternatively, the image data may be acquired by a user photographing the specific range containing the instrument with a hand-held terminal.
It should be noted that the above examples are only for illustrative purposes and are not intended to be specific limitations on the embodiments of the present disclosure, and those skilled in the art may adopt any other ways to implement the above functions as needed.
In the embodiment of the present disclosure, the image data acquired in the specific range may be a single photograph or video data.
In the embodiment of the present disclosure, the acquired image data in the specific range includes a reading image to be identified of at least one meter. It is understood that the image data may include the reading image to be identified of one meter or of a plurality of meters, and that the reading image to be identified is a part of the image data. For example, when the image data is a whole image, the reading image to be identified may be a part of that whole image, which may also include the environment around the meter and the like.
S2: inputting the image data into a trained real-time target detection framework Yolov5 network model, and acquiring the reading image to be identified of the instrument and its type.
In the embodiment of the present disclosure, after the image data in the specific range are acquired, they are input into the trained Yolov5 network model. It can be understood that the Yolov5 network model processes the image data in real time and can handle both single photographs and video data.
The image data are input into the trained Yolov5 network model, which automatically detects the reading images to be identified of the instruments in the image data and their types; the reading images to be identified are then generated by cropping.
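For illustration only, this detection step might be sketched as follows with the publicly documented torch.hub interface of the ultralytics/yolov5 repository; the weight file meter_det.pt and the class names pointer_meter and digital_meter are assumptions of this sketch and are not specified in the disclosure.

```python
# Minimal sketch of S2, assuming a Yolov5 model trained on two classes
# ("pointer_meter" and "digital_meter") and saved as meter_det.pt.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="meter_det.pt")

results = model("site_photo.jpg")       # a single photo; video frames are handled the same way
detections = results.pandas().xyxy[0]   # columns: xmin, ymin, xmax, ymax, confidence, class, name

for _, det in detections.iterrows():
    print(det["name"], [det["xmin"], det["ymin"], det["xmax"], det["ymax"]])
```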
S3: inputting the reading image to be identified into an image detection model determined according to the type, and acquiring the reading of the instrument.
In the embodiment of the disclosure, after the image data are input into the trained real-time target detection framework Yolov5 network model, the reading image to be identified and its type are obtained. An image detection model suited to recognizing that type of reading is then determined according to the type; once determined, the reading image to be identified is input into the corresponding image detection model and the reading of the meter is obtained.
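A minimal sketch of how the image detection model can be selected according to the detected type is shown below; the function names are hypothetical placeholders for the pointer-type and digital-type models described in the second embodiment, not names used by the disclosure.

```python
def read_pointer_meter(crop):
    # placeholder for the pointer-type image detection model (steps S41A-S45A)
    raise NotImplementedError

def read_digital_meter(crop):
    # placeholder for the digital-type image detection model (steps S46B-S47B)
    raise NotImplementedError

READERS = {"pointer_meter": read_pointer_meter, "digital_meter": read_digital_meter}

def recognize_reading(crop, meter_type):
    """Route the cropped reading image to the model determined by its type."""
    return READERS[meter_type](crop)
```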
By implementing the embodiment of the disclosure, image data in a specific range are acquired, the image data including a reading image to be identified of at least one instrument; the image data are input into a trained real-time target detection framework Yolov5 network model to obtain the reading image to be identified of the instrument and its type; and the reading image to be identified is input into an image detection model determined according to the type to obtain the reading of the instrument. In this way, the reading image to be identified and its type are obtained by processing the image data, the image detection model suited to that type is determined, the reading image is input into that model, and the reading of the instrument is obtained. The reading can thus be read automatically and uploaded through an interface connected to the business system, which guarantees accuracy and is highly efficient.
FIG. 2 is a flowchart of a second embodiment of the present disclosure.
As shown in FIG. 2, the machine vision-based instrument reading identification method provided by the embodiment of the present disclosure includes, but is not limited to, the following steps:
S10: acquiring image data in a specific range; wherein the image data comprises a reading image to be identified of at least one meter.
For the description of S10 in the embodiment of the present disclosure, reference may be made to the description of S1 in the above embodiment, which is not repeated here.
S20: inputting the image data into the trained Yolov5 network model, detecting a first reading image to be identified of a pointer instrument and/or a second reading image to be identified of a digital instrument, and generating a first detection frame for the first reading image to be identified of the pointer instrument and/or a second detection frame for the second reading image to be identified of the digital instrument.
In the embodiment of the present disclosure, the image data comprise a reading image to be identified of at least one meter: for example, a first reading image to be identified of at least one pointer instrument, or a second reading image to be identified of at least one digital instrument, or both, which is not particularly limited by the embodiment of the present disclosure.
In the embodiment of the disclosure, the image data are input into the trained Yolov5 network model, the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument is detected, and a first detection frame for the first reading image and/or a second detection frame for the second reading image is generated.
It can be understood that the first detection frame marks the first reading image to be identified of a pointer instrument and the second detection frame marks the second reading image to be identified of a digital instrument; there may be a plurality of first detection frames when first reading images of a plurality of pointer instruments are present, and a plurality of second detection frames when second reading images of a plurality of digital instruments are present.
S30: cropping the image data through the first detection frame and/or the second detection frame to generate the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument.
When a first reading image to be identified of a pointer instrument is present in the image data, the first detection frame marks it, and the image region enclosed by the first detection frame is cropped out of the image data to generate the first reading image to be identified of the pointer instrument.
Similarly, when a second reading image to be identified of a digital instrument is present in the image data, the region marked by the second detection frame is cropped out to generate the second reading image to be identified of the digital instrument.
Likewise, when both are present in the image data, the regions marked by the first detection frame and the second detection frame are cropped out, generating the first reading image to be identified of the pointer instrument and the second reading image to be identified of the digital instrument.
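A cropping sketch that continues the Yolov5 example above; it assumes the source image is loaded as an OpenCV/NumPy array and that the variable detections holds the detection frames returned by that model.

```python
import cv2

image = cv2.imread("site_photo.jpg")
crops = []
for _, det in detections.iterrows():    # detection frames from the Yolov5 sketch above
    x1, y1, x2, y2 = (int(det[k]) for k in ("xmin", "ymin", "xmax", "ymax"))
    crops.append((det["name"], image[y1:y2, x1:x2]))   # (meter type, cropped reading image)
```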
S40: inputting the first reading image to be identified into a pointer-type image detection model to obtain the reading of the pointer instrument; and/or inputting the second reading image to be identified into a digital-type image detection model to obtain the reading of the digital instrument.
It should be noted that the image detection model in the embodiment of the present disclosure includes a pointer-type image detection model and/or a digital-type image detection model: the first reading image to be identified is input into the pointer-type image detection model to obtain the reading of the pointer instrument, and/or the second reading image to be identified is input into the digital-type image detection model to obtain the reading of the digital instrument.
As shown in FIG. 3, in the machine vision-based instrument reading identification method provided by the embodiment of the present disclosure, S40 (inputting the first reading image to be identified into the pointer-type image detection model to obtain the reading of the pointer instrument) comprises the following sub-steps:
S41A: processing the first reading image to be identified to acquire a digital image and a first straight line of the pointer position.
It is understood that the first reading image to be identified of a pointer instrument contains a pointer and numbers. In general there is a single pointer, and the number it points to is the reading of the pointer instrument; there may be one number or several, each with a different value.
In the embodiment of the present disclosure, the first reading image to be identified is processed to acquire the digital image and the first straight line of the pointer position.
S42A: inputting the digital image into a deep network model, and acquiring the maximum number s1 and the minimum number s2 in the digital image.
It is understood that the pointer instrument may carry several numbers, and each number may have one digit, two digits or more, which is not specifically limited by the embodiment of the present disclosure.
In the embodiment of the disclosure, the digital image is input into the deep network model, the numbers in the digital image are recognized, and the maximum number s1 and the minimum number s2 are obtained.
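The disclosure does not name a specific deep network, so the sketch below simply assumes a hypothetical classify_number function that returns the numeric value printed in a segmented scale-number patch and takes the maximum and minimum over all patches.

```python
def extract_scale_range(number_patches, classify_number):
    """number_patches: image patches of the scale numbers segmented from the digital image."""
    values = [classify_number(patch) for patch in number_patches]
    return max(values), min(values)     # maximum number s1, minimum number s2
```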
S43A: fitting, by the least squares method, a second straight line through the maximum number s1 and the pointer end point and a third straight line through the minimum number s2 and the pointer end point, and calculating a first angle formed by the second straight line and the third straight line.
It can be understood that a pointer instrument generally has a single pointer, one end of which is fixed on the dial, and the dial numbers are uniformly distributed along an arc centered on that fixed end of the pointer.
In the embodiment of the present disclosure, the second straight line through the maximum number s1 and the pointer end point and the third straight line through the minimum number s2 and the pointer end point are fitted by the least squares method, and the first angle a formed between them is calculated.
S44A: calculating a second angle formed by the first straight line and the third straight line.
It will be appreciated that the first straight line of the pointer position forms an angle with the third straight line fitted through the minimum number s2 and the pointer end point; this second angle b between the first straight line and the third straight line is calculated.
S45A: calculating the reading of the pointer instrument according to the first angle and the second angle.
In the embodiment of the disclosure, the reading of the pointer instrument is calculated as (b/a) × (s1 - s2) according to the first angle a and the second angle b.
In some embodiments, S41A (processing the first reading image to be identified to obtain the digital image and the first straight line of the pointer position) comprises:
performing graying and binarization on the first reading image to be identified, locating the pointer position by the least squares method, and segmenting to generate the digital image and the first straight line of the pointer position.
It can be understood that the first reading image to be identified may be a color image; it is converted into a grayscale image by graying and is then binarized.
In the embodiment of the disclosure, the first reading image to be identified is grayed and binarized, the pointer position is located and segmented by the least squares method, the digital image is segmented out, and the first straight line of the pointer position is fitted.
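A preprocessing sketch with OpenCV; the Otsu threshold and the use of cv2.fitLine for the least-squares line fit are illustrative assumptions, and isolating the pointer pixels from the digits (for example by keeping the longest connected component) is omitted for brevity.

```python
import cv2
import numpy as np

def preprocess_pointer_image(crop):
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)                          # graying
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)     # binarization
    ys, xs = np.nonzero(binary)                                            # foreground pixels
    pts = np.column_stack((xs, ys)).astype(np.float32)
    vx, vy, x0, y0 = cv2.fitLine(pts, cv2.DIST_L2, 0, 0.01, 0.01).ravel()  # least-squares line
    return binary, (vx, vy, x0, y0)   # binary image and the first straight line of the pointer
```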
As shown in FIG. 4, in the machine vision-based instrument reading identification method provided by the embodiment of the present disclosure, S40 (inputting the second reading image to be identified into the digital-type image detection model to obtain the reading of the digital instrument) comprises the following steps:
S46B: inputting the second reading image to be identified into a classification network ResNet model for recognition, and acquiring at least one numerical value and its position coordinates; wherein the numerical values include the digits 0 to 9 and the decimal point.
It will be appreciated that a digital instrument may display at least one numerical value, which may be a digit from 0 to 9, and may also display a decimal point; each value occupies a different position, and the values are typically arranged in a row from left to right.
In the embodiment of the disclosure, the classification network ResNet model recognizes the at least one numerical value in the second reading image to be identified: a single value is recognized when one value is present, and each value is recognized when several values are present.
S47B: combining the numerical values from left to right according to the position coordinates to obtain the reading of the digital instrument.
In the embodiment of the disclosure, the numerical values are combined from left to right according to their position coordinates, giving the reading of the digital instrument.
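A sketch of step S47B; each recognized value is assumed to arrive as an (x, label) pair from the ResNet classifier of S46B, with the decimal point returned as the label '.'.

```python
def combine_digits(recognitions):
    """recognitions: list of (x_coordinate, label) pairs, label being '0'-'9' or '.'."""
    ordered = sorted(recognitions, key=lambda r: r[0])   # left to right by x coordinate
    return float("".join(label for _, label in ordered))

# Example: combine_digits([(10, '2'), (42, '.'), (25, '3'), (60, '7')]) -> 23.7
```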
FIG. 5 is a block diagram according to a third embodiment of the present disclosure.
As shown in FIG. 5, an embodiment of the present disclosure provides a machine vision-based instrument reading identification apparatus 1. The instrument reading identification apparatus 1 includes an image acquisition unit 11, a detection unit 12 and a processing unit 13.
The image acquisition unit 11 is used for acquiring image data in a specific range; wherein the image data comprises a reading image to be identified of at least one meter.
The detection unit 12 is used for inputting the image data into a trained real-time target detection framework Yolov5 network model, and acquiring the reading image to be identified of the instrument and its type.
The processing unit 13 is used for inputting the reading image to be identified into the image detection model determined according to the type and acquiring the reading of the instrument.
Fig. 6 is another block diagram according to a third embodiment of the present disclosure.
As shown in FIG. 6, in the machine vision-based instrument reading identification apparatus 1 provided by the embodiment of the present disclosure, the detection unit 12 further includes:
a detection frame labeling unit 121, used for inputting the image data into the trained Yolov5 network model, detecting the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument, and generating a first detection frame for the first reading image to be identified of the pointer instrument and/or a second detection frame for the second reading image to be identified of the digital instrument; and
a cropping unit 122, used for cropping the image data through the first detection frame and/or the second detection frame to generate the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument.
It should be noted that the above explanation of the machine vision-based instrument reading identification method also applies to the machine vision-based instrument reading identification apparatus of this embodiment and is not repeated here.
By implementing the embodiment of the present disclosure, the image acquisition unit 11 acquires image data in a specific range, the image data including a reading image to be identified of at least one instrument; the detection unit 12 inputs the image data into the trained real-time target detection framework Yolov5 network model to obtain the reading image to be identified of the instrument and its type; and the processing unit 13 inputs the reading image to be identified into the image detection model determined according to the type to obtain the reading of the instrument. In this way, the reading image to be identified and its type are obtained by processing the image data, the image detection model suited to that type is determined, the reading image is input into that model, and the reading of the instrument is obtained. The reading can thus be read automatically and uploaded through an interface connected to the business system, which guarantees accuracy and is highly efficient.
Throughout the specification and claims, the term "comprising" is to be interpreted as open-ended and inclusive, meaning "including, but not limited to," unless the context requires otherwise. In the description herein, the terms "some embodiments," "exemplary embodiments," "examples," and the like are intended to indicate that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the disclosure. The schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be included in any suitable manner in any one or more embodiments or examples.
"Plurality" means two or more unless otherwise specified. The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature.
The use of "for" herein means open and inclusive language that does not exclude devices adapted or configured to perform additional tasks or steps.
Additionally, the use of "based on" means open and inclusive, as a process, step, calculation, or other action that is "based on" one or more stated conditions or values may in practice be based on additional conditions or values beyond those stated.
It should be understood that the various forms of flow shown above may be used, with steps reordered, added or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions of the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (8)

1. A meter reading identification method based on machine vision is characterized by comprising the following steps:
acquiring image data in a specific range; wherein the image data comprises a reading image to be identified of at least one meter;
inputting the image data into a trained real-time target detection framework Yolov5 network model, and acquiring the reading image to be identified of the meter and its type;
and inputting the reading image to be identified into an image detection model determined according to the type, and acquiring the reading of the meter.
2. The method of claim 1, wherein inputting the image data into the trained real-time target detection framework Yolov5 network model and acquiring the reading image to be identified of the meter and its type comprises:
inputting the image data into the trained Yolov5 network model, detecting a first reading image to be identified of a pointer instrument and/or a second reading image to be identified of a digital instrument, and generating a first detection frame for the first reading image to be identified of the pointer instrument and/or a second detection frame for the second reading image to be identified of the digital instrument;
and cropping the image data through the first detection frame and/or the second detection frame to generate the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument.
3. The method of claim 2, wherein the image detection model comprises a pointer-type image detection model and/or a digital-type image detection model, and inputting the reading image to be identified into the image detection model determined according to the type and acquiring the reading of the meter comprises:
inputting the first reading image to be identified into the pointer-type image detection model to obtain the reading of the pointer instrument;
and/or inputting the second reading image to be identified into the digital-type image detection model to obtain the reading of the digital instrument.
4. The method of claim 3, wherein inputting the first reading image to be identified into the pointer-type image detection model to obtain the reading of the pointer instrument comprises:
processing the first reading image to be identified to obtain a digital image and a first straight line of the pointer position;
inputting the digital image into a deep network model, and acquiring the maximum number and the minimum number in the digital image;
fitting, by the least squares method, a second straight line through the maximum number and the pointer end point and a third straight line through the minimum number and the pointer end point, and calculating a first angle formed by the second straight line and the third straight line;
calculating a second angle formed by the first straight line and the third straight line;
and calculating the reading of the pointer instrument according to the first angle and the second angle.
5. The method of claim 4, wherein processing the first reading image to be identified to obtain the digital image and the first straight line of the pointer position comprises:
performing graying and binarization on the first reading image to be identified, locating the pointer position by the least squares method, and segmenting to generate the digital image and the first straight line of the pointer position.
6. The method of claim 3, wherein inputting the second reading image to be identified into the digital-type image detection model to obtain the reading of the digital instrument comprises:
inputting the second reading image to be identified into a classification network ResNet model for recognition, and acquiring at least one numerical value and its position coordinates; wherein the numerical values include the digits 0 to 9 and the decimal point;
and combining the numerical values from left to right according to the position coordinates to obtain the reading of the digital instrument.
7. An instrument reading identification device based on machine vision, the device comprising:
the image acquisition unit is used for acquiring image data in a specific range; wherein the image data comprises a reading image to be identified of at least one meter;
the detection unit is used for inputting the image data into a trained real-time target detection framework Yolov5 network model and acquiring the reading image to be identified of the meter and its type;
and the processing unit is used for inputting the reading image to be identified into an image detection model determined according to the type and acquiring the reading of the meter.
8. The apparatus of claim 7, wherein the detection unit further comprises:
a detection frame labeling unit, used for inputting the image data into the trained Yolov5 network model, detecting a first reading image to be identified of the pointer instrument and/or a second reading image to be identified of the digital instrument, and generating a first detection frame for the first reading image to be identified of the pointer instrument and/or a second detection frame for the second reading image to be identified of the digital instrument;
and a cropping unit, used for cropping the image data through the first detection frame and/or the second detection frame to generate the first reading image to be identified of the pointer instrument and/or the second reading image to be identified of the digital instrument.
CN202111422893.5A 2021-11-26 2021-11-26 Instrument reading identification method and device based on machine vision Pending CN113837159A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111422893.5A CN113837159A (en) 2021-11-26 2021-11-26 Instrument reading identification method and device based on machine vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111422893.5A CN113837159A (en) 2021-11-26 2021-11-26 Instrument reading identification method and device based on machine vision

Publications (1)

Publication Number Publication Date
CN113837159A true CN113837159A (en) 2021-12-24

Family

ID=78971674

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111422893.5A Pending CN113837159A (en) 2021-11-26 2021-11-26 Instrument reading identification method and device based on machine vision

Country Status (1)

Country Link
CN (1) CN113837159A (en)

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176228A (en) * 2011-01-28 2011-09-07 河海大学常州校区 Machine vision method for identifying dial plate information of multi-pointer instrument
WO2013136295A1 (en) * 2012-03-15 2013-09-19 Northstar Telemetrics, S. L. Method for automatically reading a utility meter, retrofittable meter reader and automatic meter reading system using the same
CN110659636A (en) * 2019-09-20 2020-01-07 随锐科技集团股份有限公司 Pointer instrument reading identification method based on deep learning
CN111291691A (en) * 2020-02-17 2020-06-16 合肥工业大学 Deep learning-based substation secondary equipment instrument panel reading detection method
CN112115895A (en) * 2020-09-24 2020-12-22 深圳市赛为智能股份有限公司 Pointer type instrument reading identification method and device, computer equipment and storage medium
CN112529003A (en) * 2020-12-09 2021-03-19 安徽工业大学 Instrument panel digital identification method based on fast-RCNN
CN112966711A (en) * 2021-02-01 2021-06-15 北京大学 Pointer instrument indicating number identification method and system based on convolutional neural network
CN112699876A (en) * 2021-03-24 2021-04-23 中海油能源发展股份有限公司采油服务分公司 Automatic reading method for various meters of gas collecting station
CN112906694A (en) * 2021-03-25 2021-06-04 中国长江三峡集团有限公司 Reading correction system and method for inclined pointer instrument image of transformer substation
CN113361539A (en) * 2021-05-21 2021-09-07 煤炭科学技术研究院有限公司 Instrument reading method and device of underground inspection robot and electronic equipment

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114549814A (en) * 2022-01-25 2022-05-27 福建和盛高科技产业有限公司 Instrument reading identification method based on yolox detection
CN114419522A (en) * 2022-03-29 2022-04-29 以萨技术股份有限公司 Target object structured analysis method, device and equipment
CN114973260B (en) * 2022-05-16 2023-06-09 广州铁诚工程质量检测有限公司 Intelligent checking method and equipment for hydraulic jack

Similar Documents

Publication Publication Date Title
CN110659636B (en) Pointer instrument reading identification method based on deep learning
CN113837159A (en) Instrument reading identification method and device based on machine vision
CN109948469B (en) Automatic inspection robot instrument detection and identification method based on deep learning
Chi et al. Machine vision based automatic detection method of indicating values of a pointer gauge
CN109886928B (en) Target cell marking method, device, storage medium and terminal equipment
JP2007114828A (en) Image processing apparatus and image processing method
CN111950396B (en) Meter reading neural network identification method
CN111353502B (en) Digital table identification method and device and electronic equipment
CN113065538A (en) Pressure sensor detection method, device and equipment based on image recognition
JP6803940B2 (en) Remote meter reading computer, its method and program
CN115588196A (en) Pointer type instrument reading method and device based on machine vision
JP2020181467A (en) Meter reading system, meter reading method, and program
CN114037993B (en) Substation pointer instrument reading method and device, storage medium and electronic equipment
JP2019158748A (en) Analog device meter-read system and method
CN114255458A (en) Method and system for identifying reading of pointer instrument in inspection scene
CN112215977B (en) Process industrial equipment inspection data processing method, inspection system, device and medium
JP2020181466A (en) Meter reading system, meter reading method, and program
CN117809315A (en) Universal instrument identification method and device and computer equipment
CN114264407B (en) Method for detecting measurement accuracy of pointer type pressure gauge, computer medium and computer
CN113566863B (en) Pointer table reading method and device
CN116828342A (en) Instrument registration recognition system based on vision
CN117351229A (en) Meter reading method and device and storage medium
CN118172779A (en) Method and device for identifying pointer and electronic equipment
CN115272390A (en) Image segmentation method and device
JP2020181464A (en) Meter reading system, meter reading method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211224
