CN112115898B - Multi-pointer instrument detection method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN112115898B
CN112115898B (application CN202011018641.1A)
Authority
CN
China
Prior art keywords
pointer
area
target detection
information
initial image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011018641.1A
Other languages
Chinese (zh)
Other versions
CN112115898A (en)
Inventor
胡懋成
王秋阳
郑博超
彭超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Sunwin Intelligent Co Ltd
Original Assignee
Shenzhen Sunwin Intelligent Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Sunwin Intelligent Co Ltd filed Critical Shenzhen Sunwin Intelligent Co Ltd
Priority to CN202011018641.1A priority Critical patent/CN112115898B/en
Publication of CN112115898A publication Critical patent/CN112115898A/en
Application granted granted Critical
Publication of CN112115898B publication Critical patent/CN112115898B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16YINFORMATION AND COMMUNICATION TECHNOLOGY SPECIALLY ADAPTED FOR THE INTERNET OF THINGS [IoT]
    • G16Y40/00IoT characterised by the purpose of the information processing
    • G16Y40/10Detection; Monitoring
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/02Recognising information on displays, dials, clocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computing Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-pointer instrument detection method and device, computer equipment, and a storage medium. The detection method comprises: acquiring an image of an instrument to obtain an initial image; inputting the initial image into a target detection model for recognition to obtain a target detection result; verifying the target detection result to obtain verified information; cropping the initial image to obtain a high-definition pointer-instrument picture; inputting the high-definition pointer-instrument picture into an instance segmentation model to obtain segmentation results for the different state areas, segmentation results for the pointer areas, and a reflection classification result for the pointer dashboard; performing dominant-color recognition on the state-area segmentation results and dividing the state areas into safe areas and alarm areas according to configuration information; obtaining pointer information from the pointer-area segmentation results; judging whether each pointer lies in an alarm area; and, if so, generating alarm information. The invention accurately predicts whether the area in which a pointer lies is safe, achieves high accuracy, and supports multi-pointer recognition.

Description

Multi-pointer instrument detection method and device, computer equipment and storage medium
Technical Field
The invention relates to an early warning method of an safety monitoring metering instrument, in particular to a multi-pointer instrument detection method, a multi-pointer instrument detection device, computer equipment and a storage medium.
Background
Most safety-supervision measuring instruments installed in existing production enterprises are mechanical pointer instruments, which are difficult to connect directly to an Internet-of-Things early-warning system when production safety is being networked. The prevailing approach on the market is camera-based direct reading: a camera photographs the instrument panel, image recognition reads out the value indicated by the pointer, and that value determines whether an early-warning event is triggered. However, this approach has a relatively low recognition rate and poor generality.
Chinese patent CN201910294823.2 provides a pointer-instrument early-warning method based on image recognition: the image is converted into a binary image to detect the edge contours of all objects and obtain an edge contour map; all straight lines in the contour map are found by a line-detection algorithm; the instrument pointer is obtained through constraint conditions, and whether the line lies within the early-warning area is judged. However, this method detects thin pointers poorly, supports only single-pointer recognition, degrades greatly on pictures with reflections or shadows on the dial, and is not highly accurate.
Therefore, a new method needs to be designed that accurately predicts whether the area in which a pointer lies is safe, achieves high accuracy, and supports multi-pointer recognition.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provides a multi-pointer instrument detection method, a multi-pointer instrument detection device, computer equipment and a storage medium.
In order to achieve the above purpose, the present invention adopts the following technical scheme: the multi-pointer instrument detection method comprises the following steps:
acquiring an image of the instrument to obtain an initial image;
inputting the initial image into a target detection model for recognition to obtain a target detection result;
verifying the target detection result to obtain verified information;
cropping the initial image according to the target detection result and the verified information to obtain a high-definition pointer-instrument picture;
inputting the high-definition pointer-instrument picture into an instance segmentation model for instance segmentation to obtain segmentation results for the different state areas, segmentation results for the pointer areas, and a reflection classification result for the pointer dashboard;
carrying out dominant color recognition on the segmentation results of the different state areas, and dividing a safety area and an alarm area in the state areas according to configuration information;
acquiring pointer information according to the pointer region segmentation result;
judging whether each pointer is in the alarm area according to the pointer area and the alarm area;
if a pointer is in the alarm area, generating alarm information for that pointer and feeding the alarm information back to the terminal.
The target detection model is obtained by training a CenterNet model on a sample set of images annotated with instrument coordinates and class labels.
The instance segmentation model is obtained by training a DetectoRS model on a sample set of pictures annotated with state-area masks, state-area category labels, pointer-area masks, pointer-area categories, and pointer-dashboard reflection category labels.
The further technical scheme is as follows: the verifying the target detection result to obtain verified information includes:
judging whether the confidence coefficient of the target detection result exceeds a set confidence coefficient threshold value;
If the confidence coefficient of the target detection result does not exceed the set confidence coefficient threshold value, fine-tuning the sampling position, and executing the acquired image of the instrument to obtain an initial image;
If the confidence coefficient of the target detection result exceeds the set confidence coefficient threshold value, screening the target detection result with the confidence coefficient higher than the set confidence coefficient threshold value to obtain a recognition object;
judging whether overlapping contents exist in the identification object;
if the overlapped contents exist in the identification objects, selecting the object with the highest confidence coefficient to obtain checked information;
If the identification object does not have overlapped contents, judging whether the aspect ratio of the identification object is within a set aspect ratio threshold value range and whether the ratio of the detection frame formed by the identification object to the initial image is consistent with the range of the set area ratio threshold value;
if the aspect ratio of the identification object is within the set threshold range of the aspect ratio and the initial image occupation ratio of the detection frame formed by the identification object is in accordance with the set threshold range of the area occupation ratio, the identification object is verified information;
If the aspect ratio of the identification object is not within the set threshold range of the aspect ratio or the ratio of the detection frame occupied by the identification object to the initial image is not in accordance with the set threshold range of the area occupied by the identification object, fine-tuning the sampling position and executing the image of the acquisition instrument to obtain the initial image.
The further technical scheme is as follows: the main color recognition is carried out on the segmentation results of the different state areas, and the safety area and the alarm area in the state areas are divided according to the configuration information, and the method comprises the following steps:
converting the segmentation results of the different state areas into HSV color space to obtain color characterization values of the different areas;
Performing color clustering on each region through DBScan algorithm, and selecting a color value corresponding to the center of the largest cluster of each region cluster to obtain the main color of each region color;
and dividing a security area and an alarm area in the status area according to the dominant color of each area and the configuration information.
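A compact sketch of this dominant-color step, not taken from the patent: HSV conversion via the standard library plus a tiny self-contained DBSCAN. The `eps` and `min_pts` values and all helper names are illustrative assumptions.

```python
import colorsys
import math

def rgb_to_hsv(px):
    """Convert an (R, G, B) pixel in 0..255 to OpenCV-style HSV ranges."""
    r, g, b = (c / 255.0 for c in px)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return (h * 179, s * 255, v * 255)

def dbscan(points, eps=20.0, min_pts=4):
    """Tiny DBSCAN over 3-D color points; returns one label per point (-1 = noise)."""
    n = len(points)
    labels = [None] * n
    def neighbours(i):
        return [j for j in range(n) if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(n):
        if labels[i] is not None:
            continue
        nb = neighbours(i)
        if len(nb) < min_pts:
            labels[i] = -1          # provisionally noise; may become a border point
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nb)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster  # noise reached from a core point -> border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb_j = neighbours(j)
            if len(nb_j) >= min_pts:
                seeds.extend(nb_j)   # j is a core point: keep expanding the cluster
    return labels

def dominant_color(region_rgb_pixels):
    """Dominant color of a region = center of its largest DBSCAN cluster in HSV."""
    hsv = [rgb_to_hsv(p) for p in region_rgb_pixels]
    labels = dbscan(hsv)
    best = max((l for l in labels if l >= 0), key=labels.count, default=None)
    members = hsv if best is None else [p for p, l in zip(hsv, labels) if l == best]
    return tuple(sum(c) / len(members) for c in zip(*members))
```

In practice one would subsample the region's pixels before clustering, since this naive DBSCAN is quadratic in the number of points.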
The further technical scheme is as follows: the method for dividing the security area and the alarm area in the status area according to the dominant color of each area and the configuration information comprises the following steps:
and calculating Euclidean distance between the preset alarm area primary color value and the primary color value of each state area according to the configuration file, and selecting the minimum Euclidean distance to determine the safety area and the alarm area.
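The nearest-color assignment might be sketched as follows; the function and label names are my own, and the configured reference colors are illustrative:

```python
import math

def classify_regions(region_dominant, config_colors):
    """Assign each state region the configured label (e.g. 'alarm', 'safe')
    whose reference color is nearest in Euclidean distance."""
    return {
        region: min(config_colors, key=lambda k: math.dist(color, config_colors[k]))
        for region, color in region_dominant.items()
    }
```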
The further technical scheme is as follows: the obtaining pointer information according to the pointer region segmentation result includes:
calculating the area corresponding to the dividing result of the pointer area to obtain the pointer area;
Calculating the average value of all pixel points of each RGB channel in the region corresponding to the pointer region segmentation result to obtain pointer representative color;
and integrating the pointer area and the pointer representative color to obtain pointer information.
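A minimal sketch of this step on nested-list inputs (the function name and dictionary keys are my own, not from the patent):

```python
def pointer_info(mask, image):
    """mask: 2-D 0/1 list; image: 2-D list of (R, G, B) pixels, indexed [y][x].
    Area = number of mask pixels; representative color = per-channel mean."""
    pixels = [image[y][x]
              for y in range(len(mask))
              for x in range(len(mask[0]))
              if mask[y][x]]
    area = len(pixels)
    rep = tuple(sum(c) / area for c in zip(*pixels)) if area else (0, 0, 0)
    return {"area": area, "color": rep}
```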
The further technical scheme is as follows: the example segmentation model is obtained by taking a plurality of pictures with masks of different state areas, different state area category labels, pointer area masks, pointer area categories and pointer instrument panel reflection category labels as a sample set training DetectoRS model, and comprises the following steps:
And training DetectoRS a model after carrying out augmentation and normalization operations on a plurality of pictures with masks in different state areas, area category labels in different states, pointer area masks, pointer area category labels and pointer instrument panel reflection category labels so as to obtain an example segmentation model.
The invention also provides a multi-pointer instrument detection device, which comprises:
an initial image acquisition unit for acquiring an image of the meter to obtain an initial image;
the target detection unit is used for inputting the initial image into the target detection model for recognition so as to obtain a target detection result;
The verification unit is used for verifying the target detection result to obtain verified information;
the clipping unit is used for clipping the initial image according to the target detection result and the checked information so as to obtain a high-definition pointer instrument picture;
the instance segmentation unit is used for inputting the high-definition pointer instrument picture into an instance segmentation model to carry out instance segmentation so as to obtain different state region segmentation results, pointer region segmentation results and pointer instrument panel reflection classification results;
the state area determining unit is used for carrying out dominant color recognition on the segmentation results of the different state areas and dividing a safety area and an alarm area in the state areas according to the configuration information;
A pointer information acquisition unit for acquiring pointer information according to the pointer region division result;
the judging unit is used for judging whether the pointer is in the alarm area according to the pointer area and the alarm area;
And the alarm information generating unit is used for generating alarm information of each pointer if the pointer is in the alarm area and feeding back the alarm information to the terminal.
The further technical scheme is as follows: the verification unit includes:
The confidence judging subunit is used for judging whether the confidence of the target detection result exceeds a set confidence threshold value; if the confidence coefficient of the target detection result does not exceed the set confidence coefficient threshold value, fine-tuning the sampling position and executing the acquired image of the instrument to obtain an initial image;
the screening subunit is used for screening the target detection result with the confidence coefficient higher than the set confidence coefficient threshold value to obtain the identification object if the confidence coefficient of the target detection result exceeds the set confidence coefficient threshold value;
an overlap judging subunit, configured to judge whether overlapping content exists in the identification object; if the overlapped contents exist in the identification objects, selecting the object with the highest confidence coefficient to obtain checked information;
A proportion judging subunit, configured to judge whether an aspect ratio of the identified object is within a set aspect ratio threshold range and whether a detection frame formed by the identified object occupies an initial image proportion within the set area ratio threshold range if there is no overlapping content in the identified object; if the aspect ratio of the identification object is within the set threshold range of the aspect ratio and the initial image occupation ratio of the detection frame formed by the identification object is in accordance with the set threshold range of the area occupation ratio, the identification object is verified information; if the aspect ratio of the identification object is not within the set threshold range of the aspect ratio or the ratio of the detection frame occupied by the identification object to the initial image is not in accordance with the set threshold range of the area occupied by the identification object, fine-tuning the sampling position and executing the image of the acquisition instrument to obtain the initial image.
The invention also provides a computer device which comprises a memory and a processor, wherein the memory stores a computer program, and the processor realizes the method when executing the computer program.
The present invention also provides a storage medium storing a computer program which, when executed by a processor, performs the above-described method.
Compared with the prior art, the invention has the following beneficial effects: the instrument is photographed, detected with a target detection model, and the target detection result is verified; the verified information is passed to an instance segmentation model for instance segmentation, which identifies the different areas and the different pointers; whether a pointer is in an alarm area is judged from the overlap between the pointers and the areas, and alarm information is generated for each such pointer in time. The invention thus accurately predicts whether the area in which a pointer lies is safe and achieves high accuracy, and because instance segmentation recognizes multiple pointers at once, multi-pointer recognition is supported.
The invention is further described below with reference to the drawings and specific embodiments.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a multi-pointer instrument detection method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for detecting a multi-pointer instrument according to an embodiment of the present invention;
FIG. 3 is a schematic sub-flowchart of a method for detecting a multi-pointer instrument according to an embodiment of the present invention;
FIG. 4 is a schematic sub-flowchart of a multi-pointer instrument detection method according to an embodiment of the present invention;
FIG. 5 is a schematic sub-flowchart of a method for detecting a multi-pointer instrument according to an embodiment of the present invention;
FIG. 6 is a schematic block diagram of a multi-pointer instrument detection device provided by an embodiment of the present invention;
fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to fig. 1 and fig. 2, fig. 1 is a schematic diagram of an application scenario of a multi-pointer instrument detection method according to an embodiment of the present invention. Fig. 2 is a schematic flow chart of a multi-pointer instrument detection method according to an embodiment of the present invention. The multi-pointer instrument detection method is applied to the server. The server performs data interaction with the terminal, the server performs data interaction with the mobile robot, the mobile robot acquires an image of the instrument, the server performs target detection and instance segmentation on the image, whether the pointer falls into an alarm area or not is judged, alarm information is generated, and the alarm information is sent to the terminal for display.
Fig. 2 is a flow chart of a method for detecting a multi-pointer instrument according to an embodiment of the present invention. As shown in fig. 2, the method includes the following steps S110 to S150.
S110, acquiring an image of the instrument to obtain an initial image.
In the present embodiment, the initial image refers to an image of the pointer meter.
The mobile robot moves to a designated position via positioning and captures a color picture of the cabinet containing the pointer instrument through its pan-tilt camera mount.
S120, inputting the initial image into a target detection model for recognition so as to obtain a target detection result.
In this embodiment, the target detection result consists of the coordinate information of the instrument, the corresponding confidence, and the type of the pointer instrument; the coordinate information of the instrument forms a prediction frame.
Specifically, the target detection model is obtained by training the CenterNet model on a sample set of images annotated with instrument coordinates and class labels.
The CenterNet model predicts the position of the target frame by predicting the center point of the target detection frame together with its width and height. For the feature extraction network, a DLA (Deep Layer Aggregation) model is selected, yielding a 128×128×256 feature map. From this feature map, the center point of the target frame is predicted (i.e., heat-map prediction), the width and height of the target frame are predicted, and the local offset introduced by downsampling is predicted. The coordinates of the final prediction frame are obtained by combining these three predictions.
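As a rough illustration of how these three predictions combine into a box, a minimal pure-Python sketch (not code from the patent): a 3×3 peak-picking rule stands in for CenterNet's max-pooling NMS, and an output stride of 4 is assumed; all names are my own.

```python
def decode_boxes(heatmap, wh, offset, stride=4, conf_thresh=0.5):
    """Turn peaks in the heat map, plus the predicted width/height and local
    offset at each peak, into (x1, y1, x2, y2, score) boxes in input coords."""
    boxes = []
    H, W = len(heatmap), len(heatmap[0])
    for y in range(H):
        for x in range(W):
            s = heatmap[y][x]
            if s < conf_thresh:
                continue
            # keep only local maxima in a 3x3 neighbourhood (peak picking)
            nb = [heatmap[j][i]
                  for j in range(max(0, y - 1), min(H, y + 2))
                  for i in range(max(0, x - 1), min(W, x + 2))]
            if s < max(nb):
                continue
            cx = (x + offset[y][x][0]) * stride   # sub-pixel centre, image coords
            cy = (y + offset[y][x][1]) * stride
            w, h = wh[y][x]
            boxes.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2, s))
    return boxes
```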
The loss of the CenterNet model has three parts: the target-center-point loss Loss_center, the center-point offset loss Loss_offset, and the target-frame size loss Loss_wh, combined as Loss = Loss_center + α·Loss_offset + β·Loss_wh, with α = 0.1 and β = 0.8. Loss_center uses the focal loss to address the class-imbalance problem. The focal loss is mainly used to solve the severe imbalance between positive and negative samples in one-stage target detection: it down-weights the many easy negative samples during training, and can also be understood as a form of hard-example mining.
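The center-point focal loss can be sketched as follows. This is a hedged pure-Python rendering of the CornerNet/CenterNet-style formulation; the exponents 2 and 4 are the values commonly used in those papers, not stated in the patent, and are named `pos_exp`/`neg_exp` here to avoid clashing with the α, β loss weights above.

```python
import math

def center_focal_loss(pred, gt, pos_exp=2, neg_exp=4):
    """Focal loss on a flattened centre heat map.
    pred, gt: flat lists of equal length in [0, 1]; gt == 1 marks a centre
    point, other gt values are Gaussian-smoothed negatives."""
    num_pos = sum(1 for g in gt if g == 1)
    loss = 0.0
    for p, g in zip(pred, gt):
        p = min(max(p, 1e-7), 1 - 1e-7)          # numerical safety for log()
        if g == 1:
            loss -= (1 - p) ** pos_exp * math.log(p)
        else:
            # easy negatives (p small) and near-positives (g close to 1)
            # are both down-weighted
            loss -= (1 - g) ** neg_exp * p ** pos_exp * math.log(1 - p)
    return loss / max(num_pos, 1)
```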
Specifically, the images annotated with instrument positions are split into training, validation, and test sets at a ratio of 8:1:1. The learning rate is initialized to 0.01 and decayed at epochs 100, 150, and 200 with a decay coefficient γ of 0.1. Gradient descent uses the Adam method, whose momentum and adaptive learning rate accelerate convergence. Training uses fine-tuning, with a batch size of 4. The number of training rounds is controlled by the epoch setting: if the validation loss has converged, training stops; otherwise training continues from the existing model weights.
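The step-decay schedule described here (initial rate 0.01, multiplied by γ = 0.1 at epochs 100, 150, and 200) can be expressed compactly; a sketch, with names of my own choosing:

```python
def learning_rate(epoch, base_lr=0.01, milestones=(100, 150, 200), gamma=0.1):
    """Step decay: multiply the base rate by gamma once per milestone passed."""
    return base_lr * gamma ** sum(1 for m in milestones if epoch >= m)
```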
Prediction results on the different instrument-panel targets are evaluated with the mAP (mean Average Precision) metric.
S130, verifying the target detection result to obtain verified information.
In this embodiment, the verified information refers to coordinate information of the position of the meter that has passed verification.
In one embodiment, referring to fig. 3, the step S130 may include steps S131 to S137.
S131, judging whether the confidence coefficient of the target detection result exceeds a set confidence coefficient threshold value;
S132, if the confidence coefficient of the target detection result does not exceed the set confidence coefficient threshold value, fine-tuning the sampling position, and executing the step S110;
And S133, screening the target detection results with the confidence coefficient higher than the set confidence coefficient threshold value to obtain the identification objects if the confidence coefficient of the target detection results exceeds the set confidence coefficient threshold value.
In this embodiment, the recognition object is a target detection result with a confidence higher than 0.8, and mainly includes coordinate information of the meter.
Screening out a prediction frame with the confidence coefficient higher than 0.8 as an identification object, and transmitting a signal to the mobile robot for resampling if the sampled image does not have the predicted pointer instrument result.
S134, judging whether overlapping contents exist in the identification object;
s135, if the overlapped contents exist in the identification objects, selecting the object with the highest confidence degree to obtain checked information.
In this embodiment, the overlapped contents refer to prediction frame overlapping. If the initial image contains overlapping predicted target frames, the data needs to be resampled and identified by the mobile robot.
S136, if the recognition objects do not overlap, judging whether the aspect ratio of each recognition object lies within the set aspect-ratio threshold range and whether the proportion of the initial image occupied by its detection frame lies within the set area-ratio threshold range;
S137, if both conditions hold, the recognition object is the verified information;
if either condition fails, step S132 is executed.
If a shadow affecting the pointer instrument is identified in the instance segmentation stage, the sampling position is fine-tuned and the step of acquiring an instrument image to obtain an initial image is re-executed.
Specifically, the shape and size of the pointer-instrument prediction frame are judged: its aspect ratio ε must satisfy 0.3 ≤ ε ≤ 3, and the proportion δ of the initial image that the prediction frame occupies must be less than 0.2. If either condition is not met, the image must be re-shot, so that a better high-definition pointer-instrument image can be obtained.
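Putting the concrete thresholds from this paragraph together with the earlier confidence check (0.8), a single verification predicate might look like the following sketch. The function and parameter names are assumptions of mine, and the overlap check between prediction frames is omitted.

```python
def verify_detection(box, score, image_w, image_h,
                     conf_thresh=0.8, ar_range=(0.3, 3.0), max_area_ratio=0.2):
    """Accept a detection only if its confidence, aspect ratio, and share of
    the initial image all satisfy the thresholds; otherwise resample."""
    if score <= conf_thresh:
        return False
    x1, y1, x2, y2 = box
    w, h = x2 - x1, y2 - y1
    if w <= 0 or h <= 0:
        return False
    aspect = w / h
    if not (ar_range[0] <= aspect <= ar_range[1]):
        return False
    return (w * h) / (image_w * image_h) < max_area_ratio
```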
And S140, cutting the initial image according to the target detection result and the checked information to obtain a high-definition pointer instrument picture.
In this embodiment, the high-definition pointer meter picture refers to a picture including only a meter region.
Specifically, clipping is carried out on the initial image according to the obtained coordinate information of the instrument, and a high-definition pointer instrument picture is obtained.
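The cropping step reduces to slicing the initial image by the verified prediction frame; a minimal sketch on a nested-list image (names are my own):

```python
def crop(image, box):
    """image: 2-D list of pixels indexed [y][x]; box: (x1, y1, x2, y2) in
    pixel coordinates. Returns the sub-image containing only the meter region."""
    x1, y1, x2, y2 = (int(round(v)) for v in box)
    return [row[x1:x2] for row in image[y1:y2]]
```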
And S150, inputting the high-definition pointer instrument picture into an instance segmentation model to carry out instance segmentation so as to obtain different state region segmentation results, pointer region segmentation results and pointer instrument panel reflection classification results.
In this embodiment, the different state region segmentation result refers to the masks of the different regions, specifically including the masks of the safety region and the alarm region; the pointer region segmentation result refers to the masks corresponding to the different pointers, and the pointer dashboard reflection classification result refers to judging whether the pointer dashboard reflects light. If the pointer dashboard is not affected by reflection or shadow factors, dominant color recognition may be entered.
In this embodiment, the instance segmentation model is obtained by training a DetectoRS model using, as the sample set, a plurality of pictures annotated with masks of the different state areas, category labels of the different state areas, pointer area masks, pointer area categories, and pointer dashboard reflection category labels.
Specifically, after augmentation and normalization operations are performed on the plurality of pictures annotated with the different state area masks, different state area category labels, pointer area masks, pointer area category labels, and pointer dashboard reflection category labels, a DetectoRS model is trained to obtain the instance segmentation model.
The DetectoRS model introduces two new techniques in the feature extraction stage: RFP (Recursive Feature Pyramid) and SAC (Switchable Atrous Convolution). RFP better extracts the semantic and spatial features of the picture by adding extra feedback connections from the feature pyramid network into the bottom-up backbone layers. On this basis, SAC enables the model to select a suitable receptive field, which facilitates detecting targets of different sizes: the features are convolved with different atrous rates, and the convolution results are combined using a switch function, so that SAC can effectively convert a standard convolution into a conditional convolution without changing any pre-trained model.
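To illustrate the switch mechanism, here is a minimal one-dimensional NumPy sketch of a switchable atrous convolution (the real SAC in DetectoRS operates on 2-D feature maps and learns the switch values; the function names, the fixed kernel size of 3, and the two rates shown are illustrative assumptions):

```python
import numpy as np

def atrous_conv1d(x, w, rate):
    """1-D atrous (dilated) convolution, kernel size 3, zero padding ("same" output)."""
    xp = np.pad(x, rate)  # pad by the rate so the output length matches the input
    return np.array([sum(w[k] * xp[i + k * rate] for k in range(3))
                     for i in range(len(x))])

def switchable_atrous_conv(x, w, switch):
    """SAC sketch: apply the SAME kernel at two atrous rates and blend the results
    with a per-position switch S in [0, 1] (learned in DetectoRS, given here)."""
    y_small = atrous_conv1d(x, w, rate=1)   # small receptive field
    y_large = atrous_conv1d(x, w, rate=3)   # large receptive field
    return switch * y_small + (1.0 - switch) * y_large
```

Because both branches reuse the same kernel `w`, pre-trained convolution weights can be reused unchanged, which mirrors the point made above.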
A reflection classification layer is added to the DetectoRS network to prevent illumination from affecting pointer detection. Specifically, the ASPP outputs of unrolled iteration 1 are up-sampled and concatenated to obtain fused features, which are then passed through two 3×3 convolution layers and a global pooling layer; the feature map is flattened into a feature matrix of size 1×256, and finally the result passes through a 256×2 fully connected layer and a sigmoid function for binary classification, yielding the final prediction of whether a reflection condition exists in the meter image.
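As a rough illustration of the final classification step only (the 256×2 fully connected layer plus sigmoid; the upstream convolution and pooling stages are omitted, and the weight shapes simply follow the 1×256 / 256×2 sizes stated above, so this is a sketch rather than the patented implementation):

```python
import numpy as np

def reflection_head(feature_1x256, W, b):
    """Binary 'reflective / non-reflective' head: 256x2 fully connected + sigmoid."""
    logits = feature_1x256 @ W + b            # (1, 256) @ (256, 2) -> (1, 2)
    probs = 1.0 / (1.0 + np.exp(-logits))     # element-wise sigmoid
    return int(np.argmax(probs, axis=1)[0])   # 0: no reflection, 1: reflection
```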
The loss value in the DetectoRS model training process mainly comprises three terms: the regression box loss (bbox loss), the instance segmentation mask loss (mask loss), and the classification loss (cls loss). In this embodiment, since a reflection classification layer is added, a reflection classification loss (reflection loss) is added as well.
Specifically, in the process of training the DetectoRS model, the pictures annotated with the region masks and pointer masks are taken as the sample set and split into a training set, a validation set, and a test set at a ratio of 8:1:1. To balance the data during training, the same number of targets is selected for each target category, and the corresponding augmentation and normalization operations are performed on the pictures. The initial learning rate is set to 0.0001, and the first-order decay rate is 0.5. An Adam optimizer is used for gradient descent during training, with a batch size of 4. During training, an early-stopping strategy is adopted: the validation-set loss is continuously printed and observed, and training stops once it reaches a converged state. Meanwhile, to classify whether the pointer instrument reflects light, each training batch is guaranteed to contain reflective and non-reflective pointer instruments at a ratio of 1:1.
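The 8:1:1 split described above might be sketched as follows (illustrative only; the helper name and the shuffling seed are assumptions):

```python
import random

def split_dataset(samples, seed=42):
    """Split annotated samples 8:1:1 into train/validation/test sets."""
    samples = samples[:]                      # copy so the caller's list is untouched
    random.Random(seed).shuffle(samples)      # deterministic shuffle before splitting
    n = len(samples)
    n_train, n_val = int(n * 0.8), int(n * 0.1)
    return (samples[:n_train],
            samples[n_train:n_train + n_val],
            samples[n_train + n_val:])
```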
Based on the deep learning framework, the safety of the area where the pointer is located can be accurately predicted. Combined with a traditional machine learning algorithm, the safety area and the early-warning area of the dial can be distinguished according to the dial colors, and whether the pointer instrument in the picture is affected by reflection is judged at the same time; if a reflection phenomenon exists, the sampling position is adjusted and step S110 is executed.
S160, carrying out dominant color recognition on the segmentation results of the different state areas, and dividing the security area and the alarm area in the state areas according to the configuration information.
In this embodiment, the alarm region refers to an unsafe region, and the safe region refers to a region with high safety.
In one embodiment, referring to fig. 4, the step S160 may include steps S161 to S163.
S161, converting the state region segmentation result into an HSV color space to obtain color characterization values of different regions.
In this embodiment, the color characterization values of the different regions refer to values formed by converting masks of the different regions in the meter into HSV color feature expressions.
S162, performing color clustering on each region through the DBSCAN algorithm, and selecting the color value corresponding to the center of the largest cluster of each region to obtain the dominant color of each region.
In this embodiment, each region dominant color value refers to a color characterization value corresponding to the center of the largest cluster after each region color cluster.
S163, dividing a security area and an alarm area in the status area according to the dominant color of each area and the configuration information.
In this embodiment, the Euclidean distance between the preset dominant color value of the alarm area and the dominant color value of each state area is calculated according to the configuration file, and the minimum Euclidean distance is selected to determine the safety area and the alarm area.
That is, the region segmentation results are converted into the HSV color space, color clustering is performed on each region through the DBSCAN algorithm, and the center of the largest cluster of each region is selected as the dominant color of that region. The safety area and the alarm area are defined through an HSV color configuration file: to distinguish whether each area is a safety area or an alarm area, the Euclidean distance between the preset dominant color value of the alarm area in the configuration file and the dominant color value of the identified area is calculated, and the state of each area is determined by the minimum Euclidean distance.
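The nearest-preset matching step can be sketched as below (a simplified sketch: the DBSCAN clustering step is omitted, and the function name and preset values shown in the usage are illustrative assumptions):

```python
import numpy as np

def classify_region(dominant_hsv, presets):
    """Match a region's dominant HSV value to the nearest configured preset
    (e.g. 'alarm' red, 'safe' green) by Euclidean distance."""
    names = list(presets)
    dists = [np.linalg.norm(np.asarray(dominant_hsv, float) -
                            np.asarray(presets[n], float))
             for n in names]
    return names[int(np.argmin(dists))]       # name of the closest preset color
```

For example, with presets `{"alarm": (0, 255, 255), "safe": (60, 255, 255)}`, a reddish dominant color such as `(5, 250, 250)` would be assigned to `"alarm"`.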
S170, acquiring pointer information according to the pointer region segmentation result.
In the present embodiment, pointer information refers to pointer color and pointer size.
In one embodiment, referring to fig. 5, the step S170 may include steps S171 to S173.
S171, calculating the area corresponding to the pointer region segmentation result to obtain the pointer area.
In this embodiment, the pointer area refers to the size of the pointer.
S172, calculating the average value of all pixel points of all RGB channels in the region corresponding to the pointer region segmentation result to obtain the pointer representative color.
In the present embodiment, the pointer representing color refers to an average pixel value of each pixel point of the pointer region.
And S173, integrating the pointer area and the pointer representative color to obtain pointer information.
Pointer information is collected in two forms: pointer color and pointer size. The area size of each pointer is obtained by calculating the area of its segmentation result, and the average pixel value over the pixel points of the pointer region is calculated as the pointer's representative color.
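These two quantities can be sketched from a boolean pointer mask as follows (illustrative; the function name and the HxWx3 array layout are assumptions):

```python
import numpy as np

def pointer_info(image_rgb, mask):
    """Pointer area = number of mask pixels; representative color = per-channel
    mean of the masked RGB pixels."""
    area = int(mask.sum())                              # pointer size in pixels
    color = image_rgb[mask.astype(bool)].mean(axis=0)   # mean over R, G, B channels
    return area, tuple(color)
```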
S180, judging whether the pointer is in the alarm area according to the pointer area and the alarm area.
S190, if the pointers are in the alarm area, generating alarm information of each pointer, and feeding back the alarm information to the terminal;
if the pointer is not within the alarm area, the step S110 is performed.
Specifically, the regions are determined according to the state region segmentation results, and the overlap area between each pointer and the masks of the different regions is judged, that is, into which region's mask each pointer falls. If a pointer is in the safety area, no alarm is given; if a pointer is in the alarm area, an alarm is given; and if a pointer spans both the safety area and the alarm area, an alarm is likewise given. Finally, different alarm feedback is sent according to the information of the different pointers.
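The overlap rule above could be sketched as follows (a sketch assuming boolean masks of equal shape; the "alarm wins on any overlap" rule follows the paragraph above, including the case of a pointer spanning both regions):

```python
import numpy as np

def pointer_state(pointer_mask, safe_mask, alarm_mask):
    """Decide a pointer's state from the overlap of its mask with the region masks.
    Any overlap with the alarm region triggers an alarm, even if the pointer
    also overlaps the safety region."""
    in_alarm = np.logical_and(pointer_mask, alarm_mask).sum()
    in_safe = np.logical_and(pointer_mask, safe_mask).sum()
    if in_alarm > 0:
        return "alarm"
    return "safe" if in_safe > 0 else "unknown"
```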
The color and position information of different pointers can be obtained based on example segmentation, the pointer instrument is automatically early-warned by distinguishing intersection areas of different pointers and dial areas, and different early-warning information can be received according to the pointer information.
According to the multi-pointer instrument detection method described above, images of the instrument are shot and detected with the target detection model; after the target detection result is obtained, it is verified, and the verified information is passed to the instance segmentation model for instance segmentation, identifying the different areas and the different pointers. Whether each pointer is in an alarm area is judged according to the degree of coincidence between the pointers and the areas, and alarm information is generated for each such pointer in time. The method thereby accurately predicts the safety of the area where each pointer is located with high accuracy, and, by adopting instance segmentation, can identify multiple pointers at once, supporting multi-pointer recognition.
Fig. 6 is a schematic block diagram of a multi-pointer instrument detection apparatus 300 according to an embodiment of the present invention. As shown in fig. 6, the present invention also provides a multi-pointer meter detection apparatus 300 corresponding to the above multi-pointer meter detection method. The multi-pointer meter detection apparatus 300 includes a unit for performing the multi-pointer meter detection method described above, and may be configured in a server. Specifically, referring to fig. 6, the multi-pointer meter detection apparatus 300 includes an initial image acquisition unit 301, a target detection unit 302, a verification unit 303, a clipping unit 304, an instance division unit 305, a region determination unit 306, a pointer information acquisition unit 307, a judgment unit 308, and an alarm information generation unit 309.
An initial image acquisition unit 301, configured to acquire an image of the meter to obtain an initial image; the target detection unit 302 is configured to input an initial image into the target detection model for recognition, so as to obtain a target detection result; a verification unit 303, configured to verify the target detection result to obtain verified information; the clipping unit 304 is configured to clip the initial image according to the target detection result and the verified information, so as to obtain a high-definition pointer instrument picture; the instance segmentation unit 305 is configured to input the high-definition pointer instrument picture into an instance segmentation model to perform instance segmentation, so as to obtain a segmentation result of a region in different states, a segmentation result of a pointer region, and a reflection classification result of a pointer instrument panel; the area determining unit 306 is configured to perform dominant color recognition on the segmentation results of the different status areas, and divide the security area and the alarm area in the status areas according to the configuration information; a pointer information acquisition unit 307 for acquiring pointer information according to the pointer region division result; a judging unit 308, configured to judge whether the pointer is in the alarm area according to the pointer area and the alarm area; and the alarm information generating unit 309 is configured to generate alarm information of each pointer if the pointer is in the alarm area, and feed back the alarm information to the terminal.
The target detection model is obtained by training a CenterNet model using a plurality of images with instrument coordinates and category labels as the sample set; the instance segmentation model is obtained by training a DetectoRS model using, as the sample set, a plurality of pictures with masks of the different state areas, different state area category labels, pointer area masks, pointer area categories, and pointer dashboard reflection category labels.
In one embodiment, the verification unit 303 includes a confidence determination subunit, a filtering subunit, an overlap determination subunit, and a proportion determination subunit.
The confidence judging subunit is used for judging whether the confidence of the target detection result exceeds a set confidence threshold; if the confidence of the target detection result does not exceed the set confidence threshold, the sampling position is fine-tuned and the step of acquiring an image of the instrument to obtain an initial image is executed. The screening subunit is used for screening, if the confidence of the target detection result exceeds the set confidence threshold, the target detection results with confidence higher than the set confidence threshold to obtain recognition objects. The overlap judging subunit is used for judging whether overlapping content exists among the recognition objects; if overlapping content exists among the recognition objects, the object with the highest confidence is selected to obtain the verified information. The proportion judging subunit is used for judging, if no overlapping content exists among the recognition objects, whether the aspect ratio of the recognition object is within the set aspect-ratio threshold range and whether the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range; if the aspect ratio of the recognition object is within the set aspect-ratio threshold range and the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range, the recognition object is the verified information; if the aspect ratio of the recognition object is not within the set aspect-ratio threshold range, or the proportion of the initial image occupied by the detection frame is not within the set area-ratio threshold range, the sampling position is fine-tuned and the step of acquiring an image of the instrument to obtain the initial image is executed.
In an embodiment, the region determination unit 306 includes a token value conversion subunit, a clustering subunit, and a determination subunit.
The characterization value conversion subunit is used for converting the segmentation results of the different state areas into the HSV color space to obtain color characterization values of the different areas; the clustering subunit performs color clustering on each region through the DBSCAN algorithm, and selects the color value corresponding to the center of the largest cluster of each region to obtain the dominant color of each region; the determining subunit is configured to divide the safety area and the alarm area in the status area according to the dominant color of each area and the configuration information: specifically, to calculate the Euclidean distance between the preset dominant color value of the alarm area and the dominant color value of each status area according to the configuration file, and select the minimum Euclidean distance to determine the safety area and the alarm area.
In one embodiment, the pointer information obtaining unit 307 includes a pointer area calculating subunit, a representative color calculating subunit, and an integrating subunit.
The pointer area calculating subunit is used for calculating the area corresponding to the pointer area dividing result so as to obtain the pointer area; a representative color calculating subunit, configured to calculate an average value of all pixel points of each channel of RGB in the area corresponding to the pointer area division result, so as to obtain a pointer representative color; and the integration subunit is used for integrating the pointer area and the pointer representative color to obtain pointer information.
It should be noted that, as will be clearly understood by those skilled in the art, the specific implementation process of the multi-pointer instrument detection device 300 and each unit may refer to the corresponding description in the foregoing method embodiments, and for convenience and brevity of description, the description is omitted here.
The multi-pointer meter detection apparatus 300 described above may be implemented in the form of a computer program that can run on a computer device as shown in fig. 7.
Referring to fig. 7, fig. 7 is a schematic block diagram of a computer device according to an embodiment of the present application. The computer device 500 may be a server, where the server may be a stand-alone server or may be a server cluster formed by a plurality of servers.
With reference to FIG. 7, the computer device 500 includes a processor 502, memory, and a network interface 505 connected by a system bus 501, where the memory may include a non-volatile storage medium 503 and an internal memory 504.
The non-volatile storage medium 503 may store an operating system 5031 and a computer program 5032. The computer program 5032 includes program instructions that, when executed, cause the processor 502 to perform a multi-pointer meter detection method.
The processor 502 is used to provide computing and control capabilities to support the operation of the overall computer device 500.
The internal memory 504 provides an environment for the execution of a computer program 5032 in the non-volatile storage medium 503, which computer program 5032, when executed by the processor 502, causes the processor 502 to perform a multi-pointer meter detection method.
The network interface 505 is used for network communication with other devices. It will be appreciated by those skilled in the art that the architecture shown in fig. 7 is merely a block diagram of some of the architecture relevant to the present inventive arrangements and is not limiting of the computer device 500 to which the present inventive arrangements may be implemented, as a particular computer device 500 may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
Wherein the processor 502 is configured to execute a computer program 5032 stored in a memory to implement the steps of:
Acquiring an image of the instrument to obtain an initial image; inputting the initial image into a target detection model for recognition to obtain a target detection result; checking the target detection result to obtain checked information; cutting the initial image according to the target detection result and the checked information to obtain a high-definition pointer instrument picture; inputting the high-definition pointer instrument picture into an instance segmentation model to carry out instance segmentation so as to obtain different state region segmentation results, pointer region segmentation results and pointer instrument panel reflection classification results; carrying out dominant color recognition on the segmentation results of the different state areas, and dividing a safety area and an alarm area in the state areas according to configuration information; acquiring pointer information according to the pointer region segmentation result; judging whether the pointer is in the alarm area according to the pointer area and the alarm area; if the pointers are in the alarm area, generating alarm information of each pointer, and feeding back the alarm information to the terminal.
The target detection model is obtained by training a CenterNet model using a plurality of images with instrument coordinates and category labels as the sample set; the instance segmentation model is obtained by training a DetectoRS model using, as the sample set, a plurality of pictures with masks of the different state areas, different state area category labels, pointer area masks, pointer area categories, and pointer dashboard reflection category labels.
In one embodiment, when the step of verifying the target detection result to obtain the verified information is implemented by the processor 502, the following steps are specifically implemented:
Judging whether the confidence of the target detection result exceeds a set confidence threshold; if the confidence of the target detection result does not exceed the set confidence threshold, fine-tuning the sampling position and executing the step of acquiring an image of the instrument to obtain an initial image; if the confidence of the target detection result exceeds the set confidence threshold, screening the target detection results with confidence higher than the set confidence threshold to obtain recognition objects; judging whether overlapping content exists among the recognition objects; if overlapping content exists among the recognition objects, selecting the object with the highest confidence to obtain the verified information; if no overlapping content exists among the recognition objects, judging whether the aspect ratio of the recognition object is within the set aspect-ratio threshold range and whether the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range; if the aspect ratio of the recognition object is within the set aspect-ratio threshold range and the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range, the recognition object is the verified information; if the aspect ratio of the recognition object is not within the set aspect-ratio threshold range, or the proportion of the initial image occupied by the detection frame is not within the set area-ratio threshold range, fine-tuning the sampling position and executing the step of acquiring an image of the instrument to obtain the initial image.
In an embodiment, when the processor 502 performs the step of performing dominant color recognition on the segmentation result of the different status areas and dividing the security area and the alarm area in the status area according to the configuration information, the following steps are specifically implemented:
Converting the segmentation results of the different state areas into the HSV color space to obtain color characterization values of the different areas; performing color clustering on each region through the DBSCAN algorithm, and selecting the color value corresponding to the center of the largest cluster of each region to obtain the dominant color of each region; and dividing the safety area and the alarm area in the status area according to the dominant color of each area and the configuration information.
In one embodiment, when the step of dividing the security area and the alarm area in the status area according to the dominant color of each area and the configuration information is implemented by the processor 502, the following steps are specifically implemented:
Calculating the Euclidean distance between the preset dominant color value of the alarm area and the dominant color value of each state area according to the configuration file, and selecting the minimum Euclidean distance to determine the safety area and the alarm area.
In one embodiment, when the step of obtaining pointer information according to the pointer region segmentation result is implemented by the processor 502, the following steps are specifically implemented:
calculating the area corresponding to the dividing result of the pointer area to obtain the pointer area; calculating the average value of all pixel points of each RGB channel in the region corresponding to the pointer region segmentation result to obtain pointer representative color; and integrating the pointer area and the pointer representative color to obtain pointer information.
In an embodiment, when the processor 502 implements the step in which the instance segmentation model is obtained by training a DetectoRS model using, as the sample set, a plurality of pictures with masks of the different state areas, different state area category labels, pointer area masks, pointer area categories, and pointer dashboard reflection category labels, the following steps are specifically implemented:
Performing augmentation and normalization operations on the plurality of pictures with the different state area masks, different state area category labels, pointer area masks, pointer area category labels, and pointer dashboard reflection category labels, and then training a DetectoRS model to obtain the instance segmentation model.
It should be appreciated that in embodiments of the present application, the processor 502 may be a Central Processing Unit (CPU), and the processor 502 may also be another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
Those skilled in the art will appreciate that all or part of the flow in a method embodying the above described embodiments may be accomplished by computer programs instructing the relevant hardware. The computer program comprises program instructions, and the computer program can be stored in a storage medium, which is a computer readable storage medium. The program instructions are executed by at least one processor in the computer system to implement the flow steps of the embodiments of the method described above.
Accordingly, the present invention also provides a storage medium. The storage medium may be a computer readable storage medium. The storage medium stores a computer program which, when executed by a processor, causes the processor to perform the steps of:
Acquiring an image of the instrument to obtain an initial image; inputting the initial image into a target detection model for recognition to obtain a target detection result; checking the target detection result to obtain checked information; cutting the initial image according to the target detection result and the checked information to obtain a high-definition pointer instrument picture; inputting the high-definition pointer instrument picture into an instance segmentation model to carry out instance segmentation so as to obtain different state region segmentation results, pointer region segmentation results and pointer instrument panel reflection classification results; carrying out dominant color recognition on the segmentation results of the different state areas, and dividing a safety area and an alarm area in the state areas according to configuration information; acquiring pointer information according to the pointer region segmentation result; judging whether the pointer is in the alarm area according to the pointer area and the alarm area; if the pointers are in the alarm area, generating alarm information of each pointer, and feeding back the alarm information to the terminal.
The target detection model is obtained by training a CenterNet model using a plurality of images with instrument coordinates and category labels as the sample set; the instance segmentation model is obtained by training a DetectoRS model using, as the sample set, a plurality of pictures with masks of the different state areas, different state area category labels, pointer area masks, pointer area categories, and pointer dashboard reflection category labels.
In one embodiment, when the processor executes the computer program to implement the step of verifying the target detection result to obtain verified information, the processor specifically implements the following steps:
Judging whether the confidence of the target detection result exceeds a set confidence threshold; if the confidence of the target detection result does not exceed the set confidence threshold, fine-tuning the sampling position and executing the step of acquiring an image of the instrument to obtain an initial image; if the confidence of the target detection result exceeds the set confidence threshold, screening the target detection results with confidence higher than the set confidence threshold to obtain recognition objects; judging whether overlapping content exists among the recognition objects; if overlapping content exists among the recognition objects, selecting the object with the highest confidence to obtain the verified information; if no overlapping content exists among the recognition objects, judging whether the aspect ratio of the recognition object is within the set aspect-ratio threshold range and whether the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range; if the aspect ratio of the recognition object is within the set aspect-ratio threshold range and the proportion of the initial image occupied by the detection frame formed by the recognition object is within the set area-ratio threshold range, the recognition object is the verified information; if the aspect ratio of the recognition object is not within the set aspect-ratio threshold range, or the proportion of the initial image occupied by the detection frame is not within the set area-ratio threshold range, fine-tuning the sampling position and executing the step of acquiring an image of the instrument to obtain the initial image.
In one embodiment, when the processor executes the computer program to implement the step of performing dominant color recognition on the segmentation results of the different status areas and dividing the security area and the alarm area in the status area according to the configuration information, the following steps are specifically implemented:
Converting the segmentation results of the different state areas into the HSV color space to obtain color characterization values of the different areas; performing color clustering on each region through the DBSCAN algorithm, and selecting the color value corresponding to the center of the largest cluster of each region to obtain the dominant color of each region; and dividing the safety area and the alarm area in the status area according to the dominant color of each area and the configuration information.
In one embodiment, the processor, when executing the computer program to implement the step of dividing the security area and the alarm area in the status area according to the dominant color of each area and the configuration information, specifically implements the following steps:
calculating, according to the configuration file, the Euclidean distance between the preset alarm-area dominant color value and the dominant color value of each status area, and selecting the minimum Euclidean distance to determine the security area and the alarm area.
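A minimal sketch of this nearest-color match, assuming the configuration supplies a single preset alarm color and each status area's dominant color is already computed; the data shapes are illustrative:

```python
import math

def split_alarm_area(area_colors, alarm_preset):
    """area_colors: {area_id: color triplet}; alarm_preset: color triplet.

    The area whose dominant color lies at the smallest Euclidean distance
    from the preset alarm color is labelled the alarm area; all remaining
    status areas are treated as security areas.
    Returns (alarm_area_id, [security_area_ids]).
    """
    alarm_id = min(area_colors,
                   key=lambda aid: math.dist(area_colors[aid], alarm_preset))
    security_ids = [aid for aid in area_colors if aid != alarm_id]
    return alarm_id, security_ids
```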
In one embodiment, when the processor executes the computer program to implement the step of acquiring pointer information according to the pointer region segmentation result, the following steps are specifically implemented:
calculating the area of the region corresponding to the pointer region segmentation result to obtain the pointer area; calculating the mean value over all pixel points of each RGB channel within that region to obtain the pointer representative color; and integrating the pointer area and the pointer representative color to obtain the pointer information.
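The pointer-information step can be sketched as follows, assuming the segmentation result is available as a boolean mask over the cropped picture (the data layout is an assumption for illustration):

```python
import numpy as np

def pointer_info(image_rgb, mask):
    """image_rgb: (H, W, 3) array; mask: (H, W) boolean pointer mask.

    The pointer area is the pixel count of the mask, and the representative
    color is the per-channel RGB mean over the masked pixels.
    """
    area = int(mask.sum())                    # pointer area in pixels
    if area == 0:
        return {"area": 0, "color": np.zeros(3)}
    rep_color = image_rgb[mask].mean(axis=0)  # per-channel RGB mean
    return {"area": area, "color": rep_color} # integrated pointer information
```

With several pointers on one dial, the (area, representative color) pair is what lets each segmented pointer be told apart downstream.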
In one embodiment, when the processor executes the computer program to implement the step in which the instance segmentation model is obtained by training a DetectoRS model with a plurality of pictures carrying different-state-area masks, different-state-area category labels, pointer area masks, pointer area categories and pointer instrument panel reflection category labels as the sample set, the following steps are specifically implemented:
Performing augmentation and normalization operations on the plurality of pictures carrying different-state-area masks, different-state-area category labels, pointer area masks, pointer area category labels and pointer instrument panel reflection category labels, and then training the DetectoRS model to obtain the instance segmentation model.
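A minimal sketch of the augmentation and normalization applied to a training picture; the specific transforms shown (a random horizontal flip and per-channel standardization) are illustrative assumptions, and the DetectoRS training loop itself is omitted:

```python
import numpy as np

def augment_and_normalize(img, rng):
    """img: (H, W, 3) uint8 training picture. Returns a float32 picture.

    Augmentation here is a coin-flip horizontal mirror; normalization is
    per-channel standardization to zero mean and unit variance.
    """
    out = img.astype(np.float32)
    if rng.random() < 0.5:                  # augmentation: random horizontal flip
        out = out[:, ::-1, :]
    mean = out.mean(axis=(0, 1), keepdims=True)
    std = out.std(axis=(0, 1), keepdims=True) + 1e-6  # avoid division by zero
    return (out - mean) / std               # per-channel normalization
```

When a picture is flipped, its masks and labelled coordinates would need the same flip so annotations stay aligned with the pixels.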
The storage medium may be a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, an optical disk, or any other computer-readable storage medium that can store program code.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein may be implemented in electronic hardware, in computer software, or in a combination of the two. To clearly illustrate this interchangeability of hardware and software, the foregoing description has generally described the components and steps of each example in terms of function. Whether such functionality is implemented as hardware or software depends on the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementation decisions should not be interpreted as a departure from the scope of the present invention.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the device embodiments described above are merely illustrative. For example, the division of each unit is only one logic function division, and there may be another division manner in actual implementation. For example, multiple units or components may be combined or may be integrated into another system, or some features may be omitted, or not performed.
The steps in the method of the embodiment of the invention can be sequentially adjusted, combined and deleted according to actual needs. The units in the device of the embodiment of the invention can be combined, divided and deleted according to actual needs. In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a storage medium. Based on such understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes several instructions for causing a computer device (which may be a personal computer, a terminal, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention.
While the invention has been described with reference to certain preferred embodiments, it will be understood by those skilled in the art that various changes and equivalent substitutions may be made without departing from the scope of the invention. Therefore, the protection scope of the invention shall be subject to the protection scope of the claims.

Claims (8)

1. A multi-pointer instrument detection method, characterized by comprising the following steps:
acquiring an image of the instrument to obtain an initial image;
inputting the initial image into a target detection model for recognition to obtain a target detection result;
Verifying the target detection result to obtain verified information;
Cropping the initial image according to the target detection result and the verified information to obtain a high-definition pointer instrument picture;
Inputting the high-definition pointer instrument picture into an instance segmentation model for instance segmentation to obtain segmentation results of different status areas, a pointer region segmentation result and a pointer instrument panel reflection classification result; the pointer instrument panel reflection classification result indicates whether the pointer instrument panel reflects light, and if the pointer instrument panel is not affected by shadow factors, dominant color recognition is entered;
performing dominant color recognition on the segmentation results of the different status areas, and dividing the status area into a security area and an alarm area according to configuration information;
acquiring pointer information according to the pointer region segmentation result;
judging whether the pointer is in the alarm area according to the pointer area and the alarm area;
if a pointer is in the alarm area, generating alarm information for each such pointer and feeding the alarm information back to the terminal;
the target detection model is obtained by training a CenterNet model with a plurality of images carrying instrument coordinates and category labels as the sample set;
the instance segmentation model is obtained by training a DetectoRS model with a plurality of pictures carrying different-state-area masks, different-state-area category labels, pointer area masks, pointer area categories and pointer instrument panel reflection category labels as the sample set;
the verifying the target detection result to obtain verified information comprises:
judging whether the confidence of the target detection result exceeds a set confidence threshold;
if the confidence of the target detection result does not exceed the set confidence threshold, fine-tuning the sampling position and returning to the step of acquiring an image of the instrument to obtain an initial image;
if the confidence of the target detection result exceeds the set confidence threshold, screening out the target detection results whose confidence is higher than the set confidence threshold to obtain recognition objects;
judging whether overlapping content exists among the recognition objects;
if overlapping content exists among the recognition objects, selecting the object with the highest confidence to obtain the verified information;
if no overlapping content exists among the recognition objects, judging whether the aspect ratio of each recognition object falls within a set aspect-ratio threshold range and whether the proportion of the initial image occupied by the detection frame formed by the recognition object falls within a set area-ratio threshold range;
if the aspect ratio of the recognition object falls within the set aspect-ratio threshold range and the proportion of the initial image occupied by the detection frame formed by the recognition object falls within the set area-ratio threshold range, taking the recognition object as the verified information;
if the aspect ratio of the recognition object does not fall within the set aspect-ratio threshold range, or the proportion of the initial image occupied by the detection frame formed by the recognition object does not fall within the set area-ratio threshold range, fine-tuning the sampling position and returning to the step of acquiring an image of the instrument to obtain an initial image.
2. The multi-pointer instrument detection method according to claim 1, wherein the performing dominant color recognition on the segmentation results of the different status areas and dividing the status area into a security area and an alarm area according to the configuration information comprises:
converting the segmentation results of the different status areas into the HSV color space to obtain color characterization values of the different areas;
performing color clustering on each area through the DBSCAN algorithm, and selecting the color value corresponding to the center of the largest cluster of each area to obtain the dominant color of each area;
and dividing the status area into a security area and an alarm area according to the dominant color of each area and the configuration information.
3. The multi-pointer instrument detection method according to claim 2, wherein the dividing the status area into a security area and an alarm area according to the dominant color of each area and the configuration information comprises:
calculating, according to the configuration file, the Euclidean distance between the preset alarm-area dominant color value and the dominant color value of each status area, and selecting the minimum Euclidean distance to determine the security area and the alarm area.
4. The multi-pointer instrument detection method according to claim 1, wherein the obtaining pointer information according to the pointer region division result comprises:
calculating the area of the region corresponding to the pointer region segmentation result to obtain the pointer area;
Calculating the average value of all pixel points of each RGB channel in the region corresponding to the pointer region segmentation result to obtain pointer representative color;
and integrating the pointer area and the pointer representative color to obtain pointer information.
5. The multi-pointer instrument detection method according to claim 1, wherein the instance segmentation model is obtained by training a DetectoRS model with a plurality of pictures carrying different-state-area masks, different-state-area category labels, pointer area masks, pointer area categories and pointer instrument panel reflection category labels as the sample set, comprising:
performing augmentation and normalization operations on the plurality of pictures carrying different-state-area masks, different-state-area category labels, pointer area masks, pointer area category labels and pointer instrument panel reflection category labels, and then training the DetectoRS model to obtain the instance segmentation model.
6. A multi-pointer instrument detection device, characterized by comprising:
an initial image acquisition unit for acquiring an image of the meter to obtain an initial image;
the target detection unit is used for inputting the initial image into the target detection model for recognition so as to obtain a target detection result;
the verification unit, used for verifying the target detection result to obtain verified information;
the cropping unit, used for cropping the initial image according to the target detection result and the verified information to obtain a high-definition pointer instrument picture;
the instance segmentation unit, used for inputting the high-definition pointer instrument picture into an instance segmentation model for instance segmentation to obtain segmentation results of different status areas, a pointer region segmentation result and a pointer instrument panel reflection classification result; the pointer instrument panel reflection classification result indicates whether the pointer instrument panel reflects light, and if the pointer instrument panel is not affected by shadow factors, dominant color recognition is entered;
the status area determining unit, used for performing dominant color recognition on the segmentation results of the different status areas and dividing the status area into a security area and an alarm area according to the configuration information;
A pointer information acquisition unit for acquiring pointer information according to the pointer region division result;
the judging unit is used for judging whether the pointer is in the alarm area according to the pointer area and the alarm area;
The alarm information generating unit is used for generating alarm information of each pointer if the pointer is in the alarm area and feeding back the alarm information to the terminal;
the verification unit comprises:
a confidence judging subunit, used for judging whether the confidence of the target detection result exceeds a set confidence threshold, and if the confidence of the target detection result does not exceed the set confidence threshold, fine-tuning the sampling position and returning to the step of acquiring an image of the instrument to obtain an initial image;
a screening subunit, used for screening out, if the confidence of the target detection result exceeds the set confidence threshold, the target detection results whose confidence is higher than the set confidence threshold to obtain recognition objects;
an overlap judging subunit, used for judging whether overlapping content exists among the recognition objects, and if overlapping content exists among the recognition objects, selecting the object with the highest confidence to obtain the verified information;
a proportion judging subunit, used for judging, if no overlapping content exists among the recognition objects, whether the aspect ratio of each recognition object falls within a set aspect-ratio threshold range and whether the proportion of the initial image occupied by the detection frame formed by the recognition object falls within a set area-ratio threshold range; if the aspect ratio of the recognition object falls within the set aspect-ratio threshold range and the proportion of the initial image occupied by the detection frame formed by the recognition object falls within the set area-ratio threshold range, taking the recognition object as the verified information; and if the aspect ratio of the recognition object does not fall within the set aspect-ratio threshold range, or the proportion of the initial image occupied by the detection frame formed by the recognition object does not fall within the set area-ratio threshold range, fine-tuning the sampling position and returning to the step of acquiring an image of the instrument to obtain an initial image.
7. A computer device, characterized by comprising a memory storing a computer program and a processor which, when executing the computer program, implements the method according to any one of claims 1 to 5.
8. A storage medium, characterized in that it stores a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 5.
CN202011018641.1A 2020-09-24 2020-09-24 Multi-pointer instrument detection method and device, computer equipment and storage medium Active CN112115898B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011018641.1A CN112115898B (en) 2020-09-24 2020-09-24 Multi-pointer instrument detection method and device, computer equipment and storage medium


Publications (2)

Publication Number Publication Date
CN112115898A CN112115898A (en) 2020-12-22
CN112115898B true CN112115898B (en) 2024-07-02

Family

ID=73801638

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011018641.1A Active CN112115898B (en) 2020-09-24 2020-09-24 Multi-pointer instrument detection method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112115898B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112784854B (en) * 2020-12-30 2023-07-14 成都云盯科技有限公司 Clothing color segmentation extraction method, device and equipment based on mathematical statistics
CN113128353B (en) * 2021-03-26 2023-10-24 安徽大学 Emotion perception method and system oriented to natural man-machine interaction
CN113256624A (en) * 2021-06-29 2021-08-13 中移(上海)信息通信科技有限公司 Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
CN113947720B (en) * 2021-12-20 2022-05-20 广东科凯达智能机器人有限公司 Method for judging working state of density meter
CN114283413B (en) * 2021-12-22 2024-04-26 上海蒙帕智能科技股份有限公司 Method and system for identifying digital instrument readings in inspection scene
CN115980116B (en) * 2022-11-22 2023-07-14 宁波博信电器有限公司 High-temperature-resistant detection method and system for instrument panel, storage medium and intelligent terminal

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101660932A (en) * 2009-06-15 2010-03-03 浙江大学 Automatic calibration method of pointer type automobile meter
CN104008399A (en) * 2014-06-12 2014-08-27 哈尔滨工业大学 Instrument pointer jittering recognition method based on support vector machine during instrument detection

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107590498B (en) * 2017-09-27 2020-09-01 哈尔滨工业大学 Self-adaptive automobile instrument detection method based on character segmentation cascade two classifiers
CN111241947B (en) * 2019-12-31 2023-07-18 深圳奇迹智慧网络有限公司 Training method and device for target detection model, storage medium and computer equipment




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant