CN117894031A - Method for recognizing and photographing electric energy meter and readable storage medium - Google Patents


Info

Publication number
CN117894031A
Authority
CN
China
Prior art keywords
picture, identification, identifying, qualified, identified
Legal status
Pending
Application number
CN202410189115.3A
Other languages
Chinese (zh)
Inventor
滕铁军
王红运
陈冰
刘畅
付珊珊
Current Assignee
Beijing Hezhong Weiqi Technology Co ltd
Original Assignee
Beijing Hezhong Weiqi Technology Co ltd
Application filed by Beijing Hezhong Weiqi Technology Co ltd
Priority to CN202410189115.3A
Publication of CN117894031A

Landscapes

  • Studio Devices (AREA)

Abstract

The disclosure provides a method for identifying and photographing an electric energy meter and a readable storage medium. The method comprises: acquiring a picture of a photographed object, wherein the picture includes an identification area; and identifying the identification area to determine the photographed object corresponding to the picture. The method optimizes the operation flow for recording the electric energy meter's electricity data and ensures that the captured picture matches the collected object and that the recorded data is trustworthy. Further, the disclosed method can also assess the quality of the picture and the number of photographed objects in the picture, so as to improve the picture's credibility.

Description

Method for recognizing and photographing electric energy meter and readable storage medium
Technical Field
The present disclosure relates generally to the field of data identification, analysis, and processing. More particularly, the present disclosure relates to a method of electric energy meter identification photographing and a readable storage medium.
Background
With social development, the concepts of safe, reliable, economical, clean and green operation have permeated many industries. Domestic power enterprises continuously strengthen technological innovation, upgrade equipment, and deepen institutional reform to raise the intelligence and digitalization level of the power grid and to provide a strong power supply guarantee for China's economic and social development.
In delivering power to millions of households, power enterprises need to deploy a variety of metering devices. The electric energy meter is the most common basic instrument among them, and a large share of field operations revolve around it, such as new installation, account cancellation, fault handling, periodic rotation, capacity increase, and replacement. Among these operations, the most important is that the power company supervises users' electricity consumption behavior according to the electricity data recorded by the electric energy meter, which is also the key to raising the intelligence and digitalization level of operations.
In the traditional application scenario, a worker scans the bar code on the electric energy meter to confirm which meter is to be operated on, then photographs the meter and manually records and archives the current total energy reading shown on its screen.
However, the approach used in the application scenario described above has several problems. First, bar-code scanning and photographing must be carried out separately, which is cumbersome and increases the workload, and errors when the screen reading is entered by hand cannot be avoided. Second, the field operator performs bar-code scanning, photo taking and collection, and manual registration of the meter reading as separate operations, leaving room for data falsification, so the meter that is scanned, the meter that is photographed, and the meter whose reading is registered may not be the same device. Third, because the meter in the photo is not checked after photographing, or because some field operators do not follow the standard procedure, the collected photos may be of poor quality (for example, too dark, overexposed, blurred, containing no meter at all, or showing only part of the meter), which increases the auditors' workload, lowers auditing efficiency, and wastes resources on later re-photographing.
In view of the foregoing, there is a need for an innovative method of identifying and photographing an electric energy meter and a readable storage medium, so as to optimize the operation flow for recording the electric energy meter's electricity data, save operation resources, and improve the credibility of the retained photos.
Disclosure of Invention
To address at least one or more of the technical problems mentioned above, the present disclosure proposes, in various aspects, a method of electric energy meter identification photographing and a readable storage medium. The method can effectively optimize the operation flow for recording electric energy meter data, save operation resources, and improve the credibility of the retained pictures.
In a first aspect, the present disclosure provides a method of electric energy meter identification photographing, comprising: acquiring a picture of the photographed object, wherein the picture comprises an identification area; and identifying the identification area to determine the photographed object corresponding to the picture.
In a second aspect, the present disclosure provides a computer-readable storage medium having stored thereon computer program instructions for execution by one or more processors to perform the operations of the method described in the first aspect of the present disclosure.
According to the method for identifying and photographing an electric energy meter and the readable storage medium of the embodiments of the disclosure, consistency of the collected objects and credibility of the recorded data are guaranteed, the operation flow is optimized, and the operation time is shortened, in the following manner: a picture including the identification area of the electric energy meter is acquired, and the identification area in the picture is identified to determine the electric energy meter corresponding to the picture. Further, the embodiments of the disclosure can assess the quality of the pictures and the number of electric energy meters in them, so as to screen for pictures that are of qualified quality and include the electric energy meter to be collected, improving the credibility of the stored pictures. Furthermore, the embodiments of the disclosure can recognize the screen display content of the electric energy meter in the picture and store the recognized content in association with the photographed electric energy meter, which greatly reduces the operation resources needed to record the meter's electricity data.
Drawings
The above and other objects, features, and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description when read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar or corresponding parts:
FIG. 1a is a schematic flow chart of a method for identifying a photograph of an electric energy meter according to an embodiment of the disclosure;
FIG. 1b is a schematic diagram of a photo template of a preview screen according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method of identifying picture quality in accordance with an embodiment of the present disclosure;
FIG. 3 is a flow chart of a method of identifying the number of objects in a picture according to an embodiment of the disclosure;
FIG. 4a shows a schematic diagram of a picture of a subject taken in accordance with an embodiment of the present disclosure;
FIG. 4b illustrates a flow diagram of a method of identifying photo content in an embodiment of the disclosure;
fig. 5 is a flowchart illustrating a method for identifying a photo of an electric meter according to an embodiment of the disclosure.
Detailed Description
The following describes the embodiments of the present disclosure clearly and completely with reference to the accompanying drawings. It is evident that the described embodiments are only some, not all, of the embodiments of the disclosure. Based on the embodiments in this disclosure, all other embodiments obtained by those skilled in the art without inventive effort fall within the scope of the present disclosure.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present disclosure is for the purpose of describing particular embodiments only, and is not intended to be limiting of the disclosure. As used in the specification and claims of this disclosure, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should be further understood that the term "and/or" as used in the present disclosure and claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the claims, the term "if" may be interpreted as "when", "once", "in response to determining", or "in response to detecting", depending on the context. Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Specific embodiments of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1a is a schematic flow chart of a method for identifying and photographing an electric energy meter according to an embodiment of the disclosure, and the method of the disclosure will be further described with reference to fig. 1 a.
As shown in fig. 1a, in step S110, a picture of the photographed object is acquired, wherein the picture includes an identification area. The photographed object carries an identification that can be used to distinguish it from other photographed objects of the same class; therefore, the acquired picture of the photographed object may include the identification area. The photographed object may be an electric energy meter or any other object to which the method of the disclosure applies, and the identification may be any distinguishable mark such as a bar code, a two-dimensional code, or a digital symbol.
It will be appreciated that the photographed object may be a standardized device with a highly repeatable appearance, so a photographing template can guide the operator in aligning it. In one embodiment, when the photographed object is shot to acquire its picture, a live preview screen is displayed on the shooting page. The preview screen contains a photographing template that guides the operator to align the photographed object according to the specification; when the photographed object in the preview screen is aligned with the photographing template, the picture of the photographed object is captured automatically. Fig. 1b shows a schematic diagram of the photographing template of the preview screen according to an embodiment of the disclosure.
As shown in fig. 1b, the photographing template of the preview screen may include a preview screen bounding box 1, a photographed object bounding box 2, and an identification frame 3. The preview screen bounding box 1 can be used to prompt the operator about the optimal shooting distance: for example, when the boundary of the preview screen coincides with the preview screen bounding box 1, the distance between the camera and the photographed object is considered the optimal distance for photographing it. The photographed object bounding box 2 can be used to prompt the operator about the optimal shooting angle: for example, when the photographed object bounding box 2 coincides with the outermost contour of the photographed object in the preview screen, the camera angle is considered the optimal angle. The identification frame 3 can be used to prompt the operator whether the identification area in the preview screen is complete: for example, when the boundary of the identification area in the preview screen coincides with the identification frame 3, the identification area is considered complete. It will be appreciated that the photographed object may be aligned according to any one or all of the preview screen bounding box 1, the photographed object bounding box 2 and the identification frame 3. After the preview screen bounding box 1 is aligned with the boundary of the preview screen, the positions of the photographed object bounding box 2 and the identification frame 3 can be adjusted according to the live image. It will also be appreciated that, ideally, after the preview screen bounding box 1, the photographed object bounding box 2 and the identification frame 3 are all aligned, the area enclosed by the identification frame 3 in the preview screen of fig. 1b corresponds to the identification area in the captured picture of the photographed object.
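By way of illustration only, one plausible way to decide that the photographed object in the preview screen "coincides" with the photographing template is an intersection-over-union (IoU) test between a detected bounding box and the template box. The criterion, threshold and function names below are assumptions made for this sketch rather than the disclosed implementation.

def is_aligned(detected_box, template_box, iou_threshold=0.9):
    # Boxes are (x1, y1, x2, y2) in preview-screen pixel coordinates.
    ax1, ay1, ax2, ay2 = detected_box
    bx1, by1, bx2, by2 = template_box
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    # Treat the subject as aligned with the template when the overlap is high.
    return union > 0 and inter / union >= iou_threshold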
Referring back to fig. 1a, in step S120 the identification area is identified to determine the photographed object corresponding to the picture. The identification on each photographed object is unique, so the picture acquired in step S110 can be associated with a specific photographed object by identifying the identification area in the picture. Once the picture corresponds to a certain photographed object, the picture can be marked so as to distinguish it; alternatively, a database may be built in which different photographed objects are identified by their identification areas, and the various data of each photographed object are then stored by category. The various data may include other data recognized from the picture.
Specifically, identifying the identification area may, for example, involve cropping the identification area from the picture obtained in step S110 at its detected position, resizing the cropped identification area, and then performing recognition on the resized identification area. As described above, the identification may be a bar code, a two-dimensional code, a digital symbol, or the like, and the recognition technique applied to the identification area may be adapted to the actual application; the disclosure is not limited in this respect. The specific technical details of recognizing the identification area are known to those skilled in the art and are not described here. Further, the position of the identification area in the picture may be determined from the position of the identification frame in the preview screen described above. In one embodiment, the identification may be a bar code or two-dimensional code: after the identification area is cropped, it is resized until its long side is greater than 512 pixels, and the zbar library (a bar-code reader library written in C) is then used to decode the resized identification area, so as to determine the photographed object corresponding to the picture.
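By way of illustration, the crop-resize-decode step described above might look like the following Python sketch; the use of OpenCV and pyzbar (a Python binding to the zbar library), and the function and parameter names, are assumptions made here for illustration, not the disclosed implementation.

import cv2
from pyzbar import pyzbar

def decode_identification(image, box, min_long_side=512):
    # box is (x1, y1, x2, y2): the position of the identification area in the picture.
    x1, y1, x2, y2 = box
    region = image[y1:y2, x1:x2]
    # Resize the cropped identification area until its long side exceeds 512 pixels.
    h, w = region.shape[:2]
    scale = max(1.0, min_long_side / max(h, w))
    region = cv2.resize(region, (int(w * scale), int(h * scale)))
    # Decode the bar code / two-dimensional code with the zbar-backed reader.
    results = pyzbar.decode(region)
    return results[0].data.decode("utf-8") if results else None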
When a picture of the photographed object is acquired, the operator may not align the photographed object strictly according to the prompts, so the identification area in the acquired picture may be irregular or incomplete. Further, when the identification area is cropped from the picture, part of its image may be cut off, making it impossible to identify the photographed object from the identification area. In one implementation scenario, it is desirable to overcome these possible problems and improve recognition efficiency. Specifically, identifying the picture according to the identification area includes: expanding the identification area to obtain an expanded area that contains the identification area; and identifying the expanded area to determine the photographed object corresponding to the picture. Expanding the identification area means that, after its position is determined, its area is enlarged by a certain proportion to obtain the expanded area, which is then identified to determine the photographed object corresponding to the picture. Preferably, each side of the identification area may be extended outward by 30% of its length, so that the identification area is guaranteed to be complete while little extraneous information is included.
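A minimal sketch of the expansion step, under the assumption of axis-aligned rectangular identification areas and one reading of the 30% rule above (left/right sides moved by 30% of the width, top/bottom by 30% of the height); clamping to the picture bounds is an added assumption.

def expand_identification_area(box, image_shape, ratio=0.3):
    # Expand the identification area outward by `ratio` in each direction,
    # clamped to the picture bounds, to tolerate slightly misaligned shots.
    x1, y1, x2, y2 = box
    img_h, img_w = image_shape[:2]
    dx = int((x2 - x1) * ratio)
    dy = int((y2 - y1) * ratio)
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(img_w, x2 + dx), min(img_h, y2 + dy))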
It is also possible that the identification on the photographed object has been updated or maintained, so that the photographed object carries more than one identification. In one implementation scenario, it is desirable that, when the photographed object carries a plurality of identifications, the valid one can be recognized so as to improve recognition accuracy. Specifically, when the identification area includes a plurality of identification areas, identifying the picture according to the identification areas further includes: identifying the plurality of identification areas to determine the identification area with the largest area ratio; and identifying the identification area with the largest area ratio to determine the photographed object corresponding to the picture. It will be appreciated that when the identification on the photographed object is updated or maintained, the operator will usually paste the new identification over the old one to keep the identification unique; therefore, when a plurality of identifications are detected on the photographed object, the identification occupying the largest area can be regarded as the valid identification.
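Continuing the sketch, selecting the valid identification when several identification areas are detected can be as simple as taking the one with the largest area, an illustrative reading of the "largest area ratio" rule above.

def pick_effective_identification(boxes):
    # boxes: list of (x1, y1, x2, y2) identification areas detected in the picture.
    # The newest identification is normally pasted over the old one, so the
    # largest detected area is treated as the valid identification.
    return max(boxes, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))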
In one implementation scenario, after a picture of the photographed object is obtained and the photographed object corresponding to it is determined, it is desirable to assess the quality of the picture, so as to improve its credibility and facilitate its subsequent use. Specifically, the pictures further include qualified pictures, and the picture is identified to determine whether it is a qualified picture: if the picture is not a qualified picture, a picture of the photographed object is acquired again; if the picture is a qualified picture, the qualified picture is determined as a first picture to be processed. Fig. 2 is a flow chart illustrating a method for identifying picture quality according to an embodiment of the present disclosure, which will be further described with reference to fig. 2.
As shown in fig. 2, in step S210, a picture of a subject is acquired; wherein the picture includes an identification area. Next, step S220 is performed to identify the identification area, so as to determine the photographed object corresponding to the picture. It is to be understood that the step S210 and the step S220 shown in fig. 2 may be the step S110 and the step S120 shown in fig. 1a, and the same contents are not repeated herein.
Next, step S230 is performed to identify the picture and determine whether it is a qualified picture. When the picture of the photographed object is acquired, its quality may be poor owing to factors such as overly strong or weak ambient light or lens shake. The picture therefore needs to be checked to determine whether it is a qualified picture, so that it can be used later.
If the picture is a qualified picture, step S250 is entered to determine the picture as a first picture to be processed. The first picture to be processed may be saved directly as a result or may be used as material for further processing later, as the disclosure is not limited in this respect. If the picture is not a qualified picture, the process proceeds to step S210, and the picture of the photographed object is re-acquired until the picture is a qualified picture.
It will be appreciated that in image processing, an image can generally be described by its RGB (red, green, blue) values, i.e., by the red, green and blue component values (each in the range [0, 255]) of every pixel in the image. To facilitate processing and analysis of the picture data, the picture may be normalized, i.e., its RGB values need to be transformed.
In one embodiment, the picture may be normalized according to the following equation (1):

r = (R - 127.5f) / 127.5f
g = (G - 127.5f) / 127.5f
b = (B - 127.5f) / 127.5f      (1)

where R, G and B are the pixel values of the picture of the photographed object, each in the range [0, 255]; w and h denote the width and height of the picture, and equation (1) is applied to each of the w × h pixels; r, g and b are the normalized RGB pixel values; 127.5 serves as both the mean and the scale of the RGB pixel values, and the suffix f indicates a single-precision floating-point constant. Normalizing the picture of the photographed object according to equation (1) yields normalized data with values in the range [-1, 1].
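A minimal NumPy sketch of the normalization in equation (1); the array layout (H x W x 3, values in [0, 255]) is an assumption made for illustration.

import numpy as np

def normalize_picture(image):
    # image: H x W x 3 array of colour values in [0, 255].
    # Subtract the mean 127.5 and divide by the same scale, as in equation (1);
    # the result lies in [-1, 1].
    return (image.astype(np.float32) - 127.5) / 127.5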
It can also be appreciated that whether a picture is qualified can be determined by feeding the picture and/or its normalized data to an algorithm model, which processes and analyzes the input data to implement a specific function. For example, the picture and/or its normalized data are analyzed by the algorithm model to determine whether the picture is too dark, overexposed or unclear. If the picture is too dark, overexposed or unclear, it is determined not to be a qualified picture; otherwise it is determined to be a qualified picture.
In another embodiment, whether the picture is too dark or overexposed may be determined by the following algorithm.
First, single-channel gray-scale conversion is performed on the picture according to equation (2):
Gray=(0.299×R+0.587×G+0.114×B) (2)
where Gray is the gray value of each pixel after the conversion, and R, G and B are the red, green and blue values of that pixel in the picture. According to equation (2), the red, green and blue components are weighted and summed according to the visual sensitivity of the human eye, yielding the pixel values of the picture's gray-scale image.
Whether the picture is too dark can then be determined according to equation (3):

P_d = N(P_v(x, y) < T_d) / (w × h)      (3)

where P_v(x, y) is the value of pixel (x, y) in the gray-scale image obtained by the conversion, N(·) counts the pixels that satisfy the condition, T_d is a preset darkness threshold on the gray values, w and h are the width and height of the picture, and P_d is the proportion of over-dark pixels. The picture is evaluated according to equation (3): when P_d > 0.5, the picture corresponding to the gray-scale image is considered too dark.
Whether the picture is overexposed can then be determined according to equation (4):

P_l = N(P_v(x, y) > T_l) / (w × h)      (4)

where P_v(x, y) is again the value of pixel (x, y) in the gray-scale image, T_l is a preset brightness threshold, and P_l is the proportion of overexposed pixels. The picture is evaluated according to equation (4): when P_l > 0.45, the picture corresponding to the gray-scale image is considered overexposed.
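The gray-scale conversion of equation (2) and the over-dark / overexposure ratios of equations (3) and (4) can be sketched as follows; the concrete darkness and brightness thresholds are illustrative assumptions, while the proportions 0.5 and 0.45 come from the text above.

import numpy as np

def check_exposure(image_rgb, dark_thresh=40, bright_thresh=220):
    # Single-channel gray conversion as in equation (2).
    r = image_rgb[..., 0].astype(np.float32)
    g = image_rgb[..., 1].astype(np.float32)
    b = image_rgb[..., 2].astype(np.float32)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    p_d = float(np.mean(gray < dark_thresh))    # proportion of over-dark pixels, equation (3)
    p_l = float(np.mean(gray > bright_thresh))  # proportion of overexposed pixels, equation (4)
    if p_d > 0.5:
        return "too dark"
    if p_l > 0.45:
        return "overexposed"
    return "exposure ok"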
In yet another embodiment, whether the picture is clear may be determined with a quality assessment model. The quality assessment model can be trained as follows: acquire pictures of a number of photographed objects and name them in a uniform format; label all named pictures as either clear or blurred using labelImg (an open-source visual image annotation tool); split the labeled pictures into a training set, a validation set and a test set at a ratio of 8:1:1; and train the quality assessment model with the LCNet (lightweight convolutional neural network) algorithm based on PaddlePaddle (an open-source distributed deep learning platform).
Further, the picture and/or its normalized data are input into the quality assessment model, which outputs a result that includes a confidence value for the picture. When the confidence of the picture is greater than a given confidence threshold, the picture is judged to be clear; otherwise it is judged to be blurred. The confidence threshold indicates whether the picture is clear or blurred, and its specific value can be adjusted according to the actual application.
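The clear/blurred decision then reduces to a confidence comparison. In the sketch below, quality_model is a placeholder standing in for the trained LCNet classifier, and the threshold value is an illustrative assumption.

def is_clear(normalized_picture, quality_model, confidence_threshold=0.8):
    # quality_model is assumed to return the confidence that the picture
    # belongs to the "clear" class; the picture is judged clear when the
    # confidence exceeds the given threshold, and blurred otherwise.
    confidence = quality_model(normalized_picture)
    return confidence > confidence_threshold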
When a picture of the photographed object is acquired, the photographed object in the picture may be incomplete because the operator did not follow the standard procedure. In one implementation scenario, it is desirable to judge whether the photographed object is complete according to whether the number of photographed objects in the picture is at least 1, so as to improve the credibility of the picture and facilitate its subsequent use. Specifically, the picture is identified to determine the number of photographed objects in it: if the number of photographed objects in the picture is smaller than 1, a picture of the photographed object is acquired again; if the number of photographed objects in the picture is greater than or equal to 1, the picture is determined as a second picture to be processed. Fig. 3 is a flowchart illustrating a method for identifying the number of photographed objects in a picture according to an embodiment of the disclosure, which will be further described with reference to fig. 3.
As shown in fig. 3, in step S310, a picture of a subject is acquired; wherein the picture includes an identification area. Next, step S320 is performed to identify the identification area, so as to determine the photographed object corresponding to the picture. It is to be understood that the step S310 and the step S320 shown in fig. 3 may be the step S110 and the step S120 shown in fig. 1a, and the same contents are not repeated herein.
Next, step S330 is performed to identify the picture and determine the number of photographed objects in it. Ideally, the picture contains exactly one photographed object, which keeps interference information to a minimum while ensuring that the required information is complete. However, when the operator does not follow the standard procedure, the photographed object in the picture may be incomplete, i.e. the number of photographed objects is less than 1; the picture may also contain several photographed objects, i.e. the number is greater than 1. The case of several photographed objects in the picture is easier for the operator to notice and correct in time from the preview screen. The present disclosure therefore preferably detects the case in which the number of photographed objects in the picture is less than 1, i.e. it judges whether the photographed object in the picture is complete according to whether that number is less than 1.
If the number of photographed objects in the picture is not less than 1, the photographed object in the picture is complete and the process proceeds to step S350, where the picture is determined as a second picture to be processed. The second picture to be processed can be saved directly as a result or used as material for further processing. If the number of photographed objects in the picture is less than 1, the process returns to step S310 to re-acquire a picture of the photographed object until the number of photographed objects in the picture is not less than 1.
As previously described, the algorithm model may process, analyze, etc., the input data to achieve a particular function. For example, the picture and/or normalized data of the picture may be identified according to an algorithmic model to determine the number of objects captured in the picture. The method for normalizing the pictures may refer to the method described in the foregoing disclosure, and will not be described herein.
In one embodiment, the number of photographed objects in the picture may be identified with a target detection model. The target detection model can be trained as follows: acquire pictures of a number of photographed objects and name them in a uniform format; use labelImg (an open-source visual image annotation tool) to label all named pictures into two classes, fewer than 1 and not fewer than 1, according to whether the number of photographed objects in the picture is less than 1; split the labeled pictures into a training set, a validation set and a test set at a ratio of 8:1:1; train the target detection model with the PicoDet (lightweight target detection network) algorithm based on PaddlePaddle (an open-source distributed deep learning platform); set the learning rate, batch size and number of iterations for training; during training, evaluate the target detection model every 3 epochs and keep the model with the best accuracy when training finishes; and test the best-accuracy model with the test set until the accuracy of the target detection model reaches a preset first accuracy threshold. The first accuracy threshold is used to judge whether the accuracy of the target detection model meets the requirements of the actual application, and its specific value can be adjusted accordingly.
Further, the picture and/or its normalized data are input into the target detection model, which outputs a result that includes the class of each photographed object, its confidence, and the position information of its detection frame; the position information comprises the coordinates of the upper-left, upper-right, lower-left and lower-right corners of the detection frame. Whether the number of photographed objects in the picture is less than 1 is then judged from this output.
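A sketch of how such detection output could be used to count photographed objects; the dictionary keys, class name and confidence threshold are assumptions standing in for the PicoDet output format, not the disclosed one.

def count_photographed_objects(detections, confidence_threshold=0.5):
    # detections: list of dicts such as
    #   {"class": "meter", "confidence": 0.92, "box": (x1, y1, x2, y2)}
    boxes = [d for d in detections
             if d["class"] == "meter" and d["confidence"] >= confidence_threshold]
    return len(boxes)

# A qualified picture is kept as the second picture to be processed only when
# count_photographed_objects(detections) >= 1.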
It is also contemplated that the object may include a display screen, and the acquired picture of the object may include a screen display area, i.e., an area where the display screen is located. In one implementation scenario, after a picture of a photographed object is acquired, it is desirable to identify a screen region in the picture to determine the screen content of the screen region. Specifically, the picture further comprises a screen display area, and the screen display area is identified: if the screen display content of the screen display area is identified, generating a final result of the picture according to the screen display content; and if the screen display content is not identified, acquiring a picture of the shot object. Fig. 4a shows a schematic diagram of a picture of a subject in an embodiment of the present disclosure.
As shown in fig. 4a, the picture of the photographed object includes a screen display region P1 and an identification region P2. The screen display region P1 indicates the position of the display screen of the photographed object in the picture; the identification region P2 indicates the position of the identification of the photographed object in the picture. Specifically, the screen display region P1 and the identification region P2 may be identified according to the method of the present disclosure to achieve specific functions. The screen display region and identification region described below may refer to the screen display region P1 and the identification region P2 in fig. 4a; the same contents are not repeated.
Fig. 4b is a flowchart illustrating a method for identifying content of a picture on screen according to an embodiment of the present disclosure, and the method according to the present disclosure for identifying the picture shown in fig. 4a will be further described with reference to fig. 4 b.
As shown in fig. 4b, in step S410, a picture of the photographed object is acquired, wherein the picture includes an identification area and a screen display area. Next, step S420 is performed to identify the identification area and determine the photographed object corresponding to the picture. It will be appreciated that step S410 and step S420 shown in fig. 4b may be step S110 and step S120 shown in fig. 1a, and the screen display area and identification area shown in fig. 4b may be the screen display region P1 and identification region P2 shown in fig. 4a; the same contents are not repeated here.
Next, step S430 is performed to identify the screen display area, i.e., to recognize the screen display content shown on the display screen of the photographed object. The screen display content may take various forms such as numerals, letters or Chinese characters, and the disclosure is not limited in this respect.
Further, if the screen display content of the screen display area is recognized, the process proceeds to step S450, where the recognized screen display content is saved as the final result of the picture to which it corresponds. If the screen display content is not recognized, the process returns to step S410 to re-acquire a picture of the photographed object until the screen display content is recognized.
In one embodiment, the subject may be an electric energy meter. After the picture of the electric energy meter is obtained and the electric energy meter corresponding to the picture is determined according to the identification area on the electric energy meter, the content on the display screen of the electric energy meter can be further identified, so that the power consumption of a user is supervised and managed.
As described above, the picture and/or normalized data of the picture may likewise be identified according to an algorithmic model to determine the on-screen content of the on-screen region. The method for normalizing the pictures may refer to the method described in the foregoing disclosure, and will not be described herein.
In one embodiment, the screen display content of the screen display area may be recognized with a screen display recognition model. The screen display recognition model can be trained as follows: acquire pictures of a number of photographed objects and name them in a uniform format; use labelImg (an open-source visual image annotation tool) to annotate all named pictures with the screen display content of their screen display areas, the annotation being the screen display content itself; split the annotated pictures into a training set, a validation set and a test set at a ratio of 8:1:1; train the screen display recognition model with an OCR (optical character recognition) algorithm based on PaddlePaddle (an open-source distributed deep learning platform); set the learning rate, batch size and number of iterations for training; during training, evaluate the screen display recognition model every 3 epochs and keep the model with the best accuracy when training finishes; and test the best-accuracy model with the test set until the accuracy of the screen display recognition model reaches a preset second accuracy threshold. The second accuracy threshold is used to judge whether the accuracy of the screen display recognition model meets the requirements of the actual application, and its specific value can be adjusted accordingly.
Further, the picture and/or its normalized data are input into the screen display recognition model, which outputs a result that includes the recognized screen display text, its confidence, and the position information of the detection frame; the position information comprises the coordinates of the upper-left, upper-right, lower-left and lower-right corners of the detection frame. Finally, the screen display content is saved according to this output.
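As an illustration of reading the screen display area, the sketch below uses the open-source PaddleOCR toolkit as a stand-in for the trained screen display recognition model; the exact result structure may differ between PaddleOCR versions, so this is a sketch under stated assumptions rather than the disclosed implementation.

from paddleocr import PaddleOCR

# Stand-in for the trained screen display recognition model; the general
# Chinese OCR model is used here purely for illustration.
ocr = PaddleOCR(use_angle_cls=True, lang="ch")

def read_screen_display(screen_region_image):
    # Returns the recognized text line with the highest confidence, or None.
    result = ocr.ocr(screen_region_image, cls=True)
    lines = result[0] if result and result[0] else []
    if not lines:
        return None
    best = max(lines, key=lambda line: line[1][1])  # each line is (box, (text, confidence))
    return best[1][0]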
Fig. 5 is a flowchart illustrating a method for identifying and photographing an electric meter according to an embodiment of the present disclosure, which will be further described with reference to fig. 5.
As shown in fig. 5, in step S501, a picture of a subject is acquired; the picture comprises an identification area and a screen display area. Next, in step S502, the identification area is identified to determine the object to be photographed to which the picture corresponds. It is to be understood that the step S501 and the step S502 shown in fig. 5 may be the step S110 and the step S120 shown in fig. 1a, and the same contents are not repeated herein.
Next, step S503 is performed, in response to successful identification of the identification area, to normalize the picture to obtain normalized data of the picture. It will be appreciated that the identification of the identified area may be successful or may fail due to factors such as lack of operator compliance with the specifications, or unacceptable picture quality. The successful identification of the identification area means that the shot object corresponding to the picture can be determined according to the identification area; otherwise, the identification area is considered to be failed to be identified. The method for normalizing the picture may be referred to above, and will not be described herein.
In one implementation scenario, when identification of the identification area fails, it is desirable to provide a fallback so that operation resources are not wasted. Specifically, the method of the present disclosure may further, in response to a failure to identify the identification area, determine whether the identification time exceeds a time threshold: if it does, a manual mode is entered; if it does not, a picture of the photographed object is acquired again. The time threshold can be adjusted according to the actual application and the disclosure is not limited in this respect; for example, it may be 5 s, which guarantees recognition efficiency while leaving some room for error. When identification of the identification area fails but the time threshold has not been exceeded, the method returns to its initial state and re-acquires a picture of the photographed object; when the identification time exceeds the time threshold, the manual recognition mode is entered. Specifically, guide text may be displayed in the preview screen to prompt the operator to proceed manually.
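A sketch of the retry-then-fall-back behaviour just described; the callables capture_picture and identify are placeholders for the capturing and recognition steps, and the 5-second threshold is the example value given in the text.

import time

def identify_with_timeout(capture_picture, identify, time_threshold=5.0):
    # Keep re-acquiring pictures and retrying recognition until it succeeds
    # or the elapsed identification time exceeds the threshold.
    start = time.monotonic()
    while time.monotonic() - start <= time_threshold:
        picture = capture_picture()
        meter_id = identify(picture)
        if meter_id is not None:
            return meter_id, picture
    # Recognition timed out: the caller switches to the manual mode and shows
    # guide text in the preview screen.
    return None, None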
Next, step S504 is performed to identify the normalized data to determine whether the picture is a qualified picture. If the picture is not the qualified picture, returning to the step S501, and re-acquiring the picture of the shot object; if the picture is a qualified picture, the process proceeds to S506, and the number of objects to be photographed in the qualified picture is identified. The method for determining whether the picture is a qualified picture or not and the number of the shot objects in the picture can refer to the methods shown in fig. 2 and 3, and can also be adjusted according to the actual application situation, which is not described herein again.
As described above, when the number of the photographed objects in the picture is less than 1, it indicates that the obtained picture of the photographed object is incomplete. Therefore, in step S506, if the number of the photographed objects in the identified qualified picture is less than 1, returning to step S501 to acquire the picture of the photographed object; if the number of the shot objects in the identified qualified pictures is greater than or equal to 1, the process proceeds to step S507, where the qualified pictures and/or the saving paths of the qualified pictures with the number of the shot objects greater than or equal to 1 are saved. It can be understood that in step S505 and step S506, the picture is screened according to the quality of the picture and whether the shot object in the picture is complete, so the screened picture can be saved as the material for further identification. Specifically, in step S508, the normalized data of the saved picture may be identified to determine the screen content of the screen region. Finally, in step S509, the final result of the saved picture is generated from the on-screen content. The end result may be a document including an identification of the saved picture, on-screen content, for supervision and management of the photographed object.
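The overall flow of fig. 5 can be condensed into the following sketch; every callable argument is a placeholder standing in for one of the components described above, not part of the disclosure.

def process_meter_photo(capture_picture, identify_meter, assess_quality,
                        count_meters, read_screen):
    # identify_meter returns the meter id or None, assess_quality returns True
    # for a qualified picture, count_meters returns the number of complete
    # meters in the picture, read_screen returns the screen text or None.
    while True:
        picture = capture_picture()              # step S501
        meter_id = identify_meter(picture)       # step S502
        if meter_id is None:
            continue                             # or fall back to the manual mode after 5 s
        if not assess_quality(picture):          # steps S503-S505: normalize and check quality
            continue
        if count_meters(picture) < 1:            # step S506: the meter must be complete
            continue
        screen_text = read_screen(picture)       # step S508: recognize the screen display
        if screen_text is None:
            continue
        # Step S509: associate the reading with the identified meter and save it.
        return {"meter": meter_id, "reading": screen_text, "picture": picture}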
According to the method shown in fig. 5, the identification efficiency can be greatly improved, the operation resources can be saved, the identification content can be completed in a reasonable time, and the reliability of the data can be further ensured.
In some possible implementations, aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a processor to perform the method described above when the program code is executed by the processor.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
It will be apparent that such embodiments are provided by way of example only. Numerous modifications, changes, and substitutions will occur to those skilled in the art without departing from the spirit and scope of the present disclosure. It should be understood that various alternatives to the embodiments of the disclosure described herein may be employed in practicing the disclosure. The appended claims are intended to define the scope of the disclosure and are therefore to cover all equivalents or alternatives falling within the scope of these claims.

Claims (13)

1. A method for identifying and photographing an electric energy meter, comprising the steps of:
acquiring a picture of the shot object; wherein the picture comprises an identification area;
and identifying the identification area to determine the shot object corresponding to the picture.
2. The method of claim 1, wherein the method further comprises:
in response to a failure to identify the identification area, determining whether the identification time exceeds a time threshold:
if the identification time exceeds the time threshold, entering a manual mode;
and if the identification time does not exceed the time threshold, acquiring the picture of the shot object.
3. The method of claim 1, wherein identifying the picture according to the identification area comprises:
expanding the identification area to obtain an expanded area comprising the identification area;
and identifying the expansion area to determine the shot object corresponding to the picture.
4. The method of claim 1, wherein the identification region comprises a plurality of identification regions, and wherein identifying the picture according to the identification regions further comprises:
identifying the plurality of identification areas to determine the identification area with the largest area occupation ratio;
and identifying the identification area with the largest area ratio to determine the shot object corresponding to the picture.
5. The method of claim 1, wherein the method further comprises:
and responding to successful identification of the identification area, and carrying out normalization processing on the picture to obtain normalized data of the picture.
6. The method of claim 5, wherein the pictures further comprise qualified pictures,
the normalized data is identified to determine whether the picture is a qualified picture.
7. The method of claim 6, wherein:
if the picture is not a qualified picture, acquiring the picture of the shot object;
if the picture is a qualified picture, the number of the photographed objects in the qualified picture is identified.
8. The method according to claim 7, characterized in that
If the number of the shot objects in the identified qualified pictures is smaller than 1, obtaining pictures of the shot objects;
and if the number of the shot objects in the identified qualified pictures is greater than or equal to 1, saving the qualified pictures and/or saving paths of the qualified pictures, wherein the number of the shot objects is greater than or equal to 1.
9. The method of claim 8, wherein the picture further comprises a screen region,
identifying the stored normalized data of the picture to determine the screen display content of the screen display area;
and generating a final result of the saved picture according to the screen display content.
10. The method of claim 1, wherein the pictures further comprise qualified pictures,
identifying the picture to determine whether the picture is the qualified picture:
if the picture is not the qualified picture, acquiring the picture of the shot object;
and if the picture is the qualified picture, determining the qualified picture as a first picture to be processed.
11. The method of claim 1, wherein the method comprises the steps of,
identifying the picture to determine the number of the shot objects in the picture:
if the number of the shot objects in the picture is smaller than 1, acquiring the picture of the shot objects;
and if the number of the shot objects in the picture is not less than 1, determining the picture as a second picture to be processed.
12. The method of claim 1, wherein the picture further comprises a screen region,
identifying the screen display area:
if the screen display content of the screen display area is identified, generating a final result of the picture according to the screen display content;
and if the screen display content is not identified, acquiring the picture of the shot object.
13. A computer-readable storage medium having stored thereon computer program instructions for execution by one or more processors to perform the operations of the method of any one of claims 1-12.
CN202410189115.3A, filed 2024-02-20 (priority date 2024-02-20): Method for recognizing and photographing electric energy meter and readable storage medium; status: Pending; published as CN117894031A.


Publications (1)

Publication Number Publication Date
CN117894031A 2024-04-16

Family

ID=90644782



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination