CN112364780A - Method for identifying state of indicator lamp - Google Patents
- Publication number
- CN112364780A (application number CN202011264627.XA)
- Authority
- CN
- China
- Prior art keywords
- indicator lamp
- indicator light
- indicator
- training
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
- G06F18/24133—Distances to prototypes
- G06F18/24137—Distances to cluster centroids
- G06F18/2414—Smoothing the distance, e.g. radial basis function networks [RBFN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Abstract
The invention relates to the field of operation and maintenance of power equipment, in particular to a method for identifying the state of an indicator lamp. The method comprises the following steps: acquiring original image data of an indicator light; extracting characteristic information of the indicator light by a target detection method, predicting and locating its position, and cropping the target; judging the state of the cropped indicator lamp with a classifier; and transmitting the state information and attribute information of the indicator lamp to a monitoring system. Positioning of the indicator lamp's ROI is achieved by target detection, and the color and on-off state of the indicator lamp are then classified by a classifier. Because the method does not rely on color-feature processing, it is insensitive to factors such as ambient illumination; it can effectively improve the efficiency of indicator lamp state identification against complex backgrounds, reduce the manual workload, keep the error rate extremely low and make data management easier.
Description
Technical Field
The invention relates to the field of operation and maintenance of power equipment, in particular to a method for identifying states of an indicator lamp.
Background
At present, indicator lamps, pressing plates and air switches are important items in the routine inspection of indoor transformer substations and play an important role in daily power scheduling, operation and maintenance. Inspection of indoor substations is still mostly manual: inspection personnel periodically patrol the indicator lamps, pressing plates and air switches on the screen cabinets. However, an indoor substation contains a large number of pressing plates, indicator lamps and air switches; the indicator lamps in particular are small and numerous, so manual inspection suffers from low efficiency, a high error rate, difficult data management and high labor cost.
Disclosure of Invention
The invention aims to solve the technical problem of providing an indicator lamp state identification method, addressing the low efficiency, high error rate, difficult data management and high labor cost of manual inspection.
The technical scheme for solving the technical problems is as follows: an indicator lamp state identification method comprises the following steps:
s1, acquiring original image data of the indicator light;
s2, extracting characteristic information of the indicator light by a target detection method, predicting and locating the position of the indicator light, and cropping the target;
s3, judging the state of the cropped indicator lamp by using a classifier;
and S4, transmitting the status information and the attribute information of the indicator lamp to the monitoring system.
Further, in step S1, the original image data includes a photo and/or a video.
Further, in step S2, the method for target cropping includes the following steps:
m1, processing the original image data and establishing a label data file;
m2, creating an indicator light target detection model and training the indicator light target detection model;
m3, inputting the original image of the indicator lamp to be detected into the trained indicator lamp target detection model; when an indicator lamp is detected in the original image, its position is marked, cropped and extracted to form an indicator lamp image.
Further, the indicator light target detection model is a YOLOV3 network.
Further, in step M1, the method for creating a tag data file includes the following steps:
p1, carrying out equalization processing on the acquired original image data;
p2, establishing and forming an indicator light training image library by using the equalized original image data;
and P3, marking the original image data in the indicator light training image library to form a label data file.
Further, the label data file is an xml label file in a Pascal VOC format, and comprises an image ID, an image path, an image name, and image target pixel height and width information.
Further, in step M2, the method for training the indicator light target detection model includes:
q1, dividing the label data file into a training set and a test set according to the proportion;
q2, performing model training by adopting a training set;
q3, freezing (solidifying) the model parameters using the test set.
Further, in step S3, the method for determining the status of the indicator light includes:
n1, processing the indicator light sample and establishing a label file;
n2, creating an indicator light state classification model and training the indicator light state classification model;
n3, inputting the indicator light image into the trained indicator light state classification model; and outputting the result with the highest confidence coefficient.
Further, the indicator light state classification model adopts a VGG classification network.
Further, in step N2, the training method of the indicator light state classification model includes:
o1, dividing the label files into a training set and a test set according to the proportion;
o2, performing model training by adopting a training set;
and O3, freezing (solidifying) the model parameters using the test set.
The invention provides an indicator lamp state identification method, which comprises the following steps:
s1, acquiring original image data of the indicator light;
s2, extracting characteristic information of the indicator light by a target detection method, predicting and locating the position of the indicator light, and cropping the target;
s3, judging the state of the cropped indicator lamp by using a classifier;
and S4, transmitting the status information and the attribute information of the indicator lamp to the monitoring system.
Therefore, positioning of the ROI of the indicator lamp is achieved by a target detection method, and the color and on-off state of the indicator lamp are further classified by a classifier. Because the method does not rely on color-feature processing, it is insensitive to factors such as ambient illumination; it can effectively improve the efficiency of indicator lamp state identification against complex backgrounds, reduce the manual workload, keep the error rate extremely low and make data management easier.
Drawings
FIG. 1 is a schematic flow chart of the indicator lamp status recognition method according to the present invention;
FIG. 2 is a schematic diagram of the target cropping process in the indicator lamp status recognition method of the present invention;
FIG. 3 is a schematic diagram of the label data file creation process in the indicator lamp status recognition method of the present invention;
FIG. 4 is a schematic diagram of the training process of the indicator lamp target detection model in the indicator lamp status recognition method of the present invention;
FIG. 5 is a schematic diagram of the indicator lamp status determination process in the indicator lamp status recognition method of the present invention;
FIG. 6 is a schematic diagram of the training process of the indicator lamp state classification model in the indicator lamp status recognition method of the present invention;
FIG. 7 is a flowchart of step S2 in the indicator lamp status recognition method of the present invention;
FIG. 8 is a flowchart of step S3 in the indicator lamp status recognition method of the present invention.
Detailed Description
The principles and features of this invention are described below in conjunction with the following drawings, which are set forth by way of illustration only and are not intended to limit the scope of the invention.
As shown in fig. 1 to 8, the present invention provides an indicator light status identification method, which includes the following steps:
s1, acquiring original image data of the indicator light;
s2, extracting characteristic information of the indicator light by a target detection method, predicting and locating the position of the indicator light, and cropping the target;
s3, judging the state of the cropped indicator lamp by using a classifier;
and S4, transmitting the status information and the attribute information of the indicator lamp to the monitoring system.
Therefore, positioning of the ROI of the indicator lamp is achieved by a target detection method, and the color and on-off state of the indicator lamp are further classified by a classifier. Because the method does not rely on color-feature processing, it is insensitive to factors such as ambient illumination; it can effectively improve the efficiency of indicator lamp state identification against complex backgrounds, reduce the manual workload, keep the error rate extremely low and make data management easier.
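As a rough illustration only, the four-step flow S1 to S4 can be sketched in Python. Here `detector`, `classifier` and `monitor` are hypothetical callables standing in for the trained detection model, the trained classification model and the monitoring-system interface; none of them is part of the patented implementation.

```python
def identify_indicator_state(raw_image, detector, classifier, monitor):
    """Sketch of the S1-S4 pipeline: detect and crop each indicator
    light, classify its color/on-off state, and report each result.
    detector/classifier/monitor are assumed stand-in callables."""
    crops = detector(raw_image)        # S2: locate and crop indicator lights
    states = []
    for crop in crops:
        state = classifier(crop)       # S3: classify color and on/off state
        monitor(state)                 # S4: push the result to monitoring
        states.append(state)
    return states
```

A stub detector returning two crops and a stub classifier would produce two monitored results, one per detected lamp.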
As shown in fig. 1 to 8, on the basis of the above technical solutions, the indicator light status identification method of the present invention may further provide that: in step S1, the original image data includes photos and/or videos. Photos and videos are the most direct and convenient data to obtain; acquisition is easy and the cost is low. The color model of the acquired original image may be RGB, HSV or Lab. RGB is called a device-dependent color model: the color gamut it covers depends on the color characteristics of the display device's phosphor dots, so it is hardware-dependent. It is the most widely used and most familiar color model, and it adopts a three-dimensional rectangular coordinate system in which red, green and blue are additive primaries that can be mixed to produce composite colors. The HSV color model represents a color gamut that is a subset of the CIE chromaticity diagram; its nominal 100% saturation corresponds to a color purity that is generally below 100%. At the apex (i.e. origin) of the HSV cone, V = 0 and H and S are undefined; this represents black. At the center of the cone's top surface, S = 0, V = 1 and H is undefined; this represents white. The axis between these two points represents grays of gradually decreasing brightness, i.e. different gray levels; for these points S = 0 and the value of H is undefined. Adding white to a pure color lightens it, adding black darkens it, and mixing in white and black in different proportions yields the various shades of a hue. The Lab color model is defined by the International Commission on Illumination (CIE); any color occurring in nature can be expressed in Lab space, whose gamut is larger than that of RGB.
In addition, the Lab mode describes human visual perception numerically and is device-independent, so it makes up for the deficiency that the RGB and CMYK modes must rely on device color characteristics. In the present application, the color model used for the acquired original image is preferably RGB.
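For illustration, the conversion between the RGB and HSV models discussed above can be done with Python's standard-library `colorsys` module; this is a generic sketch of the color models, not part of the patented method.

```python
import colorsys

def rgb_to_hsv(r, g, b):
    """Convert 8-bit RGB to HSV: hue in degrees, saturation and value in [0, 1]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v

# Pure red sits at hue 0 with full saturation and value; for black (V = 0),
# H and S are degenerate, as noted in the text.
```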
As shown in fig. 1 to 8, on the basis of the above technical solutions, the indicator light status identification method of the present invention may further provide that: in step S2, the method for target cropping includes the following steps:
m1, processing the original image data and establishing a label data file;
m2, creating an indicator light target detection model and training the indicator light target detection model;
m3, inputting the original image of the indicator lamp to be detected into the trained indicator lamp target detection model; when an indicator lamp is detected in the original image, its position is marked, cropped and extracted to form an indicator lamp image.
In this way, a sufficient amount of raw image data is collected; the sample scenes of the raw image data should be as consistent as possible with those in step S1, samples of indicator lights of different colors should be as balanced as possible, and label files corresponding to the samples are made. A target detection network is selected, its parameters are initialized, and network parameter training is performed by stochastic gradient descent. When training reaches 100,000 steps or the loss function gradually converges, the network is verified; training stops when the miss rate on the recognition target is below 5%, and otherwise network parameter training continues.
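The stopping rule described above (verify once 100,000 steps are reached or the loss converges; stop only if the miss rate has fallen below 5%) might be expressed as follows; the function names and the miss-rate definition are illustrative, not from the patent.

```python
def miss_rate(num_targets, num_detected):
    """Fraction of ground-truth targets the detector failed to find."""
    return 1.0 - num_detected / num_targets

def should_stop_detection_training(step, loss_converged, miss,
                                   max_steps=100_000, max_miss=0.05):
    """Verification is triggered at 100k steps or on loss convergence;
    training stops only if the miss rate is then below 5%."""
    ready_to_verify = step >= max_steps or loss_converged
    return ready_to_verify and miss < max_miss
```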
As shown in fig. 1 to 8, on the basis of the above technical solutions, the indicator light status identification method of the present invention may further provide that: the target detection method is an image target detection method using a deep convolutional neural network, including but not limited to SSD, YOLO and Faster R-CNN. The YOLO method predicts from full-image information: image processing with YOLO is simple and direct, the prediction process is fast, and the network learns generalizable information about the target, giving it a certain universality. The Faster R-CNN algorithm achieves very high accuracy. SSD combines the regression idea of YOLO with the anchor mechanism of Faster R-CNN and regresses over multi-scale regions at all positions of the whole image, retaining YOLO's speed while making window prediction as accurate as Faster R-CNN's.
As shown in fig. 1 to 8, on the basis of the above technical solutions, the indicator light status identification method of the present invention may further provide that: the indicator light target detection model is a YOLOv3 network. YOLOv3 processes 608×608 images on a Pascal Titan X at up to 20 FPS, with an mAP@0.5 on COCO test-dev of up to 57.9%, similar in accuracy to other single-stage networks while being about 4 times faster. The YOLOv3 model is much more complex than its predecessors, and speed can be traded off against accuracy by changing the size of the model structure.
As shown in fig. 1 to 8, the method for identifying the status of the indicator light according to the present invention may further include, based on the above-described technical solutions: in step M1, the method for creating a tag data file includes the following steps:
p1, carrying out equalization processing on the acquired original image data;
p2, establishing and forming an indicator light training image library by using the equalized original image data;
and P3, marking the original image data in the indicator light training image library to form a label data file.
Thus, the image data is prepared. First, the acquired original indicator light data is equalized so that the numbers of indicator lights of different types are close to one another. Where sample counts differ greatly, sample augmentation methods such as rotation, flipping, noise and color jitter are applied to keep the distribution of the various samples balanced. An indicator light training image library is then built, each target image in the detection library is labeled, and the label data file is created. A further preferred technical scheme is as follows: the label file conforms to the Pascal VOC xml label standard, including the image ID, image path, image name and the pixel height and width of the image target. The pixel height and width of the image target are represented by the four coordinates of a rectangular box, xmax, xmin, ymax and ymin, where (xmin, ymin) is the top-left vertex of the rectangular box and (xmax, ymax) is the bottom-right vertex.
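A minimal Pascal VOC-style label file with the fields listed above (image name, path and the four bounding-box coordinates) could be generated with the standard library's `xml.etree`; the class name `indicator_light` and the field subset chosen here are assumed placeholders, not mandated by the text.

```python
import xml.etree.ElementTree as ET

def make_voc_label(filename, path, box, cls="indicator_light"):
    """Build a minimal Pascal VOC-style annotation string.
    box = (xmin, ymin, xmax, ymax): top-left and bottom-right corners."""
    xmin, ymin, xmax, ymax = box
    ann = ET.Element("annotation")
    ET.SubElement(ann, "filename").text = filename
    ET.SubElement(ann, "path").text = path
    obj = ET.SubElement(ann, "object")
    ET.SubElement(obj, "name").text = cls
    bnd = ET.SubElement(obj, "bndbox")
    for tag, value in (("xmin", xmin), ("ymin", ymin),
                       ("xmax", xmax), ("ymax", ymax)):
        ET.SubElement(bnd, tag).text = str(value)
    return ET.tostring(ann, encoding="unicode")
```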
As shown in fig. 1 to 8, the method for identifying the status of the indicator light according to the present invention may further include, based on the above-described technical solutions: in the step M2, the training method of the target detection model of the indicator light is as follows:
q1, dividing the label data file into a training set and a test set according to the proportion;
q2, performing model training by adopting a training set;
q3, freezing (solidifying) the model parameters using the test set.
In this way, the indicator light training image library is divided proportionally into a training set and a test set; the training set is used for model training and the test set for freezing (solidifying) the model parameters. Freezing proceeds as follows: the model parameters are saved every fixed number of steps during training; the saved parameters are evaluated on the test set; a model with stable performance is selected and its parameters such as weights and biases are stored, while unstable models are overwritten. The original image data of the indicator lamp to be detected is then input into the target detection network model trained in step M2; when an indicator lamp target is detected with confidence greater than 50%, its position is marked, cropped and extracted; otherwise the input image is deemed to contain no indicator lamp target.
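The confidence-gated cropping step might look as follows in outline. Modeling the image as a plain nested list of pixel rows, and the detections as `(confidence, xmin, ymin, xmax, ymax)` tuples, is a simplification for illustration only.

```python
def crop_detections(image, detections, conf_thresh=0.5):
    """Keep detections above the confidence threshold (>50%, per the text)
    and crop the corresponding regions. image is a list of pixel rows;
    each detection is (confidence, xmin, ymin, xmax, ymax) in pixels."""
    crops = []
    for conf, xmin, ymin, xmax, ymax in detections:
        if conf > conf_thresh:
            crops.append([row[xmin:xmax] for row in image[ymin:ymax]])
    return crops
```

With one detection at 90% confidence and one at 30%, only the first region is cropped out.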
As shown in fig. 1 to 8, the method for identifying the status of the indicator light according to the present invention may further include, based on the above-described technical solutions: in step S3, the method for determining the status of the indicator light includes:
n1, processing the indicator light sample and establishing a label file;
n2, creating an indicator light state classification model and training the indicator light state classification model;
n3, inputting the indicator light image into the trained indicator light state classification model; and outputting the result with the highest confidence coefficient.
Thus, a sufficient number of indicator light samples are collected; the image samples contain only indicator light targets, samples of indicator lights of different colors and different states (on and off) are kept as balanced as possible, and label files corresponding to the samples are made. An image classification network is selected, its parameters are initialized, and the network parameters are trained by stochastic gradient descent. When training reaches 100,000 steps or the loss function gradually converges, the network is verified; training stops when the classification accuracy exceeds 95%, and otherwise network parameter training continues.
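The parameter-freezing ("solidification") procedure described for step M2, i.e. evaluating periodically saved checkpoints on the test set, keeping the stable best one and discarding the rest, can be sketched as below. Representing checkpoints as a mapping from training step to test-set accuracy is an assumed simplification.

```python
def solidify(checkpoint_scores):
    """Keep the best-scoring saved checkpoint (whose weights and biases
    would be stored) and discard the others.
    checkpoint_scores maps training step -> test-set accuracy."""
    best_step = max(checkpoint_scores, key=checkpoint_scores.get)
    return best_step, {best_step: checkpoint_scores[best_step]}
```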
As shown in fig. 1 to 8, on the basis of the above technical solutions, the indicator light status identification method of the present invention may further provide that: the indicator lamp state classification model adopts a VGG classification network. The VGG network is deep and suited to large data sets; it was originally proposed to classify and localize the 1000 image classes of ImageNet. VGG uses small convolution kernels (predominantly 3×3). The kernel size affects both the parameter count of the model (training difficulty, and whether the model is convenient to deploy on mobile devices) and the receptive field (parameter updates, feature-map size, whether features are sufficiently extracted, and model complexity). By combining a deep stack of layers with small kernels, VGG reduces the number of model parameters while preserving the receptive field: stacking two 3×3 convolution layers is equivalent to one 5×5 kernel, and stacking three 3×3 layers is equivalent to one 7×7 kernel, with fewer parameters.
In addition, the pooling layers of the VGG network change from AlexNet's 3×3-kernel, stride-2 max-pooling to 2×2-kernel, stride-2 max-pooling, which better preserves the detailed information of the image.
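The parameter saving claimed for stacked 3×3 kernels can be checked by counting convolution weights. Bias terms are omitted and the 64-channel width is an arbitrary example, not a figure from the text.

```python
def conv_params(k, c_in, c_out):
    """Weight count of one k x k convolution layer (bias omitted)."""
    return k * k * c_in * c_out

C = 64
two_3x3 = 2 * conv_params(3, C, C)    # same receptive field as one 5x5
one_5x5 = conv_params(5, C, C)
three_3x3 = 3 * conv_params(3, C, C)  # same receptive field as one 7x7
one_7x7 = conv_params(7, C, C)
# 2 * 9 * C^2 < 25 * C^2 and 3 * 9 * C^2 < 49 * C^2: fewer parameters
# for the same receptive field, as the text states.
```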
As shown in fig. 1 to 8, the method for identifying the status of the indicator light according to the present invention may further include, based on the above-described technical solutions: in step N2, the training method of the indicator light state classification model includes:
o1, dividing the label files into a training set and a test set according to the proportion;
o2, performing model training by adopting a training set;
and O3, freezing (solidifying) the model parameters using the test set.
Data preparation and network training are substantially the same as in step M2, with the specific differences embodied as follows. The indicator light sample is an image containing only the indicator light, i.e. the indicator light occupies no less than 75% of the image area, and the indicator light sample label contains only the color and state label information of the indicator light. In this example binary coding is used for labeling: each color is assigned a two-bit code (00, 01, 10 or 11), and the state is one bit, 0 for off and 1 for on. For example, a white indicator light (color code 00) that is on is labeled 001, and a red indicator light (color code 01) that is off is labeled 010. The image to be detected is the indicator light image cropped after target recognition, and the result with the highest confidence is taken as the output.
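The three-bit labeling scheme (two color bits followed by one state bit) can be sketched as follows. Only the white=00 and red=01 codes are recoverable from the examples in the text, so the remaining two color codes are left unassigned here.

```python
COLOR_BITS = {"white": "00", "red": "01"}   # codes "10" and "11" unassigned here
STATE_BIT = {"off": "0", "on": "1"}

def encode_label(color, state):
    """Three-bit label: two color bits followed by one on/off bit,
    matching the text's examples (white+on -> 001, red+off -> 010)."""
    return COLOR_BITS[color] + STATE_BIT[state]

def decode_label(bits):
    """Invert the encoding back to (color, state)."""
    color = {v: k for k, v in COLOR_BITS.items()}[bits[:2]]
    state = {v: k for k, v in STATE_BIT.items()}[bits[2]]
    return color, state
```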
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (10)
1. An indicator lamp state identification method is characterized by comprising the following steps:
s1, acquiring original image data of the indicator light;
s2, extracting characteristic information of the indicator light by a target detection method, predicting and locating the position of the indicator light, and cropping the target;
s3, judging the state of the cropped indicator lamp by using a classifier;
and S4, transmitting the status information and the attribute information of the indicator lamp to the monitoring system.
2. The indicator lamp status recognition method according to claim 1, characterized in that: in step S1, the original image data includes a photo and/or a video.
3. The indicator lamp status recognition method according to claim 1, characterized in that: in step S2, the method for target cropping includes the following steps:
m1, processing the original image data and establishing a label data file;
m2, creating an indicator light target detection model and training the indicator light target detection model;
m3, inputting the original image of the indicator lamp to be detected into the trained indicator lamp target detection model; when an indicator lamp is detected in the original image, its position is marked, cropped and extracted to form an indicator lamp image.
4. The indicator lamp status recognition method according to claim 3, characterized in that: the indicator light target detection model is a YOLOV3 network.
5. The indicator lamp status recognition method according to claim 3, characterized in that: in step M1, the method for creating a tag data file includes the following steps:
p1, carrying out equalization processing on the acquired original image data;
p2, establishing and forming an indicator light training image library by using the equalized original image data;
and P3, marking the original image data in the indicator light training image library to form a label data file.
6. The indicator lamp status recognition method according to claim 5, characterized in that: the label data file is an xml label file in a Pascal VOC format and comprises an image ID, an image path, an image name, and image target pixel height and width information.
7. The indicator lamp status recognition method according to claim 5, characterized in that: in the step M2, the training method of the target detection model of the indicator light is as follows:
q1, dividing the label data file into a training set and a test set according to the proportion;
q2, performing model training by adopting a training set;
q3, freezing (solidifying) the model parameters using the test set.
8. The indicator lamp status recognition method according to claim 3, characterized in that: in step S3, the method for determining the status of the indicator light includes:
n1, processing the indicator light sample and establishing a label file;
n2, creating an indicator light state classification model and training the indicator light state classification model;
n3, inputting the indicator light image into the trained indicator light state classification model; and outputting the result with the highest confidence coefficient.
9. The status identification method of an indicator light of claim 8, wherein: the indicator lamp state classification model adopts a VGG classification network.
10. The status identification method of an indicator light of claim 8, wherein: in step N2, the training method of the indicator light state classification model includes:
o1, dividing the label files into a training set and a test set according to the proportion;
o2, performing model training by adopting a training set;
and O3, freezing (solidifying) the model parameters using the test set.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011264627.XA CN112364780A (en) | 2020-11-11 | 2020-11-11 | Method for identifying state of indicator lamp |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011264627.XA CN112364780A (en) | 2020-11-11 | 2020-11-11 | Method for identifying state of indicator lamp |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112364780A true CN112364780A (en) | 2021-02-12 |
Family
ID=74514621
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011264627.XA Pending CN112364780A (en) | 2020-11-11 | 2020-11-11 | Method for identifying state of indicator lamp |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112364780A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113345036A (en) * | 2021-05-24 | 2021-09-03 | 广西电网有限责任公司电力科学研究院 | HSV (hue, saturation, value) feature transformation based indicator lamp state identification method |
CN114821194A (en) * | 2022-05-30 | 2022-07-29 | 深圳市科荣软件股份有限公司 | Equipment running state identification method and device |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392116A (en) * | 2017-06-30 | 2017-11-24 | 广州广电物业管理有限公司 | A kind of indicator lamp recognition methods and system |
CN108875608A (en) * | 2018-06-05 | 2018-11-23 | 合肥湛达智能科技有限公司 | A kind of automobile traffic signal recognition method based on deep learning |
CN110069986A (en) * | 2019-03-13 | 2019-07-30 | 北京联合大学 | A kind of traffic lights recognition methods and system based on mixed model |
CN111639647A (en) * | 2020-05-22 | 2020-09-08 | 深圳市赛为智能股份有限公司 | Indicating lamp state identification method and device, computer equipment and storage medium |
CN111666824A (en) * | 2020-05-14 | 2020-09-15 | 浙江工业大学 | Color attribute and machine learning-based indicator light identification method for mobile robot |
- 2020-11-11: CN application CN202011264627.XA filed, published as CN112364780A; status: Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392116A (en) * | 2017-06-30 | 2017-11-24 | 广州广电物业管理有限公司 | A kind of indicator lamp recognition methods and system |
CN108875608A (en) * | 2018-06-05 | 2018-11-23 | 合肥湛达智能科技有限公司 | A kind of automobile traffic signal recognition method based on deep learning |
CN110069986A (en) * | 2019-03-13 | 2019-07-30 | 北京联合大学 | A kind of traffic lights recognition methods and system based on mixed model |
CN111666824A (en) * | 2020-05-14 | 2020-09-15 | 浙江工业大学 | Color attribute and machine learning-based indicator light identification method for mobile robot |
CN111639647A (en) * | 2020-05-22 | 2020-09-08 | 深圳市赛为智能股份有限公司 | Indicating lamp state identification method and device, computer equipment and storage medium |
Non-Patent Citations (1)
Title |
---|
何富贵 (He Fugui): "Python Deep Learning: Logic, Algorithms and Programming in Practice" (《Python深度学习 逻辑、算法与编程实战》), 30 September 2020 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113345036A (en) * | 2021-05-24 | 2021-09-03 | 广西电网有限责任公司电力科学研究院 | HSV (hue, saturation, value) feature transformation based indicator lamp state identification method |
CN114821194A (en) * | 2022-05-30 | 2022-07-29 | 深圳市科荣软件股份有限公司 | Equipment running state identification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109711319B (en) | Method and system for establishing imperfect grain image recognition sample library | |
CN103891294B (en) | The apparatus and method coded and decoded for HDR image | |
CN108268895A (en) | The recognition methods of tobacco leaf position, electronic equipment and storage medium based on machine vision | |
US8879849B2 (en) | System and method for digital image signal compression using intrinsic images | |
CN112364780A (en) | Method for identifying state of indicator lamp | |
CN103646392B (en) | Backlighting detecting and equipment | |
CN105740774A (en) | Text region positioning method and apparatus for image | |
CN103402117A (en) | Method for detecting color cast of video image based on Lab chrominance space | |
Zakir et al. | Road sign segmentation based on colour spaces: A Comparative Study | |
US10417772B2 (en) | Process to isolate object of interest in image | |
CN108564631A (en) | Car light light guide acetes chinensis method, apparatus and computer readable storage medium | |
Ganesan et al. | Value based semi automatic segmentation of satellite images using HSV color space, histogram equalization and modified FCM clustering algorithm | |
CN108734074A (en) | Fingerprint identification method and fingerprint identification device | |
CN105679277A (en) | Color adjusting method of display screen and mobile terminal | |
CN111311500A (en) | Method and device for carrying out color restoration on image | |
CN110719382B (en) | Color replacement method and device | |
CN106960188B (en) | Weather image classification method and device | |
CN114926661B (en) | Textile surface color data processing and identifying method and system | |
CN110160750B (en) | LED display screen visual detection system, detection method and detection device | |
Wang et al. | Multi-angle automotive fuse box detection and assembly method based on machine vision | |
CN102930289A (en) | Method for generating mosaic picture | |
CN115546141A (en) | Small sample Mini LED defect detection method and system based on multi-dimensional measurement | |
CN109040598A (en) | Image processing method, device, computer readable storage medium and electronic equipment | |
CN105631812A (en) | Control method and control device for performing color enhancement on displayed image | |
CN109584228A (en) | Rotor winding image detecting method based on bianry image and model transfer learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20210212 ||