CN116863333A - AI intelligent detection method for FSU equipment working state - Google Patents


Info

Publication number
CN116863333A
Authority
CN
China
Prior art keywords
image
target
contour
fsu
indicator lamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310782184.0A
Other languages
Chinese (zh)
Inventor
李欠平
唐建中
温庆芝
孙宗宗
裴福平
罗焕平
张华斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mastercom Technology Corp
Original Assignee
Shenzhen Mastercom Technology Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mastercom Technology Corp filed Critical Shenzhen Mastercom Technology Corp
Priority to CN202310782184.0A priority Critical patent/CN116863333A/en
Publication of CN116863333A publication Critical patent/CN116863333A/en
Pending legal-status Critical Current


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses an AI intelligent detection method for the working state of an FSU device, relating to the field of image processing. The method comprises the following steps: identifying the position of an indicator light in a collected image based on a preset target recognition model, and cropping the region image where the indicator light is located as an indicator light image; converting the indicator light image into an HSV indicator light image, and performing color extraction on the HSV indicator light image based on a preset color threshold range to obtain a target indicator light image; and performing contour extraction on the target indicator light image to obtain a target indicator light contour image, and determining the working state of the FSU device corresponding to the collected image according to the area of the contours in the target indicator light contour image. By adopting the technical scheme of the application, the efficiency and accuracy of identifying the working state of FSU equipment can be improved.

Description

AI intelligent detection method for FSU equipment working state
Technical Field
The application relates to the technical field of image processing, in particular to an AI intelligent detection method for the working state of FSU equipment.
Background
An FSU (Field Supervision Unit, an end-station data collector) device exists as a standalone data-management unit for gathering intelligent device data or on-board sensor data and uploading the gathered data to a server platform; the FSU can also monitor the state of the connected equipment, so that the running state of that equipment is known at any time.
At present, the working state of FSU equipment is quality-inspected mainly by manual inspection or by a simple target detection method. In manual quality inspection, front-line construction personnel photograph the FSU device and upload the photos to a system, and quality inspectors then manually open the uploaded FSU device photos and check the FSU working state. Manual inspection is also after-the-fact: when a device fails inspection, construction personnel must return to the site a second time for rework, which is inefficient; furthermore, manual inspection can be opaque and non-standardized, making supervision difficult. The simple target detection method mainly collects pictures of FSU equipment and identifies the detection targets directly after simple training, but because the field environment is complex and contains many interference factors, the recognition accuracy is low and the inspection quality cannot be guaranteed.
In summary, how to improve the efficiency and accuracy of identifying the working state of FSU equipment is a pressing technical problem in the art.
Disclosure of Invention
The application mainly aims to provide an AI (Artificial Intelligence) intelligent detection method for the working state of FSU equipment, which aims to improve the efficiency and accuracy of identifying the working state of FSU equipment.
In order to achieve the above object, the present application provides an AI intelligent detection method for an FSU device working state, where the AI intelligent detection method for an FSU device working state includes:
identifying the position of an indicator lamp in the acquired image based on a preset target identification model, and intercepting an area image where the position of the indicator lamp is located as an indicator lamp image;
converting the indicator light image into an HSV (Hue, Saturation, Value) indicator light image, and performing color extraction on the HSV indicator light image based on a preset color threshold range to obtain a target indicator light image;
and carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
Optionally, the step of determining the working state of the FSU device corresponding to the acquired image according to the area of the contour in the contour image of the target indicator light includes:
filtering a contour with an area smaller than a preset first area threshold value in the contour image of the target indicator lamp to obtain a target contour;
judging whether the area of the target contour exceeds a preset second area threshold, wherein the second area threshold is larger than or equal to the first area threshold;
and if the area of the target contour exceeds the second area threshold, determining that the working state of the FSU equipment corresponding to the acquired image is a normal state.
Optionally, before the step of identifying the position of the indicator light in the acquired image based on the preset object identification model, the method further comprises:
training data are input into an initial target recognition model based on a YOLO (You Only Look Once) target detection algorithm for training, so as to obtain a target recognition model comprising a weight parameter file and a target detection box, where the training data are FSU device images in which the indicator light positions have been annotated with detection boxes.
Optionally, the step of identifying the position of the indicator light in the collected image based on the preset target identification model includes:
inputting an acquired image to the target recognition model;
and identifying the position of the indicator lamp in the acquired image according to the weight parameter file, and labeling the position of the indicator lamp through the target detection frame.
Optionally, the step of performing color extraction on the HSV indicator light image based on a preset color threshold range to obtain a target indicator light image includes:
traversing pixel points in the HSV indicator light image, and judging whether the color of the pixel points belongs to a preset color threshold range or not;
if the color of the pixel point belongs to the color threshold range, extracting the pixel point;
and generating a target indicator lamp image according to each extracted pixel point.
Optionally, the step of performing contour extraction on the target indicator image to obtain a target indicator contour image includes:
identifying a contour edge in the target indicator image based on a Canny edge detection algorithm;
and carrying out contour extraction on the target indicator lamp image according to the contour edge to obtain a target indicator lamp contour image.
Optionally, after the step of determining the working state of the FSU device corresponding to the acquired image according to the area of the contour in the contour image of the target indicator, the method further includes:
if the working state is a normal state, outputting normal prompt information;
if the working state is abnormal, outputting abnormal prompt information and debugging suggestions.
The AI intelligent detection method for the working state of the FSU device provided by the embodiment of the application comprises the following steps: identifying the position of an indicator lamp in the acquired image based on a preset target identification model, and intercepting an area image where the position of the indicator lamp is located as an indicator lamp image; converting the indicator light image into an HSV indicator light image, and carrying out color extraction on the HSV indicator light image based on a preset color threshold range so as to obtain a target indicator light image; and carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
Compared with conventional methods for detecting the working state of FSU equipment, the method provided by the application identifies the position of the indicator light in the collected image through a preset target recognition model and crops the region image where the indicator light is located as the indicator light image; then converts the indicator light image into an HSV indicator light image, extracts the corresponding colors from the HSV indicator light image based on a preset color threshold range, and takes the pixels extracted according to the color threshold range as the target indicator light image; and finally performs contour extraction on the target indicator light image to obtain a target indicator light contour image, and determines the working state of the FSU device corresponding to the collected image according to the area of one or more contours in the target indicator light contour image.
Therefore, the application identifies the position of the indicator light in the collected image through the preset target recognition model, then extracts HSV colors from the identified region, and determines the working state of the FSU device using edge detection, thereby greatly improving the efficiency and accuracy of identifying the working state of FSU equipment.
Drawings
FIG. 1 is a schematic flow chart of a first embodiment of an AI intelligent detection method for operating status of FSU device;
fig. 2 is a schematic flow chart of a second embodiment of an AI intelligent detection method for an FSU device operating state according to the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings), and if the particular posture is changed, the directional indicator is changed accordingly.
In the present application, unless specifically stated and limited otherwise, the terms "connected," "affixed," and the like are to be construed broadly, and for example, "affixed" may be a fixed connection, a removable connection, or an integral body; can be mechanically or electrically connected; either directly or indirectly, through intermediaries, or both, may be in communication with each other or in interaction with each other, unless expressly defined otherwise. The specific meaning of the above terms in the present application can be understood by those of ordinary skill in the art according to the specific circumstances.
Furthermore, descriptions such as "first," "second," and the like are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the embodiments may be combined with each other, but only on the basis that the combination can be realized by those skilled in the art; when the technical solutions are contradictory or cannot be realized, their combination should be considered absent and not within the scope of protection claimed in the present application.
The embodiment of the application provides the overall conception of each embodiment of the AI intelligent detection method for the working state of the FSU equipment.
In the embodiment of the application, the FSU device exists as a standalone data-management unit for gathering intelligent device data or on-board sensor data and uploading the gathered data to a server platform; the FSU can also monitor the state of the connected equipment, so that the running state of that equipment is known at any time.
At present, the working state of FSU equipment is quality-inspected mainly by manual inspection or by a simple target detection method. In manual quality inspection, front-line construction personnel photograph the FSU device and upload the photos to a system, and quality inspectors then manually open the uploaded FSU device photos and check the FSU working state. Manual inspection is also after-the-fact: when a device fails inspection, construction personnel must return to the site a second time for rework, which is inefficient; furthermore, manual inspection can be opaque and non-standardized, making supervision difficult. The simple target detection method mainly collects pictures of FSU equipment and identifies the detection targets directly after simple training, but because the field environment is complex and contains many interference factors, the recognition accuracy is low and the inspection quality cannot be guaranteed.
In summary, how to improve the efficiency and accuracy of identifying the working state of FSU equipment is a pressing technical problem in the art.
Aiming at the problems, the embodiment of the application provides an AI intelligent detection method for the working state of FSU equipment, which comprises the following steps: identifying the position of an indicator lamp in the acquired image based on a preset target identification model, and intercepting an area image where the position of the indicator lamp is located as an indicator lamp image; converting the indicator light image into an HSV indicator light image, and carrying out color extraction on the HSV indicator light image based on a preset color threshold range so as to obtain a target indicator light image; and carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
Compared with conventional methods for detecting the working state of FSU equipment, the method provided by the application identifies the position of the indicator light in the collected image through a preset target recognition model and crops the region image where the indicator light is located as the indicator light image; then converts the indicator light image into an HSV indicator light image, extracts the corresponding colors from the HSV indicator light image based on a preset color threshold range, and takes the pixels extracted according to the color threshold range as the target indicator light image; and finally performs contour extraction on the target indicator light image to obtain a target indicator light contour image, and determines the working state of the FSU device corresponding to the collected image according to the area of one or more contours in the target indicator light contour image.
Therefore, the application identifies the position of the indicator light in the collected image through the preset target recognition model, then extracts HSV colors from the identified region, and determines the working state of the FSU device using edge detection, thereby greatly improving the efficiency and accuracy of identifying the working state of FSU equipment.
Based on the above general conception of the AI intelligent detection method for the working state of the FSU device, various embodiments of the AI intelligent detection method for the working state of the FSU device are provided.
Referring to fig. 1, fig. 1 is a flowchart of a first embodiment of an AI intelligent detection method for an FSU device operating state according to the present application. It should be noted that although a logical order is depicted in the flowchart, in some cases the steps depicted or described may be performed in a different order than presented herein.
For ease of understanding and explanation, this embodiment uses a terminal device as the direct execution body to explain the AI intelligent detection method for the working state of the FSU device of the present application.
As shown in fig. 1, in this embodiment, the AI intelligent detection method for the working state of the FSU device of the present application may include:
Step S10, identifying the position of an indicator light in the collected image based on a preset target recognition model, and cropping the region image where the indicator light is located as an indicator light image;
In this embodiment, the terminal device collects an image of the FSU device through an external collecting device, and the collected image contains the indicator lights of the FSU device. After the image is collected, the terminal device identifies the position of the indicator light in the collected image based on a preset target recognition model, and also crops, based on the target recognition model, the region image where the indicator light is located in the collected image to serve as the indicator light image.
It should be noted that, in this embodiment, the preset target recognition model is a prediction model constructed on a target detection algorithm; the YOLO algorithm greatly accelerates recognition of the indicator light position in the collected image by casting the target detection task as a regression problem.
In addition, in a possible embodiment, the terminal device collects multiple images of the same FSU device, and the target recognition model can process the multiple images simultaneously to recognize the positions of the respective indicator lights in the multiple images, so that accuracy of identifying the positions of the indicator lights is improved.
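As an illustrative sketch (not the patent's code), once the target recognition model has returned a detection box for the indicator light, cropping the region image out of the collected image reduces to simple slicing; the `(x1, y1, x2, y2)` box format below is an assumption for illustration:

```python
def crop_region(image, box):
    """Crop the detected indicator-light region from the collected image.

    `image` is a 2-D list of pixel rows; `box` is a hypothetical
    (x1, y1, x2, y2) detection box in pixel coordinates.
    """
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```

The cropped result is then used as the indicator light image for the subsequent HSV color extraction.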
Step S20, converting the indicator light image into an HSV indicator light image, and carrying out color extraction on the HSV indicator light image based on a preset color threshold range so as to obtain a target indicator light image;
In this embodiment, a color threshold range is preset in the terminal device; the terminal device converts the cropped indicator light image into an HSV indicator light image, then performs color extraction on the HSV indicator light image according to the color threshold range, and takes the extracted pixels as the target indicator light image.
It should be noted that, in this embodiment, the terminal device converts the cropped indicator light image into the HSV indicator light image through the HSV model. The HSV model is a color model oriented to human perception: the channels of the HSV color space represent Hue, Saturation, and Value (brightness) respectively, so the brightness, hue, and vividness of a color can be expressed intuitively. The HSV color space can be described by a conical space model. At the apex of the cone V = 0 and H and S are undefined, representing black; at the center of the top surface of the cone V = max and S = 0 with H undefined, representing white. When S = 1 and V = 1, any color represented by H is a pure color. When S = 0, the saturation is 0 and the color is at its lightest, described as gray; the brightness of the gray is determined by V, and H is meaningless. When V = 0, the color is at its darkest, described as black; in this case H and S are meaningless. Hue is a fundamental property of color that distinguishes different colors and can be used to describe and identify a particular color. The HSV model is very effective for segmenting a given color: the color distance, i.e. the numerical difference between two colors, is represented by the H and S components.
Illustratively, in this embodiment, when the indicator light of the FSU device emits green light when lit, the color to be extracted by the terminal device is green, and the HSV color threshold range preset in the terminal device is [35, 43, 46] to [77, 255, 255]; when the color of a pixel in the HSV indicator light image falls within this range, the terminal device extracts that pixel.
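A minimal sketch (illustrative, not the patent's implementation) of the RGB-to-HSV conversion and the green-range check: the standard-library `colorsys` conversion is rescaled to the OpenCV-style ranges implied by the embodiment's thresholds (H in [0, 180), S and V in [0, 255]):

```python
import colorsys

# Green HSV range stated in the embodiment (OpenCV-style scaling assumed).
GREEN_LOW, GREEN_HIGH = (35, 43, 46), (77, 255, 255)

def to_opencv_hsv(r, g, b):
    """Convert an 8-bit RGB pixel to HSV with H in [0,180), S and V in [0,255]."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return (round(h * 180), round(s * 255), round(v * 255))

def in_green_range(hsv):
    """True when every channel lies inside the preset color threshold range."""
    return all(lo <= c <= hi for c, lo, hi in zip(hsv, GREEN_LOW, GREEN_HIGH))
```

For example, a pure green pixel (0, 255, 255) maps to H = 60, which falls inside [35, 77], while a red pixel maps to H = 0 and is rejected.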
Further, in a possible embodiment, the step of performing color extraction on the HSV indicator image based on the preset color threshold range in step S20 to obtain a target indicator image includes:
step S201, traversing pixel points in the HSV indicator light image, and judging whether the color of the pixel points belongs to a preset color threshold range;
In this embodiment, after the terminal device converts the indicator light image into the HSV indicator light image, it traverses all the pixels in the HSV indicator light image and judges one by one whether the color of each pixel falls within the preset color threshold range. Converting the indicator light image from the RGB color space to the HSV color space improves the accuracy and precision of color recognition in the collected image.
Step S202, if the color of the pixel point belongs to the color threshold range, extracting the pixel point;
in this embodiment, the terminal device determines whether the color of each pixel point in the HSV indicator image accords with the color threshold range, and if the color of the pixel point accords with the color threshold range, the terminal device extracts the pixel point.
In another possible embodiment, if the color of the pixel does not meet the color threshold range, the terminal device filters the pixel.
And step S203, generating a target indicator lamp image according to the extracted pixel points.
In this embodiment, after the terminal device traverses all the pixels in the HSV indicator image, the extracted pixels meeting the color threshold range are integrated to be used as the target indicator image.
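Steps S201 to S203 can be sketched as follows (an illustrative pure-Python version, not the patent's code; in practice a vectorized mask such as OpenCV's `inRange` would be used). Pixels outside the threshold range are filtered by blacking them out, and the surviving pixels form the target indicator light image:

```python
def extract_target_pixels(hsv_image, low=(35, 43, 46), high=(77, 255, 255)):
    """Traverse an HSV image (2-D list of (h, s, v) tuples): keep pixels whose
    color falls inside [low, high], black out the rest."""
    out = []
    for row in hsv_image:
        out.append([
            px if all(l <= c <= h for c, l, h in zip(px, low, high)) else (0, 0, 0)
            for px in row
        ])
    return out
```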
And step S30, carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
In this embodiment, the terminal device extracts the contour of the target indicator light image, extracts the target indicator light contour image, calculates the area of the contour in the target indicator light contour image, and determines the working state of the FSU device corresponding to the acquired image according to the area of the contour.
In this embodiment, the number of the contours extracted by the terminal device in the target indicator light image may be one or more, and when there are multiple contours extracted in the image, the terminal device sequentially determines the areas of the respective contours.
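For a sense of how each contour's area can be computed from its boundary points, here is a minimal sketch (illustrative, not the patent's implementation) using the shoelace formula, which is essentially what contour-area routines such as OpenCV's `contourArea` evaluate for a simple polygon:

```python
def contour_area(points):
    """Area of a closed contour given as a list of (x, y) vertices,
    via the shoelace formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```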
Further, in a possible embodiment, the step of determining the working state of the FSU device corresponding to the acquired image in step S30 according to the area of the contour in the contour image of the target indicator includes:
Step S301, filtering contours whose area is smaller than a preset first area threshold in the target indicator light contour image to obtain the target contours;
In this embodiment, a first area threshold is preset in the terminal device. The terminal device calculates the area of the contours in the target indicator light contour image: when there is only one contour, it calculates that contour's area; when there are multiple contours, it calculates their areas one by one. The terminal device then compares each contour area with the first area threshold, and filters out any contour whose area is smaller than the first area threshold.
In this embodiment, the collected image is a real-time image of the FSU device captured in the field, and dust, debris, and other sources of interference may be present on the device in the image; filtering out contours smaller than the first area threshold prevents such impurities from being misidentified as indicator lights.
The first area threshold preset in the terminal device is the minimum area occupied by the light-emitting pixels of a single FSU indicator light when it is lit.
Step S302, judging whether the area of the target contour exceeds a preset second area threshold, wherein the second area threshold is larger than or equal to the first area threshold;
in this embodiment, after filtering the contours smaller than the first area threshold in the target indicator contour image, the terminal device determines whether the area of the remaining contours exceeds a preset second area threshold, where the second area threshold is greater than or equal to the first area threshold.
In this embodiment, the second area threshold preset in the terminal device is the minimum area occupied by the light-emitting pixels at the indicator light positions when the FSU device operates normally.
And step S303, if the area of the target contour exceeds the second area threshold, determining that the working state of the FSU equipment corresponding to the acquired image is a normal state.
In this embodiment, when the terminal device determines that the area of the target contour exceeds the second area threshold, it determines that the working state of the FSU device corresponding to the acquired image is a normal state, and if the area of the target contour does not exceed the second area threshold, it determines that the working state of the FSU device corresponding to the acquired image is an abnormal state.
For example, when the FSU device is in a normal state as long as at least one indicator light is on, the second area threshold may be equal to the first area threshold. When the FSU device requires at least three indicator lights to be on to be in a normal state, the second area threshold may be set to three times the first area threshold, and the terminal device compares the sum of the areas of the target contours with the second area threshold: when the sum of the areas of the target contours is greater than the second area threshold, the working state of the FSU device is a normal state.
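The decision rule of steps S301 to S303 can be sketched as follows (an illustrative reading of the embodiment, with hypothetical threshold values; the function names and the "normal"/"abnormal" strings are assumptions, not the patent's identifiers):

```python
def fsu_state(contour_areas, first_thresh, second_thresh):
    """Filter out contours below the first area threshold, then compare the
    total remaining area with the second threshold (second >= first)."""
    kept = [a for a in contour_areas if a >= first_thresh]
    return "normal" if sum(kept) > second_thresh else "abnormal"
```

With a first threshold of 100 and a second of 300 (three lights required), four contours of areas 5, 120, 130, and 140 drop the dust-sized contour of area 5, keep a total of 390, and report a normal state.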
Further, in a possible embodiment, the step of performing contour extraction on the target indicator image to obtain a target indicator contour image in step S30 includes:
step S304, identifying the outline edge in the target indicator lamp image based on a Canny edge detection algorithm;
in this embodiment, the terminal device identifies the contour edge in the target indicator image based on a Canny edge detection algorithm, where the Canny edge detection algorithm at least includes the following steps:
step a: and smoothing and removing noise by applying Gaussian filtering, wherein the Gaussian filter discretizes a Gaussian function, and substitutes corresponding abscissa indexes in the filter into the Gaussian function to obtain corresponding values, and the two-dimensional Gaussian function is as follows:
in the two-dimensional gaussian function, (x, y) is the coordinate, and σ is the standard deviation.
Step b: calculating gradient strength and direction: and calculating gradients in the horizontal direction and the vertical direction by adopting a Sobe l filter, wherein a gradient formula is as follows:
in the gradient formula, G x G is a gradient in the horizontal direction y Is a vertical gradient.
Step c: traversing all points on the gradient matrix, and reserving pixels with maximum values in the edge direction, wherein the calculation formula of the pixel values is as follows:
wherein M is T (x, y) is the pixel value remaining at edge (x, y), M (x, y) is the pixel value at edge (x, y), and T is the maximum value of the pixel at edge.
In addition, for better boundary detection, the Canny algorithm can also use a double-threshold method to determine possible boundaries: a weak edge pixel is retained only when a strong edge pixel exists among its 8-neighborhood pixels.
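The first two Canny steps above (discretizing the two-dimensional Gaussian into a filter, then computing Sobel gradients) can be sketched in NumPy; the kernel size and σ below are illustrative choices, and a full Canny implementation would add the non-maximum suppression and double-threshold steps:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.4):
    """Discretize G(x, y) = exp(-(x^2+y^2)/(2*sigma^2)) / (2*pi*sigma^2)
    into a normalized size-by-size smoothing filter (step a of Canny)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return g / g.sum()  # normalize so filtering preserves brightness

def sobel_gradients(img):
    """Horizontal/vertical Sobel gradients and their magnitude (step b)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()  # horizontal gradient G_x
            gy[i, j] = (win * ky).sum()  # vertical gradient G_y
    return gx, gy, np.hypot(gx, gy)   # magnitude sqrt(G_x^2 + G_y^2)
```

On a vertical step edge, Gx is non-zero while Gy vanishes, matching the roles of the two gradients in the formula above.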
And step S305, carrying out contour extraction on the target indicator lamp image according to the contour edge to obtain a target indicator lamp contour image.
In this embodiment, the terminal device extracts contours from the target indicator light image along the contour edges determined by the algorithm and takes the extracted result as the target indicator light contour image. The terminal device then determines the working state of the FSU device from the area of the contours in this image, which improves the accuracy of the working-state judgment of the FSU device.
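As a rough stand-in for contour extraction and area measurement (in practice a library routine such as OpenCV's findContours/contourArea would likely be used), the areas of connected bright regions in a binary mask can be measured as follows; the function name and the 0/1 mask format are assumptions for illustration:

```python
def contour_areas(mask):
    """Measure the area of each connected bright region in a binary mask,
    approximating contour extraction + area measurement on the target
    indicator-light contour image. mask: list of rows of 0/1 values."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    areas = []
    for sy in range(h):
        for sx in range(w):
            if mask[sy][sx] and not seen[sy][sx]:
                # flood-fill one 4-connected region, counting its pixels
                stack, area = [(sy, sx)], 0
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas
```

The resulting area list is exactly what the first/second area thresholds of steps S301–S303 are compared against.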
Further, in a possible embodiment, after step S30, the AI intelligent detection method for the working state of the FSU device of the present application further includes:
step S40: if the working state is a normal state, outputting normal prompt information;
step S50: if the working state is abnormal, outputting abnormal prompt information and debugging suggestions.
In this embodiment, after the terminal device determines the working state of the FSU device, it outputs the detection result to the quality inspector. Specifically, when the terminal device determines that the working state of the FSU device is normal, it outputs normal prompt information through the user interaction interface; when it determines that the working state is abnormal, it outputs abnormal prompt information and debugging suggestions through the same interface.
In another possible embodiment, the terminal device may also output the prompt information to the user terminal of the quality inspection personnel. The prompt information may take one or more forms such as text, voice, or a table, so that on receiving the detection result the quality inspection personnel can promptly adjust the FSU device accordingly, thereby improving quality inspection efficiency.
In the embodiment, the application identifies the position of the indicator lamp in the collected image based on a preset target identification model, and intercepts an area image where the position of the indicator lamp is located as an indicator lamp image; converting the indicator light image into an HSV indicator light image, and carrying out color extraction on the HSV indicator light image based on a preset color threshold range so as to obtain a target indicator light image; and carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
Specifically, unlike traditional methods for detecting the working state of FSU devices, the terminal device acquires an image of the FSU device through an external acquisition device, the acquired image containing the indicator lights of the FSU device. After acquiring the image, the terminal device identifies the position of the indicator lights in it based on a preset target recognition model and, using that model, crops the region of the acquired image where the indicator lights are located as the indicator light image. A color threshold range is preset in the terminal device: the device converts the cropped indicator light image into an HSV indicator light image, performs color extraction on it according to the threshold range, and takes the extracted result as the target indicator light image. Finally, the terminal device extracts contours from the target indicator light image to obtain the target indicator light contour image, calculates the area of the contours in that image, and determines the working state of the FSU device corresponding to the acquired image from that area.
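The HSV color-extraction step described above can be sketched with the standard-library `colorsys` module; the hue/saturation/value thresholds below are hypothetical values for a green indicator light, not the patent's preset range:

```python
import colorsys

def extract_target_pixels(rgb_pixels, h_range=(0.2, 0.45), s_min=0.4, v_min=0.5):
    """Keep only pixels whose HSV color falls inside a preset threshold
    range (hypothetical values roughly covering a green indicator light).

    rgb_pixels: iterable of (r, g, b) tuples with components in 0..255.
    Returns the pixels that pass the hue/saturation/value thresholds,
    mimicking the color extraction that yields the target image.
    """
    kept = []
    for r, g, b in rgb_pixels:
        # colorsys works on 0..1 components; hue is also in 0..1
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if h_range[0] <= h <= h_range[1] and s >= s_min and v >= v_min:
            kept.append((r, g, b))
    return kept
```

A bright green pixel (hue ≈ 1/3) passes, while red and dim gray pixels are rejected; a real implementation would apply the same test per pixel of the HSV indicator light image (e.g. with OpenCV's inRange).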
Therefore, the application identifies the position of the indicator lamp for collecting the image through the preset target identification model, then extracts HSV color from the identification area, and determines the working state of the FSU equipment by adopting the edge detection technology, thereby greatly improving the efficiency and accuracy of identifying the working state of the FSU equipment.
Further, based on the first embodiment of the AI intelligent detection method for the working state of the FSU device of the present application, a second embodiment of the AI intelligent detection method for the working state of the FSU device of the present application is provided.
In the present embodiment, in the above step S10: before the position of the indicator lamp in the collected image is identified based on the preset target identification model, the AI intelligent detection method of the working state of the FSU equipment further comprises the following steps:
step S60: and inputting training data into an initial target recognition model based on a YOLO target detection algorithm for training to obtain a target recognition model comprising a weight parameter file and a target detection frame, wherein the training data is an FSU device image of the position of the used detection frame marker indicator lamp.
In this embodiment, the terminal device takes a large number of FSU device images in which the indicator light positions have been marked with detection frames as training data and inputs them into an initial target recognition model based on the YOLO target detection algorithm. The initial model automatically extracts the data features in the training data, and after training a target recognition model comprising a weight parameter file and a target detection frame is obtained. The weight parameter file is used to identify the positions of the indicator lights in FSU device images, and the target detection frame, preferably rectangular, is used to mark the identified positions.
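A dataset configuration in the style used by common YOLO training frameworks might look like the following; all paths and the class name are hypothetical, since the patent does not specify them:

```yaml
# Hypothetical YOLO dataset layout for the indicator-light detector.
# Paths and class name are illustrative, not taken from the patent.
path: datasets/fsu_lights
train: images/train        # FSU photos with indicator lights boxed
val: images/val
names:
  0: indicator_light       # single class: the FSU indicator lamp
```

Each training image is paired with a label file listing one detection frame per marked indicator light, which matches the training data described in step S60.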
Based on this, in one possible embodiment, step S10 includes:
step S101: inputting an acquired image to the target recognition model;
step S102: and identifying the position of the indicator lamp in the acquired image according to the weight parameter file, and labeling the position of the indicator lamp through the target detection frame.
In this embodiment, the terminal device inputs the acquired image of the FSU device to be detected into the trained target recognition model. Based on the weight parameter file in the model, it recognizes the position of the indicator lights in the acquired image and marks that position with the target detection frame.
In this embodiment, the average similarity between the recognized in-frame image and the tag region in the target recognition model is displayed next to the target detection frame labeled with the position of the indicator light.
As shown in fig. 2, in this embodiment the terminal device first trains a model on collected pictures, then uses the trained model to identify the indicator light position of the device under test and crops the identified indicator light region. The cropped region is converted into the HSV color space and its HSV colors are extracted; contours are then identified with an edge detection algorithm and interference contours are filtered out by area, so that the overall working state of the FSU device is judged from the filtered contour picture.
In addition, in a feasible embodiment, the state detection method can also be applied to various indicator lamp judgment scenes, such as equipment alarm, equipment working state monitoring and the like.
Therefore, model training is performed with the YOLO target detection algorithm, which can rapidly and accurately identify a target area; the trained target detection model locates the indicator lights of the FSU device, improving the state-detection efficiency of the FSU device and reducing labor cost.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the application, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (7)

1. The AI intelligent detection method for the working state of the FSU equipment is characterized by comprising the following steps of:
identifying the position of an indicator lamp in the acquired image based on a preset target identification model, and intercepting an area image where the position of the indicator lamp is located as an indicator lamp image;
converting the indicator light image into an HSV indicator light image, and carrying out color extraction on the HSV indicator light image based on a preset color threshold range so as to obtain a target indicator light image;
and carrying out contour extraction on the target indicator lamp image to obtain a target indicator lamp contour image, and determining the working state of the FSU equipment corresponding to the acquired image according to the area of the contour in the target indicator lamp contour image.
2. The AI intelligent detection method for an FSU device operating state according to claim 1, wherein said step of determining the operating state of the FSU device corresponding to the acquired image according to the area of the outline in the outline image of the target indicator comprises:
filtering a contour with an area smaller than a preset first area threshold value in the contour image of the target indicator lamp to obtain a target contour;
judging whether the area of the target contour exceeds a preset second area threshold, wherein the second area threshold is larger than or equal to the first area threshold;
and if the area of the target contour exceeds the second area threshold, determining that the working state of the FSU equipment corresponding to the acquired image is a normal state.
3. The AI intelligent detection method for an FSU device operating state of claim 1, wherein prior to the step of identifying the position of the indicator light in the captured image based on the preset object identification model, the method further comprises:
and inputting training data into an initial target recognition model based on a YOLO target detection algorithm for training to obtain a target recognition model comprising a weight parameter file and a target detection frame, wherein the training data is an FSU device image of the position of the used detection frame marker indicator lamp.
4. The AI-intelligent detection method for an operating state of an FSU device according to claim 3, wherein the step of identifying the position of the indicator light in the captured image based on the preset object identification model comprises:
inputting an acquired image to the target recognition model;
and identifying the position of the indicator lamp in the acquired image according to the weight parameter file, and labeling the position of the indicator lamp through the target detection frame.
5. The AI intelligent detection method for the FSU device working state of claim 1, wherein the step of performing color extraction on the HSV indicator light image based on the preset color threshold range to obtain a target indicator light image includes:
traversing pixel points in the HSV indicator light image, and judging whether the color of the pixel points belongs to a preset color threshold range or not;
if the color of the pixel point belongs to the color threshold range, extracting the pixel point;
and generating a target indicator lamp image according to each extracted pixel point.
6. The AI intelligent detection method for an FSU device operating state of claim 1, wherein said step of performing contour extraction for said target indicator image to obtain a target indicator contour image comprises:
identifying a contour edge in the target indicator image based on a Canny edge detection algorithm;
and carrying out contour extraction on the target indicator lamp image according to the contour edge to obtain a target indicator lamp contour image.
7. The AI intelligent detection method of the operating state of the FSU device according to any one of claims 1 to 6, further comprising, after the step of determining the operating state of the FSU device corresponding to the acquired image from the area of the outline in the target indicator outline image:
if the working state is a normal state, outputting normal prompt information;
if the working state is abnormal, outputting abnormal prompt information and debugging suggestions.
CN202310782184.0A 2023-06-28 2023-06-28 AI intelligent detection method for FSU equipment working state Pending CN116863333A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310782184.0A CN116863333A (en) 2023-06-28 2023-06-28 AI intelligent detection method for FSU equipment working state

Publications (1)

Publication Number Publication Date
CN116863333A true CN116863333A (en) 2023-10-10

Family

ID=88224427

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310782184.0A Pending CN116863333A (en) 2023-06-28 2023-06-28 AI intelligent detection method for FSU equipment working state

Country Status (1)

Country Link
CN (1) CN116863333A (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107067595A (en) * 2017-04-28 2017-08-18 南京国电南思科技发展股份有限公司 State identification method, device and the electronic equipment of a kind of indicator lamp
CN107194395A (en) * 2017-05-02 2017-09-22 华中科技大学 A kind of object dynamic positioning method based on colour recognition and contours extract
CN111259892A (en) * 2020-01-19 2020-06-09 福建升腾资讯有限公司 Method, device, equipment and medium for inspecting state of indicator light
CN111639620A (en) * 2020-06-08 2020-09-08 深圳航天智慧城市系统技术研究院有限公司 Fire disaster analysis method and system based on visible light image recognition
CN112560816A (en) * 2021-02-20 2021-03-26 北京蒙帕信创科技有限公司 Equipment indicator lamp identification method and system based on YOLOv4
CN113345036A (en) * 2021-05-24 2021-09-03 广西电网有限责任公司电力科学研究院 HSV (hue, saturation, value) feature transformation based indicator lamp state identification method
CN114066862A (en) * 2021-11-22 2022-02-18 福建省海峡智汇科技有限公司 Indicator identification method and system based on color gamut and contour characteristics
CN114757962A (en) * 2022-04-18 2022-07-15 智洋创新科技股份有限公司 Overhead line forest fire identification method
CN114820676A (en) * 2022-05-30 2022-07-29 深圳市科荣软件股份有限公司 Equipment running state identification method and device
CN115294352A (en) * 2022-06-30 2022-11-04 慈溪市输变电工程有限公司 Intelligent switch cabinet state identification system and method based on image identification
CN115642877A (en) * 2022-11-04 2023-01-24 重庆中电自能科技有限公司 Photovoltaic module abnormal shielding detection method and system based on deep learning
CN115760684A (en) * 2022-09-22 2023-03-07 超聚变数字技术有限公司 Method for adjusting test threshold interval of mainboard LED lamp, test method and device
CN116052041A (en) * 2022-12-16 2023-05-02 北方工业大学 Indicating lamp state identification method based on depth network


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU CHUNHUI ET AL.: "Guide to the Use of Automotive Diagnostic and Testing Instruments and Equipment", 31 August 2019, Beijing: China Machine Press, pages: 43 - 45 *

Similar Documents

Publication Publication Date Title
CN111275759B (en) Transformer substation disconnecting link temperature detection method based on unmanned aerial vehicle double-light image fusion
CN109271844B (en) Electrical cabinet electrical symbol recognition method based on OpenCV
CN101599175B (en) Detection method for determining alteration of shooting background and image processing device
CN107506760A (en) Traffic signals detection method and system based on GPS location and visual pattern processing
CN107392116A (en) A kind of indicator lamp recognition methods and system
CN112364740B (en) Unmanned aerial vehicle room monitoring method and system based on computer vision
CN105844242A (en) Method for detecting skin color in image
CN110569774B (en) Automatic line graph image digitalization method based on image processing and pattern recognition
CN113034624A (en) Temperature early warning image identification method, system, equipment and storage medium based on temperature sensing color-changing adhesive tape
CN108154496B (en) Electric equipment appearance change identification method suitable for electric power robot
CN113688817A (en) Instrument identification method and system for automatic inspection
CN112598674B (en) Image processing method and device for vehicle and vehicle
KR102366396B1 (en) RGB-D Data and Deep Learning Based 3D Instance Segmentation Method and System
CN111401341A (en) Deceleration strip detection method and device based on vision and storage medium thereof
CN113033548A (en) Image recognition method for unit running state judgment and monitoring point alignment
CN115631350B (en) Method and device for identifying colors of canned image
CN116863333A (en) AI intelligent detection method for FSU equipment working state
KR20210042563A (en) Apparatus and method for separating foreground from background
CN112418226B (en) Method and device for identifying opening and closing states of fisheyes
CN115619725A (en) Electronic component detection method and device, electronic equipment and automatic quality inspection equipment
CN112488031A (en) Safety helmet detection method based on color segmentation
CN112364693A (en) Barrier identification method, device and equipment based on binocular vision and storage medium
CN112507838A (en) Pointer meter identification method and device and power inspection robot
CN111079713A (en) Method for extracting pedestrian color features and terminal equipment
CN112101107A (en) Intelligent identification method for intelligent network combined model type vehicle-in-loop simulation traffic signal lamp

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination