CN112287915B - Equipment fault warning method and system based on deep learning

Equipment fault warning method and system based on deep learning

Info

Publication number
CN112287915B
CN112287915B (application number CN202011572332.9A)
Authority
CN
China
Prior art keywords
parameter
equipment
parameters
picture
machine room
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011572332.9A
Other languages
Chinese (zh)
Other versions
CN112287915A (en)
Inventor
张蔓琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing mengpa Xinchuang Technology Co., Ltd
Shanghai mengpa Intelligent Technology Co.,Ltd.
Original Assignee
Beijing Mengpa Xinchuang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Mengpa Xinchuang Technology Co ltd filed Critical Beijing Mengpa Xinchuang Technology Co ltd
Priority to CN202011572332.9A priority Critical patent/CN112287915B/en
Publication of CN112287915A publication Critical patent/CN112287915A/en
Application granted granted Critical
Publication of CN112287915B publication Critical patent/CN112287915B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The invention relates to an equipment fault warning method and system based on deep learning. The method comprises the following steps: configuring relevant parameters of the inspection robot; driving the inspection robot to acquire machine room field image data based on the configured parameters; storing the acquired machine room field image data in a database; reading a plurality of collected machine room field images from the database and preprocessing them; training and establishing a deep learning algorithm model; and controlling the inspection robot in real time based on the trained model to determine whether the state of the equipment in the machine room is abnormal. Compared with traditional schemes, the scheme of the invention performs image recognition through a deep learning algorithm, so that problems can be rapidly classified and accurately identified, the fault location period is shortened, and problem-solving efficiency is greatly improved.

Description

Equipment fault warning method and system based on deep learning
Technical Field
The invention relates to the technical field of machine room equipment monitoring, in particular to an equipment fault warning method and system based on deep learning.
Background
In many enterprises, IT operation and maintenance work relies mainly on in-house maintenance teams that perform periodic manual inspections. The inspection period is long, the inspection results depend on the personal experience of the maintenance personnel, and problems cannot be found in a timely and effective manner. In a manual inspection mode, even when a problem is found, it cannot be accurately located in time, which delays repair and resolution. In addition, most enterprises rely on the technical maintenance personnel of the vendors who sold the equipment and systems to locate problems and repair failures, and the period from finding a problem to the vendor's maintenance personnel locating it is also relatively long. None of the above approaches can quickly locate a problem based on historical big data and provide a repair suggestion.
With the rapid expansion of internet data scale and the increasingly complex and diversified IT service types, the traditional IT operation and maintenance automation system based on manually formulated rules is gradually becoming inadequate. Its bottleneck is the human brain: an expert engaged in IT operation and maintenance for a long time must manually summarize repeated and traceable phenomena to form rules. This simple, human-based approach to rule making is increasingly unable to cope with the ever more complex state of IT operation and maintenance.
Disclosure of Invention
In order to solve some of the problems in the prior art, the invention provides an equipment fault warning method based on deep learning, which captures images at different angles to avoid the special conditions that occur during data collection, and analyzes the indicator lights in the image data through an image recognition algorithm. By classifying the data obtained by the image recognition algorithm, the exact position of abnormal data (i.e., faulty equipment/components) can be located, so that machine room managers can promptly formulate a corresponding handling scheme based on the analysis result.
In one aspect, the present disclosure provides an apparatus fault warning method based on deep learning, including the following steps:
configuring parameters of the inspection robot, including configuring equipment indicator light U number information;
driving the inspection robot to perform machine room field picture acquisition based on the configuration parameters;
storing the collected field pictures of the plurality of machine rooms into a database;
reading, from the database, a plurality of collected machine room field pictures of the equipment, and preprocessing the machine room field pictures;
training a deep learning algorithm model, comprising:
extracting the features of the preprocessed field pictures of the plurality of machine rooms through a trunk feature extraction network to obtain a shared feature layer;
intercepting a local feature layer through convolution and a shared feature layer, and then performing regional pooling on the intercepted local feature layer; and
performing classification prediction and regression prediction on all the pooled local feature layers so as to predict whether each local feature layer contains an indicator light; and
and controlling the inspection robot in real time based on the trained model to determine whether the state of the equipment in the machine room is abnormal.
In one embodiment, parameters of the inspection robot are configured, and the parameters at least comprise: pan-tilt/lifter detection points, camera parameters, component operation rules and alarm rules.
In another embodiment, the position of the analysis area in the machine room field picture is confirmed by configuring the parameters of the inspection robot.
In one embodiment, the equipment indicator light U number information configuration comprises:
reserving allowance for the analysis area according to the machine room field picture to select U number information of the equipment indicator lamps;
acquiring algorithm parameters according to the position of the actual indicator light in the picture;
and assigning values to the algorithm parameters, and confirming the position of the analysis area in the picture through the algorithm parameter values so as to be used for positioning pixel position points in the picture.
In another embodiment, the algorithm parameters include: the device end U number parameter, the device start U number parameter, the x-axis end pixel parameter, the y-axis end pixel parameter, the x-axis start pixel parameter and the y-axis start pixel parameter.
In one embodiment, the pan/tilt/boom detection point setting comprises: and setting the tripod head or lifting rod detection point, wherein the tripod head detection point comprises a video camera and thermal imaging, and the lifting rod detection point comprises a camera.
In another embodiment, the video camera/camera parameter configuration includes setting parameters through a parameter configuration page, switching the exposure mode to manual and setting the shutter parameter value; the larger the shutter parameter value, the longer the exposure time and the brighter the captured picture.
In one embodiment, the component operation rule configuration comprises: selecting the equipment and the equipment indicator lamp U number information on a parameter configuration page according to the real-time view picture of the inspection robot, and designating the components to set the component operation rules.
In another embodiment, the alarm rule configuration comprises: and setting alarm index rules aiming at the setting component.
In another aspect, the present disclosure provides a system using the aforementioned device fault warning method and its various embodiments, comprising:
a parameter configuration module for configuring parameters of the inspection robot, the parameters at least including: the method comprises the following steps of (1) equipment indicator lamp U number information, pan-tilt/lifting rod detection points, camera parameters, component operation rules and alarm rules;
the image acquisition module is used for driving the inspection robot to carry out on-site image data acquisition of a machine room based on the parameters;
the database is used for storing the acquired field image data of the plurality of machine rooms;
the preprocessing module is used for reading the collected field image data of the plurality of machine rooms from the database and preprocessing the field image data of the plurality of machine rooms;
a model training module for training a deep learning algorithm model, comprising:
the feature extraction module is used for extracting features of the preprocessed on-site image data of the plurality of machine rooms through a main feature extraction network to obtain a shared feature layer;
the pooling module is used for intercepting the local feature layer through the convolution and shared feature layer and then pooling the intercepted local feature layer in a subarea manner; and
the prediction module is used for carrying out classification prediction and regression prediction on all the pooled local feature layers so as to predict whether each local feature layer contains an indicator light; and
and the control module is used for controlling the inspection robot in real time based on the trained model so as to determine whether the state of the equipment in the machine room is abnormal.
The invention can realize relatively accurate image positioning through a deep learning algorithm. Compared with traditional methods, it greatly shortens the period for recognizing and locating problems, markedly improves the problem locating and solving efficiency of machine room management personnel, and is of great significance for guaranteeing the service quality of large-scale internet services.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present disclosure will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the present disclosure are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar or corresponding parts and in which:
FIG. 1 is a flow chart illustrating a deep learning based device failure warning method according to an embodiment of the present invention;
FIG. 2 is a flow diagram illustrating a deep learning algorithm according to an embodiment of the invention; and
fig. 3 is a schematic diagram illustrating a system using a deep learning based device alert method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention will be described in further detail with reference to the accompanying drawings, and it is apparent that the described embodiments are only a part of the embodiments of the present invention, not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, and "a plurality" typically includes at least two.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present invention to describe various elements, these elements should not be limited by these terms. The terms are used only to distinguish one element from another. For example, a first element could also be termed a second element, and similarly a second element could also be termed a first element, without departing from the scope of embodiments of the present invention.
It should be understood that the term "and/or" as used herein merely describes an association between related objects, indicating that three relationships may exist; for example, A and/or B may mean that A exists alone, that A and B exist simultaneously, or that B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to determining" or "in response to detecting", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that an article or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such article or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) ..." does not exclude the presence of other like elements in the article or device that includes the element.
Alternative embodiments of the present invention are described in detail below with reference to the accompanying drawings.
The invention relates to an equipment fault alarm analysis method based on a deep learning model, which can accurately identify an equipment or component alarm indicator lamp and locate the position of the specific equipment or component raising the alarm. The basic idea is as follows: the image data of the equipment or components collected by the inspection robot is analyzed and processed through the deep learning model, and equipment or component alarms can be identified and classified, thereby realizing alarm identification and fault location in large-scale equipment.
Fig. 1 is a flow chart illustrating a deep learning based device failure alerting method 100 according to an embodiment of the present invention. As shown in fig. 1, in step S110, the method 100 configures parameters of the inspection robot, including configuring the equipment indicator lamp U number information. The parameters may further include at least: pan-tilt/lifting rod detection point settings, camera parameters, component operation rules and alarm rules. The positions of the analysis areas in the collected machine room field pictures can be confirmed by configuring the parameters of the inspection robot.
In one embodiment, the pan-tilt/lifting rod detection points may be configured according to the height of the detection points. For example, a camera and thermal imaging may be arranged at the pan-tilt for acquiring device image data and associated thermal imaging data below 1.5 meters, and a camera may be arranged at the lifting rod for acquiring image data of equipment above 1.5 meters. The parameters of the video camera or camera of the pan-tilt or lifting rod can then be configured. In an application scenario, camera parameters (such as exposure and brightness) can be set through a parameter configuration page of the background client. The exposure mode may be set to manual and the shutter parameter value may be set: the larger the shutter parameter value, the longer the exposure time and the brighter the image frame.
Further, parameter configuration can be carried out for the equipment indicator lamp U number information. In one embodiment, this configuration can be realized by selecting the equipment indicator lamp U number information from the real-time field picture of the machine room to be inspected, reserving a relevant margin for the analysis area in combination with empirical values. In an application scenario, the real-time picture of the pan-tilt/lifting rod video camera or camera of the inspection robot may be used, and the equipment indicator lamp U number information of the analysis area of the real-time picture (which may be regarded as the height information of the equipment indicator lamp) may be determined from empirical values according to the chassis height of the inspection robot and the equipment height.
In one embodiment, the algorithm parameters (denoted "u_params") may be obtained based on the actual indicator light position in the image data (i.e., the picture). The algorithm parameters may include a device end U number parameter (denoted "end_U"), a device start U number parameter (denoted "start_U"), an x-axis end pixel parameter (denoted "end_imgCols"), a y-axis end pixel parameter (denoted "end_imgRows"), an x-axis start pixel parameter (denoted "start_imgCols"), and a y-axis start pixel parameter (denoted "start_imgRows"). These six parameters may then be assigned values. The two parameters "start_U" and "end_U" may be used to locate the U number information of the preset point (i.e., analysis area) equipment, and the other four parameters "start_imgCols", "start_imgRows", "end_imgCols" and "end_imgRows" are used to confirm the location of the analysis area in the picture. The position of the analysis area (or area to be detected) in the picture can be confirmed by these parameter values. For example, the pixel position points in the picture can be located by moving the mouse using the box-drawing function of the drawing tool.
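For illustration, the snippet below sketches how these six parameters might be held and used to cut the analysis area out of a captured picture. It is only a sketch: the numeric values are examples, OpenCV is an assumed image library, and the helper name crop_analysis_area and the file name are hypothetical rather than part of the patent.

```python
# Minimal sketch of how the six u_params values could locate the analysis
# area in a captured picture. Values are examples only.
import cv2  # OpenCV, assumed available for image handling

u_params = {
    "start_U": 12,          # device start U number (example value)
    "end_U": 14,            # device end U number (example value)
    "start_imgCols": 420,   # x-axis start pixel of the analysis area
    "start_imgRows": 310,   # y-axis start pixel of the analysis area
    "end_imgCols": 980,     # x-axis end pixel of the analysis area
    "end_imgRows": 520,     # y-axis end pixel of the analysis area
}

def crop_analysis_area(picture, params):
    """Cut the analysis (to-be-detected) area out of a machine room picture."""
    return picture[params["start_imgRows"]:params["end_imgRows"],
                   params["start_imgCols"]:params["end_imgCols"]]

picture = cv2.imread("machine_room_frame.jpg")   # hypothetical file name
roi = crop_analysis_area(picture, u_params)
```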
After the equipment indicator lamp U number information parameters are set, the component operation rules can be configured. In an application scenario, the equipment information and the equipment indicator lamp U number information can be selected on a parameter configuration page according to the current real-time view picture of the inspection robot, and the component information can be specified to set the component operation rules. Optionally, alarm rules may be configured, which includes alarm indicator rule configuration for the set components. For example, in a component indicator light color rule, a red light indicates a component hardware failure, a yellow light indicates a component software failure, and a blue or green light indicates that the component is operating properly. The determination conditions or values of the index rules can be user-defined according to application requirements, and the disclosure is not limited in this respect.
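As a simple illustration of such a color rule, the mapping below encodes the example given above; the alert texts and the fallback message are illustrative assumptions, not a fixed interface.

```python
# Illustrative mapping of the indicator-light color rule described above.
INDICATOR_RULES = {
    "red":    "component hardware failure",
    "yellow": "component software failure",
    "blue":   "component operating normally",
    "green":  "component operating normally",
}

def evaluate_indicator(color: str) -> str:
    """Return the alarm meaning for a detected indicator-light color."""
    return INDICATOR_RULES.get(color, "unknown state - manual check required")
```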
Next, at step S120, the method 100 drives the inspection robot to acquire machine room field image data based on the preset parameters. By configuring the detection point parameter information, the inspection robot can be controlled to collect various information data of the machine room, for example the positions of cabinets in the room and images of equipment and component indicator lights from various angles. At step S130, the method 100 stores the several device field images collected by the inspection robot into a database to form a raw data set. At step S140, the method 100 may read the collected machine room field image data from the database and preprocess it. In one embodiment, the collected image data may be read from the database and preprocessed to ensure that the characteristics (e.g., color distribution, size, or brightness) of each image are as consistent as possible.
Further, the method 100 proceeds to step S150, training a deep learning algorithm model, including: step S151, step S152, and step S153. In step S151, the method 100 may perform feature extraction on the preprocessed several machine room field image data through a backbone feature extraction network to obtain a shared feature layer. In one embodiment, in the process of extracting features from the preprocessed image data (picture), the target (indicator light) may be extracted and separated from the high-dimensional feature space by using a method such as a color moment method, a model method, a geometric parameter method, and the like. Further, feature extraction may be performed through a main feature extraction Network, such as VGG ("Visual Geometry Group"), ResNet ("Residual neural Network"), and the like, to obtain a shared feature layer. Next, at step S152, the method 100 may intercept the local feature layer through convolution (e.g., bottleneck convolution method) and the shared feature layer, and perform regional pooling on the intercepted local feature layer. At step S153, the method 100 may perform classification prediction and regression prediction on all the pooled local feature layers, thereby predicting whether each local feature layer contains an indicator light.
Finally, at step S160, the method 100 may control the inspection robot in real time based on the trained model to determine whether the state of the equipment in the machine room is abnormal. In an application scenario, the output of the trained model can be used to process the feature values of the identified image data for matching and positioning, deployment of an online module, and visual display. Specifically, a timed task may be set to execute a defined data processing method. The feature values of the image data are fed into a defined classifier to identify the category of the image data (such as a red indicator light or a yellow indicator light). After the target (i.e., the indicator light) is identified, it may be located using a model (e.g., a description generation model). After the target is located, the position and result of the indicator light can be displayed by deploying the online module, so that trend analysis and prediction of the importance of each machine room environment index are realized, which facilitates effective management and application of the data. Furthermore, the data analysis result is displayed on a visual interface, and faulty components can be quickly found, diagnosed and located through the analysis result, thereby improving overall operation and maintenance efficiency.
FIG. 2 is a flow diagram illustrating a deep learning algorithm 200 according to an embodiment of the invention. Fig. 2 is an exemplary description of a process of the deep learning algorithm based on the contents of the deep learning algorithm model established in fig. 1. In view of this, the related art details described in fig. 1 also apply to the contents of fig. 2.
As shown in fig. 2, at step S140, the algorithm 200 performs a preprocessing operation on the collected image data (pictures) obtained from the database. In one embodiment, the pictures can be screened for sharpness and color fidelity, for example by rejecting pictures in which the indicator light is unclear or its color is distorted. The pictures are then cropped and annotated using dedicated software (such as labelImg, an annotation tool) to complete the preprocessing. For example, for an input picture of size 1200x1800, the preprocessing operation can fix the short side of the captured picture to 600, i.e., the original picture is resized to 600x900 without distortion. The picture preprocessing operation ensures picture quality and reduces subsequent computational overhead.
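A minimal sketch of this distortion-free resize is given below, assuming OpenCV as the image library; only the aspect-ratio-preserving scale to a 600-pixel short side described above is shown.

```python
# Sketch of the distortion-free resize: the short side is fixed to 600 pixels
# and the long side is scaled by the same factor, so 1200x1800 becomes 600x900.
import cv2

def resize_short_side(picture, target=600):
    h, w = picture.shape[:2]
    scale = target / min(h, w)
    return cv2.resize(picture, (int(round(w * scale)), int(round(h * scale))))
```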
Next, in step S151, the algorithm 200 performs feature extraction on the preprocessed machine room field image data through the backbone feature extraction network to obtain a shared feature layer. In one application scenario, a ResNet50 network may be utilized to perform the backbone feature extraction operation. ResNet50 includes two basic blocks, a convolutional residual block and an identity residual block. The convolutional residual block can be used to modify the dimensionality of the network, and the identity residual block is used to deepen the network. In the process of extracting the backbone features using the convolutional residual blocks and identity residual blocks, the original picture can be cropped after being resized without distortion. The picture length and width dimensions undergo four changes. For example, the picture size after the preprocessing operation is 600x900, and the picture size then changes successively to 600x600, 150x150, 75x75 and 38x38. The last output (i.e., 38x38) is the shared feature layer.
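For reference, the sketch below shows one way to obtain such a 38x38 shared feature layer from a 600x600 input with a ResNet50 backbone truncated after its third residual stage. The use of PyTorch/torchvision is an assumption for illustration; the patent does not name a framework.

```python
# Sketch of extracting the shared feature layer with a ResNet50 backbone
# (torchvision is assumed; argument names may vary with the library version).
import torch
import torchvision

backbone = torchvision.models.resnet50(weights=None)  # no pretrained weights
# Keep the layers up to the third residual stage (overall stride 16), which
# maps a 600x600 input to a roughly 38x38 feature map.
feature_extractor = torch.nn.Sequential(
    backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
    backbone.layer1, backbone.layer2, backbone.layer3,
)

x = torch.randn(1, 3, 600, 600)          # one preprocessed picture
shared_feature_layer = feature_extractor(x)
print(shared_feature_layer.shape)        # torch.Size([1, 1024, 38, 38])
```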
After obtaining the shared feature layer, at step S1521, the algorithm 200 may predict whether the indicator light is actually contained in the shared feature layer (picture frame) by a convolution method (e.g., bottleneck convolution structure) to obtain a plurality of suggestion boxes. Those skilled in the art will appreciate that the suggested boxes are coarse filters of objects in the picture, and that the suggested boxes are not uniform in size. Next, at step S1522, the algorithm 200 transmits the shared feature layer and the suggestion box together to the pooling layer, and may perform feature truncation on the shared feature layer by using the obtained plurality of suggestion boxes to obtain a plurality of local feature layers. For example, the shared feature layer is a picture with a size of 38 × 38, and after feature extraction is performed through a suggestion box, a picture with a size of 14 × 14 (a local feature layer) is obtained. Note that the sizes of the plurality of local feature layers obtained at this time are also not uniform. Further, in step S1523, the algorithm 200 may combine the plurality of local feature layers obtained after feature extraction with the feature region pooling layer to perform regional pooling, where the sizes of the local feature layers obtained after pooling are the same. Alternatively, after step S151 is executed, in a case where it is determined that the obtained shared feature layer actually includes an indicator light, step S1521 and step S1522 may not be executed, and step S1523 may be directly executed, so that the image data of the shared feature layer is directly combined with the feature region pooling layer to perform pooling operation, thereby improving the calculation efficiency.
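The regional pooling step can be pictured with the short sketch below, in which proposal boxes of unequal sizes are cut from the shared feature layer and pooled to a common 14x14 size; the box coordinates and the use of torchvision.ops.roi_pool are illustrative assumptions.

```python
# Sketch of regional pooling: proposal boxes of different sizes are cut from
# the shared feature layer and pooled to one common output size.
import torch
from torchvision.ops import roi_pool

shared_feature_layer = torch.randn(1, 1024, 38, 38)
# Proposal boxes in (batch_index, x1, y1, x2, y2) form, in input-image pixels.
proposals = torch.tensor([[0.,  40.,  60., 260., 300.],
                          [0., 310., 120., 560., 420.]])
# spatial_scale maps image coordinates onto the 38x38 feature map (600 -> 38).
local_feature_layers = roi_pool(shared_feature_layer, proposals,
                                output_size=(14, 14), spatial_scale=38 / 600)
print(local_feature_layers.shape)   # torch.Size([2, 1024, 14, 14])
```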
After the pooling operation, the algorithm 200 proceeds to step S153, which includes two steps S1531 and S1532. The two steps are each executed in a fully connected layer and perform regression prediction and classification prediction, respectively. At step S1531, the algorithm 200 performs regression prediction on the pooled local feature layers. The aforementioned suggestion boxes may be adjusted through regression prediction to obtain the final prediction boxes and their positions on the original image. It will be appreciated by those skilled in the art that the suggestion boxes obtained after pooling are only a rough screening; the results are not necessarily accurate and require further optimization. When predicting the box around an object, the positioning of the box is inaccurate, i.e., there are errors in the box position. Because of these errors, regression prediction can fine-tune the box positions by a linear regression method so that the predicted boxes come closer to the real boxes; that is, linear regression makes the predicted value approach the true value as closely as possible. The linear regression can be represented by the following formula:
Y = WX
where X is the n-dimensional feature vector (x_1, x_2, x_3, ..., x_n) of a given input, and W = (w_1, w_2, w_3, ..., w_n) is the set of parameters to be learned.
Assuming that A represents the original box (i.e., the predicted box) and G represents the target box (i.e., the real box), a relationship is sought such that the input original box A is mapped to a regression box G' that is closer to the real box G. A four-dimensional vector (x, y, w, h) is typically used to represent a window (i.e., box), where x and y are the abscissa and ordinate of the window center point, w is the window width, and h is the window height. The coordinates AA of the original box A and the coordinates GT of the target box G may be expressed as:
AA = (A_x, A_y, A_w, A_h)
GT = (G_x, G_y, G_w, G_h)
where A_x and A_y are the abscissa and ordinate of the center point of the original box A, A_w is the width of the original box A, and A_h is the height of the original box A; G_x and G_y are the abscissa and ordinate of the center point of the target box G, G_w is the width of the target box G, and G_h is the height of the target box G.
A transformation F needs to be found such that:
F(A_x, A_y, A_w, A_h) = (G_x', G_y', G_w', G_h')
where G_x' and G_y' are the abscissa and ordinate of the center point of the regression box G', G_w' is the width of the regression box G', and G_h' is the height of the regression box G'.
Note that (G_x', G_y', G_w', G_h') ≈ (G_x, G_y, G_w, G_h). In this disclosure, the shared feature layer may be taken as the input X and denoted Φ. The transformation between the coordinates AA of the original box A fed in during training and the coordinates GT of the target box G, i.e., the translation and scaling of the original box relative to the target box, is denoted (t_x, t_y, t_w, t_h). According to the linear regression method described above, four objective functions are output: d_x(A), d_y(A), d_w(A) and d_h(A). Each objective function can be obtained by:
d_*(A) = W_*^T · Φ(A)
where Φ(A) is the feature vector composed of the shared feature layer corresponding to the original box A, W_* is the parameter to be learned, and d_*(A) is the resulting predicted value (* may represent x, y, w or h, and each transformation corresponds to one objective function).
Specifically, the abscissa G_x' and ordinate G_y' of the center point of the regression box G' are first obtained by translation:
G_x' = A_w · d_x(A) + A_x
G_y' = A_h · d_y(A) + A_y
Then scaling is applied to obtain the width G_w' and height G_h' of the regression box G':
G_w' = A_w · exp(d_w(A))
G_h' = A_h · exp(d_h(A))
In order to minimize the difference between the objective function d_*(A) (i.e., the predicted value) and the true value t_*, the loss function Loss can be designed as:
Loss = Σ_{i=1}^{N} (t_i − W^T · Φ(A_i))²
where an N-dimensional feature vector is input, N is a positive integer, and i = 1, 2, ..., N. Feature selection can then be performed by regularizing the loss function (i.e., optimizing the objective) to avoid model overfitting. The optimization objective of the loss function is:
W_* = argmin_W [ Σ_{i=1}^{N} (t_i − W^T · Φ(A_i))² + λ · ‖W‖² ]
where λ is a hyper-parameter and argmin denotes the value of the variable that minimizes the bracketed expression. The transformation t may be expressed as:
t_x = (x − x_a) / w_a,  t_y = (y − y_a) / h_a
t_w = log(w / w_a),  t_h = log(h / h_a)
where (t_x, t_y, t_w, t_h) is the transformation between the coordinates AA of the original box A and the coordinates GT of the target box G, (x, y, w, h) is the four-dimensional coordinate vector of the positioning box, x_a and y_a are the abscissa and ordinate of the center point of the original box A, w_a is the width of the original box A, and h_a is the height of the original box A. Further, W may be obtained by gradient descent or the least squares method.
According to the scheme of the present disclosure, in order to make the error between the original box (predicted box) A and the target box (real box) G as small as possible, the original box A may be fine-tuned through regression prediction. First, the feature Φ(A) is extracted, then the change d_*(A) = W_*^T · Φ(A) is predicted, and finally the original box is translated and scaled to obtain G_x', G_y', G_w', G_h'. This completes the regression of the box and corrects its position.
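To make the box regression above concrete, the small numeric sketch below computes the targets (t_x, t_y, t_w, t_h) for an example original box A and target box G, then applies them as predicted offsets to recover the regression box; the box values are made-up examples and NumPy is assumed for the arithmetic.

```python
# Numeric sketch of the box regression described above. Boxes are given as
# (center x, center y, width, height); the values are examples only.
import numpy as np

def regression_targets(A, G):
    """Compute (tx, ty, tw, th) between original box A and target box G."""
    ax, ay, aw, ah = A
    gx, gy, gw, gh = G
    return np.array([(gx - ax) / aw, (gy - ay) / ah,
                     np.log(gw / aw), np.log(gh / ah)])

def apply_offsets(A, d):
    """Translate and scale box A by offsets d = (dx, dy, dw, dh)."""
    ax, ay, aw, ah = A
    dx, dy, dw, dh = d
    return np.array([aw * dx + ax, ah * dy + ay,
                     aw * np.exp(dw), ah * np.exp(dh)])

A = np.array([100., 120., 80., 60.])   # original (predicted) box
G = np.array([110., 125., 90., 66.])   # target (real) box
t = regression_targets(A, G)
print(apply_offsets(A, t))             # recovers G: [110. 125.  90.  66.]
```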
Next, at step S1532, the algorithm 200 performs classification prediction on the result of the regression prediction. Classification prediction determines the category of the object inside the prediction box and classifies the finally obtained picture, i.e., determines whether the inside of the prediction box really contains an indicator light. In classification prediction, the probability of a category is calculated using softmax. softmax is a classifier, and the calculation formula for multi-class softmax is:
S_i = e_i / Σ_j e_j
where each suggestion box (i.e., local feature layer) obtained after pooling corresponds to an array, i denotes the i-th element in the array, S_i denotes the softmax value of that element, e_i denotes the exponential of the i-th element, j ranges over the elements of the array, and e_j denotes the exponential of the j-th element.
The element with the maximum S_i value in the array determines the classification of the suggestion box. In the multi-class case, selecting the few elements with the largest S_i probabilities is sufficient. In addition, it can be understood by those skilled in the art that the aforementioned two steps S1531 and S1532 are not limited by the described order, but can be performed in other orders. Finally, at step S210, the algorithm 200 outputs the training result of the deep learning described above.
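As a small illustration of this classification step, the sketch below applies softmax to example class scores for one pooled local feature layer and picks the class with the largest probability; the class names and score values are assumptions for the example.

```python
# Softmax over example class scores for one suggestion box, followed by
# selecting the most probable class. Class names are illustrative only.
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))   # subtract the max for numerical stability
    return e / e.sum()

classes = ["background", "red_indicator", "yellow_indicator", "green_indicator"]
scores = np.array([0.3, 2.1, 0.7, -0.5])  # example scores for one suggestion box
probs = softmax(scores)
print(classes[int(np.argmax(probs))])     # -> "red_indicator"
```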
FIG. 3 is a schematic diagram illustrating a system 300 using a deep learning based device alert method according to an embodiment of the present invention. Those skilled in the art will appreciate from the following description that the system of fig. 3 supports the aspects of the present disclosure described in conjunction with fig. 1 and 2.
As shown in FIG. 3, a system 300 using a deep learning based device alert method may include a parameter configuration module 310, an image acquisition module 320, a database 330, a preprocessing module 340, a model training module 350, and a control module 360. The parameter configuration module 310 may be used to configure parameters of the inspection robot, the parameters at least including: equipment indicator lamp U number information, pan-tilt/lifting rod detection points, camera parameters, component operation rules and alarm rules. The image acquisition module 320 is used for driving the inspection robot to acquire machine room field image data based on the parameters. The database 330 is used for storing the collected machine room field image data. The preprocessing module 340 is configured to read the collected machine room field image data from the database and preprocess it.
Further, the model training module 350 is used for training the deep learning algorithm model. The model training module may include a feature extraction module 351, a pooling module 352, and a prediction module 353. The feature extraction module 351 may be configured to perform feature extraction on the preprocessed machine room field image data through the backbone feature extraction network to obtain a shared feature layer. The pooling module 352 may be configured to intercept local feature layers through convolution and the shared feature layer, and perform regional pooling on the intercepted local feature layers. The prediction module 353 may be configured to perform classification prediction and regression prediction on all the pooled local feature layers, so as to predict whether each local feature layer contains an indicator light. Then, the control module 360 may be configured to control the inspection robot in real time based on the trained model to determine whether the state of the equipment in the machine room is abnormal.
It is to be understood that the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules described in the embodiments of the present disclosure may be implemented by software or hardware. Wherein the name of a module in some cases does not constitute a limitation on the module itself.
The foregoing describes preferred embodiments of the present invention, and is intended to provide a clear and concise description of the spirit and scope of the invention, and not to limit the same, but to include all modifications, substitutions, and alterations falling within the spirit and scope of the invention as defined by the appended claims.

Claims (7)

1. An equipment fault alarming method based on deep learning is characterized by comprising the following steps:
configuring parameters of the inspection robot, including configuring equipment indicator lamp U number information, the parameters further including at least: pan-tilt/lifting rod detection points, camera parameters, component operation rules and alarm rules;
the position of an analysis area in a machine room field picture is confirmed by configuring the parameters of the inspection robot;
the equipment indicator light U number information configuration comprises the following steps:
reserving allowance for the analysis area according to the machine room field picture to select U number information of the equipment indicator lamps;
acquiring algorithm parameters according to the position of the actual indicator light in the picture;
assigning values to the algorithm parameters, and confirming the position of the analysis area in the picture through the algorithm parameter values so as to position pixel position points in the picture;
driving the inspection robot to perform machine room field picture acquisition based on the configuration parameters;
storing the collected field pictures of the plurality of machine rooms into a database;
reading a plurality of machine room field pictures collected to equipment from the database, and preprocessing the machine room field pictures;
training a deep learning algorithm model, comprising:
extracting the features of the preprocessed field pictures of the plurality of machine rooms through a trunk feature extraction network to obtain a shared feature layer;
intercepting a local feature layer through convolution and a shared feature layer, and then performing regional pooling on the intercepted local feature layer; and
performing classification prediction and regression prediction on all the pooled local feature layers so as to predict whether each local feature layer contains an indicator light; and
and controlling the inspection robot in real time based on the trained model to determine whether the state of the equipment in the machine room is abnormal.
2. The device fault alerting method of claim 1, wherein the algorithm parameters comprise: the device end U number parameter, the device start U number parameter, the x-axis end pixel parameter, the y-axis end pixel parameter, the x-axis start pixel parameter and the y-axis start pixel parameter.
3. The equipment fault alerting method of claim 1, wherein the pan/tilt/lift detection point setting comprises: and setting the tripod head or lifting rod detection point, wherein the tripod head detection point comprises a video camera and thermal imaging, and the lifting rod detection point comprises a camera.
4. The device malfunction alerting method of claim 3, wherein the video camera/camera parameter configuration includes setting parameters through a parameter configuration page, switching the exposure mode to manual and setting the shutter parameter value, and the larger the shutter parameter value, the longer the exposure time and the brighter the captured picture.
5. The equipment fault alerting method of claim 2, wherein the component operating rule configuration comprises: and selecting the U number information of the equipment and the equipment indicator lamps on a parameter configuration page according to the real-time visual angle picture of the inspection robot, and designating the components to set the component operation rules.
6. The device fault alerting method of claim 5, wherein configuring the alerting rule comprises: and setting alarm index rules aiming at the setting component.
7. A system using the device fault alerting method of any one of claims 1-6, comprising:
a parameter configuration module for configuring parameters of the inspection robot, the parameters at least including: equipment indicator lamp U number information, pan-tilt/lifting rod detection points, camera parameters, component operation rules and alarm rules;
the position of an analysis area in a machine room field picture is confirmed by configuring the parameters of the inspection robot;
the equipment indicator light U number information configuration comprises the following steps:
reserving allowance for the analysis area according to the machine room field picture to select U number information of the equipment indicator lamps;
acquiring algorithm parameters according to the position of the actual indicator light in the picture;
assigning values to the algorithm parameters, and confirming the position of the analysis area in the picture through the algorithm parameter values so as to position pixel position points in the picture;
the image acquisition module is used for driving the inspection robot to carry out on-site image data acquisition of a machine room based on the parameters;
the database is used for storing the acquired field image data of the plurality of machine rooms;
the preprocessing module is used for reading the collected field image data of the plurality of machine rooms from the database and preprocessing the field image data of the plurality of machine rooms;
a model training module for training a deep learning algorithm model, comprising:
the feature extraction module is used for extracting features of the preprocessed on-site image data of the plurality of machine rooms through a main feature extraction network to obtain a shared feature layer;
the pooling module is used for intercepting the local feature layer through the convolution and shared feature layer and then pooling the intercepted local feature layer in a subarea manner; and
the prediction module is used for carrying out classification prediction and regression prediction on all the pooled local feature layers so as to predict whether each local feature layer contains an indicator light; and
and the control module is used for controlling the inspection robot in real time based on the trained model so as to determine whether the state of the equipment in the machine room is abnormal.
CN202011572332.9A 2020-12-28 2020-12-28 Equipment fault warning method and system based on deep learning Active CN112287915B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011572332.9A CN112287915B (en) 2020-12-28 2020-12-28 Equipment fault warning method and system based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011572332.9A CN112287915B (en) 2020-12-28 2020-12-28 Equipment fault warning method and system based on deep learning

Publications (2)

Publication Number Publication Date
CN112287915A CN112287915A (en) 2021-01-29
CN112287915B true CN112287915B (en) 2021-04-16

Family

ID=74426312

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011572332.9A Active CN112287915B (en) 2020-12-28 2020-12-28 Equipment fault warning method and system based on deep learning

Country Status (1)

Country Link
CN (1) CN112287915B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113132689A (en) * 2021-04-20 2021-07-16 河南能创电子科技有限公司 Low-voltage centralized reading, operation and maintenance simulation device based on AI deep learning algorithm research
CN115809950B (en) * 2023-02-07 2023-05-19 烟台软图信息科技有限公司 Machine room operation and maintenance management platform and management method
CN116841301B (en) * 2023-09-01 2024-01-09 杭州义益钛迪信息技术有限公司 Inspection robot inspection model training method, device, equipment and medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107871102A (en) * 2016-09-23 2018-04-03 北京眼神科技有限公司 A kind of method for detecting human face and device
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN111429511A (en) * 2020-04-02 2020-07-17 北京海益同展信息科技有限公司 Equipment position determining method, fault detection method, device and system in cabinet
CN112115927A (en) * 2020-11-19 2020-12-22 北京蒙帕信创科技有限公司 Intelligent machine room equipment identification method and system based on deep learning

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170249729A1 (en) * 2011-05-09 2017-08-31 Level 3 Inspection, Llc Automated optical metrology computer aided inspection station and method of operation


Also Published As

Publication number Publication date
CN112287915A (en) 2021-01-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211108

Address after: 200137 floor 1-5, building 14, No. 528, Yanggao North Road, Pudong New Area, Shanghai

Patentee after: Shanghai mengpa Information Technology Co.,Ltd.

Patentee after: Beijing mengpa Xinchuang Technology Co.,Ltd.

Address before: 1110, 1 / F, building a, 98 Guangqu Road, Chaoyang District, Beijing 100022

Patentee before: Beijing mengpa Xinchuang Technology Co.,Ltd.

CP03 Change of name, title or address

Address after: 200137 room 108, block a, building 8, No. 1879, jiangxinsha Road, Pudong New Area, Shanghai

Patentee after: Shanghai mengpa Intelligent Technology Co.,Ltd.

Patentee after: Beijing mengpa Xinchuang Technology Co., Ltd

Address before: 200137 floor 1-5, building 14, No. 528, Yanggao North Road, Pudong New Area, Shanghai

Patentee before: Shanghai mengpa Information Technology Co.,Ltd.

Patentee before: Beijing mengpa Xinchuang Technology Co., Ltd