CN113033469B - Tool damage identification method, device, equipment, system and readable storage medium - Google Patents


Info

Publication number
CN113033469B
CN113033469B (application CN202110400192.5A)
Authority
CN
China
Prior art keywords
tool
recognition
information
identification
damage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110400192.5A
Other languages
Chinese (zh)
Other versions
CN113033469A
Inventor
胡翔
王辉东
俞啸玲
郭强
高俊青
郑丽娟
陆斌
邵叶晨
缪宇峰
沈磊
魏佳栋
谢刘丹
倪小红
沈海萍
贾佩钦
赵一凡
黄娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Zhejiang Electric Power Co Ltd Hangzhou Yuhang District Power Supply Co
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Original Assignee
State Grid Zhejiang Electric Power Co Ltd Hangzhou Yuhang District Power Supply Co
Hangzhou Dianzi University
Hangzhou Power Equipment Manufacturing Co Ltd
Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Zhejiang Electric Power Co Ltd Hangzhou Yuhang District Power Supply Co, Hangzhou Dianzi University, Hangzhou Power Equipment Manufacturing Co Ltd, Hangzhou Power Supply Co of State Grid Zhejiang Electric Power Co Ltd filed Critical State Grid Zhejiang Electric Power Co Ltd Hangzhou Yuhang District Power Supply Co
Priority to CN202110400192.5A priority Critical patent/CN113033469B/en
Publication of CN113033469A publication Critical patent/CN113033469A/en
Application granted granted Critical
Publication of CN113033469B publication Critical patent/CN113033469B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a tool damage identification method that collects video image information of a tool from different angles by scanning around the tool, avoiding the limitation of identification from a single-angle static image. Damage identification combines multi-angle feature information with historical feature information, so the damage condition of the tool is comprehensively analysed from its external features, internal features and historical usage features, which can improve the accuracy of damage identification. The invention also discloses a corresponding tool damage identification device, equipment, system and readable storage medium, which have the corresponding technical effects.

Description

Tool damage identification method, device, equipment, system and readable storage medium
Technical Field
The present invention relates to the field of tool management technologies, and in particular, to a method, an apparatus, a device, a system, and a readable storage medium for identifying damage to a tool.
Background
During use, tools are often damaged or broken, their performance degrades, or they otherwise fail to meet usage requirements. To avoid the personal injury, misoperation and production delays caused by tool damage and failure, and to strengthen tool management, tools must be inspected for damage regularly or before use.
In order to reduce the manual workload, images of tools are currently collected and a convolutional neural network is called to identify damage in the images, achieving automatic tool damage identification. However, the detection accuracy of this approach is currently low, which affects tool management.
In summary, how to improve the automatic identification accuracy of tool damage is a technical problem that needs to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide a method, a device, equipment and a system for identifying damage of tools and a readable storage medium, which can improve the automatic identification accuracy of the damage of the tools and tools.
In order to solve the technical problems, the invention provides the following technical scheme:
a tool damage identification method comprising:
acquiring a surrounding scanning video of a tool;
intercepting multi-angle scanning images from the surrounding scanning video;
inputting each multi-angle scanning image into each multi-branch parallel network in the recognition model to perform feature recognition to obtain each angle feature data;
dynamic multidimensional information combination is carried out on the angle characteristic data to be used as visual characteristic information;
invoking a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information to obtain a feature recognition result;
And generating a damage identification result according to the characteristic identification result.
Optionally, intercepting the multi-angle scanning images from the surrounding scan video includes:
determining the related parameters of the tools corresponding to the tools; wherein the tool related parameters comprise image acquisition quantity and multi-branch parallel network parameters; the parameters in the identification model comprise the tool-related parameters and tool-independent parameters;
uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as the multi-angle scanning images;
correspondingly, each multi-angle scanning image is respectively input into each multi-branch parallel network in the recognition model to carry out dynamic multi-dimensional information combination feature recognition, and the method comprises the following steps:
loading the multi-branch parallel network parameters into a parallel network to adjust the parallel structure of the parallel network to obtain a matched parallel network;
and respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
Optionally, after the uniformly capturing the images matching the image capturing number from the original captured video, the method further includes:
Dividing the multi-angle scanning image into a handheld part image, a connecting part image and a working part image according to the using structure of the tool;
correspondingly, the step of inputting each multi-angle scanning image to a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition includes: and respectively inputting the handheld part image, the connecting part image and the operation part image into a corresponding matching parallel network one by one to perform dynamic multidimensional information combination feature recognition.
Optionally, the generating a damage identification result according to the feature identification result includes:
taking the damage identification result as a current damage identification result, and extracting a historical damage identification result of the tool;
generating a damage change curve according to the current damage identification result and the historical damage identification result;
outputting the damage change curve.
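The optional damage-change-curve steps above amount to appending the current damage identification result to the stored history and emitting a per-use series for display. A minimal sketch under that reading (function and variable names are illustrative, not from the patent):

```python
def damage_change_curve(history, current):
    """Build damage-change-curve data: one damage score per use.

    `history` is the stored list of past damage scores for the tool
    (oldest first) and `current` is the score just produced; returns
    (uses, scores) series that a plotting front end can render.
    """
    scores = list(history) + [current]
    uses = list(range(1, len(scores) + 1))  # use index 1..n
    return uses, scores
```

For example, a tool with historical scores [0.1, 0.2] and a current score of 0.4 yields the series x = [1, 2, 3], y = [0.1, 0.2, 0.4].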
Optionally, after the generating the damage identification result according to the feature identification result, the method further includes:
outputting user inquiry information for the identification result;
receiving identification result feedback information;
and performing optimization training on the recognition model according to the identification result feedback information.
A tool damage identification device comprising:
the video acquisition unit is used for acquiring a surrounding scanning video of the tool;
an image capturing unit for capturing a multi-angle scanning image from the surrounding scanning video;
the multi-angle identification unit is used for respectively inputting each multi-angle scanning image into each multi-branch parallel network in the identification model to carry out feature identification so as to obtain feature data of each angle;
the multidimensional information combining unit is used for carrying out dynamic multidimensional information combination on the angle characteristic data to serve as visual characteristic information;
the comprehensive feature recognition unit is used for calling a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information to obtain a feature recognition result;
and the identification result generation unit is used for generating a damage identification result according to the characteristic identification result.
Optionally, the image capturing unit includes:
a parameter determining subunit, configured to determine a tool related parameter corresponding to the tool; wherein the tool related parameters comprise image acquisition quantity and multi-branch parallel network parameters; the parameters in the identification model comprise the tool-related parameters and tool-independent parameters;
The intercepting subunit is used for uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as the multi-angle scanning images;
the multi-angle recognition unit accordingly includes:
a parameter loading subunit, configured to load the multi-branch parallel network parameter into a parallel network, so that a parallel structure of the parallel network is adjusted, and a matching parallel network is obtained;
and the matching recognition subunit is used for respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
A tool damage identification device comprising:
a memory for storing a computer program;
and the processor is used for realizing the steps of the tool damage identification method when executing the computer program.
A tool damage identification system comprising:
the tool damage recognition apparatus as described above, and an image sensor connected to the tool damage recognition apparatus;
the image sensor is used for collecting images around tools and instruments and generating a surrounding scanning video.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the tool damage identification method described above.
According to the method provided by the embodiment of the invention, video image information of the tool at different angles is collected by scanning around the tool, avoiding the limitation of single-angle static image identification. Damage identification combines multi-angle feature information with historical feature information, so the damage condition of the tool is comprehensively analysed from its external features, internal features and historical usage features, which can improve the accuracy of damage identification.
Correspondingly, the embodiment of the invention also provides a tool damage identification device, equipment, system and readable storage medium corresponding to the tool damage identification method, which have the corresponding technical effects and are not repeated herein.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the related art, the drawings that are required to be used in the embodiments or the related technical descriptions will be briefly described, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to the drawings without inventive effort for those skilled in the art.
FIG. 1 is a flowchart of a method for identifying damage to tools according to an embodiment of the present invention;
FIG. 2 is a schematic diagram showing a calculation process of BV-softmax between vectors according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an identification model according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an image acquisition mode according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training process of the image acquisition number k according to an embodiment of the present invention;
FIG. 6 is a graph showing a correspondence between k values and input image viewing angles according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a result output according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of feedback training according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a damage identifying device for tools according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a tool damage identifying device according to an embodiment of the present invention.
Detailed Description
The core of the invention is to provide a method for identifying damage to tools and instruments, which can improve the automatic identification accuracy of damage to tools and instruments.
In order to better understand the aspects of the present invention, the present invention will be described in further detail with reference to the accompanying drawings and detailed description. It will be apparent that the described embodiments are only some, but not all, embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, fig. 1 is a flowchart of a method for identifying damage to a tool according to an embodiment of the invention, the method includes the following steps:
s101, acquiring a surrounding scanning video of a tool;
and acquiring surrounding images of the tools to be identified, and acquiring images of all angles of the tools so as to increase the damage identification precision of the tools by carrying out multi-angle identification on the tools. The process of collection can be, for example, by installing an image sensor in the intelligent safety cabinet, as shown in fig. 1, collecting video image data of the tools around the tools, and obtaining videos obtained by carrying out surrounding scanning on the tools.
In this embodiment, the specific acquisition process of the surrounding scan video and the format and image size of the acquired video are not limited. For example, image data of a fixed size can be acquired before or after each use of the tool: video image data is acquired around the tool and each frame is resized to (227×681×3), i.e. X = {X^<1>, X^<2>, ……, X^<t>}, where X^<i> is the video image data of the tool after its i-th use. This embodiment is described using only this acquisition mode as an example; other acquisition modes can refer to this description and are not repeated herein.
S102, intercepting multi-angle scanning images from surrounding scanning videos;
Taking the video image data X^<i> of a tool after its i-th use as an example, X^<i> = {X^<i>(1), X^<i>(2), ……, X^<i>(k)}, i.e. k frames (k viewing angles) of image data intercepted from the video. In this embodiment, the angles (or number) of scanned images intercepted for each tool are not limited and can be set according to actual use requirements. Because different tools may differ in damage degree and damage angle when used, in order to strengthen multi-angle damage recognition for different types of tools, make the recognition process more targeted and improve recognition accuracy, corresponding dynamic parameters, namely the image acquisition angles (the image acquisition quantity), can be set for different tools (or tool types). For example, three images of the front, right side and back may be acquired for a wrench, while images from 5 angles (one picture every 360/5 degrees) may be acquired for a spade. The process of intercepting multi-angle images from the surround scan video may specifically include the following steps:
(1) Determining the related parameters of the tools and the instruments corresponding to the tools and the instruments;
The tool-related parameters include the image acquisition quantity and the multi-branch parallel network parameters. The recognition model comprises a multi-branch parallel network part and a comprehensive recognition network. The multi-branch parallel network part comprises k parallel branch networks with the same network structure and parameters; their inputs are the multi-angle scanning images intercepted from the video image data of the tool, one branch per acquisition angle. For example, when scanning images of three angles are acquired, at least three parallel branches must be set: the image of angle 1 is input into the first branch network, the image of angle 2 into the second, and the image of angle 3 into the third. Each branch network generates feature data of the corresponding image angle as angle feature data. Each branch network is connected to the comprehensive identification network, a one-way network used to comprehensively identify all kinds of information (including the recognition results of each branch network), identify the dynamic multi-dimensional information and output feature identification information; the feature recognition result output by the recognition model is the output of the comprehensive recognition network.
The parameters in the identification model comprise tool-related parameters and tool-independent parameters. Tool-related parameters are parameters of the identification model that depend on the tool; for example, the multi-branch parallel network parameters (such as the number of branches and the structural parameters of each branch network) are tool-related. Tool-independent parameters are parameters of the identification model whose values are unchanged regardless of the tool. Parameters of the recognition model can thus be divided into tool-related and tool-independent parameters according to their degree of correlation with the tool. Since the types and number of parameters differ for recognition models of different types and structures, neither the type and structure of the recognition model nor the specific parameter types included in the tool-related and tool-independent parameters are limited, and they are not repeated herein.
The specific values of the tool-related parameters corresponding to each tool may be determined during network training, which is not limited herein. The tool-related parameters corresponding to each tool are configured before testing; in the model application stage, the parameter k and other parameters are dynamically selected for different types of tools, and after the tool to be identified is determined in a test, the tool-related parameters corresponding to the current tool are called directly.
(2) Uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as multi-angle scanning images;
in this embodiment, taking uniform interception as an example, uniform interception refers to that the image acquisition interval angles are consistent, and the product of the image acquisition quantity and the interval angles is 360 degrees. Taking the image acquisition number of 5 as an example, the initial acquisition image can be taken as the first image, and one image can be acquired every 360/5 degrees. In addition to uniform interception, other image acquisition modes, such as designated angle acquisition or random acquisition, may be adopted, and the specific implementation manner of image acquisition is not limited in this embodiment, and other acquisition modes may refer to the description of this embodiment and are not described herein.
Correspondingly, the step S103 of inputting each multi-angle scanning image into each multi-branch parallel network in the recognition model to perform dynamic multi-dimensional information combination feature recognition specifically may include:
(1) Loading the multi-branch parallel network parameters into a parallel network to adjust the parallel structure of the parallel network so as to obtain a matched parallel network;
(2) And respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
S103, inputting each multi-angle scanning image into each multi-branch parallel network in the recognition model to perform feature recognition, and obtaining each angle feature data;
the recognition model is a neural network with a dynamic multidimensional parallel structure, which is built in the application, in this embodiment, each multi-branch parallel network in the recognition model is called to perform dynamic multidimensional information feature recognition on each multi-angle scanning image, and it should be noted that specific network structures of the multi-branch parallel network and the comprehensive recognition network in the recognition model are not limited in this embodiment, and the corresponding model structure can be configured according to actual use needs, which is not repeated here.
And respectively carrying out feature recognition on the input images through the parallel networks of all branches to obtain corresponding angle feature data.
S104, carrying out dynamic multidimensional information combination on the angle characteristic data to serve as visual characteristic information;
after the angle characteristic data corresponding to each angle image is obtained, a plurality of angle characteristic data are subjected to information fusion in a dynamic multidimensional information combination mode, so that comprehensive characteristic data are obtained and are used as visual characteristic information (namely, characteristic information obtained from the identification of the appearance characteristics of the tool). The dynamic multidimensional information combination can also be implemented in a network layer, and the network structure for the dynamic multidimensional information combination is not limited in this embodiment.
The BV-Softmax (Between Vectors Softmax) dynamic multidimensional information combination proposed here differs from the ordinary Softmax operation in that BV-Softmax is calculated between the elements of different vectors: it exploits the property that the normalised elements sum to 1, applying the Softmax operation across the input vectors (between vectors) so as to better fuse the input feature information and thereby improve recognition accuracy. The calculation process is shown schematically in fig. 2.
S105, invoking a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information, so as to obtain a feature recognition result;
the comprehensive identification network can specifically comprise a dynamic multidimensional information combination network, a non-visual combination network and a historical combination network, wherein the non-visual combination network is used for combining visual characteristic information and non-visual characteristic information to perform characteristic identification.
Non-visual feature information refers to feature information that cannot be obtained by direct observation. The information items it includes are not limited in this embodiment and may include data such as the job type, job duration and number of repairs; this information may be given by an experienced tool maintenance team. It is collected to overcome the limitation that image information only reflects the external features of the tool and ignores its other characteristics. In the training/application of the damage identification model, the image information and this additionally collected information are combined to improve the accuracy of damage identification.
Historical feature information refers to the recorded feature recognition results obtained in previous identifications. Besides collecting current information, this embodiment saves the historical information of the tool in order to overcome the limitation that a convolutional neural network only attends to the tool's current information and ignores its history; the current information and the historical information are combined for damage recognition to improve accuracy.
S106, generating a damage identification result according to the feature identification result.
After the feature recognition result output by the recognition model is obtained, a damage identification result is generated from it. The feature recognition result can be output directly as the damage identification result, or, to improve readability and user experience, the damage identification result can be generated after processing the feature recognition result. This is not limited in this embodiment; the generation mode can be set according to actual data output requirements and is not repeated here.
Based on the above description, the technical scheme provided by the embodiment of the invention collects video image information of the tool at different angles by scanning around the tool, avoiding the limitation of single-angle static image identification; it combines multi-angle feature information with historical feature information for damage identification, comprehensively analysing the damage condition of the tool from external features, internal features and historical usage features, and can thereby improve the accuracy of damage identification.
It should be noted that the above embodiment does not limit the specific network structures of the multi-branch parallel network and the comprehensive recognition network. To deepen understanding, one example structure, shown in fig. 3, is described here: a multi-branch parallel network composed of a CNN module, an FC-1 module and a Combine-1 module; a dynamic multi-dimensional information combination network in the comprehensive recognition network composed of Combine-2; a non-visual combination network composed of an FC-2 module; and a history combination network composed of Combine-3. Other network structures can refer to the following description.
Wherein, (1) CNN module: the input is a plurality of frame images intercepted from a video image of a tool and trisected along the short side into upper, middle, and lower parts, so the size of each input image is (227×227×3). The first convolution layer convolves with 48 (11×11) filters at stride 4, followed by a BN (batch normalization) layer, then a max-pooling layer with filter size (3×3) and stride 2, giving an output size of (27×27×48). The second convolution layer performs a same convolution with 24 (5×5) filters, followed by a BN layer, then a max-pooling layer with filter size (3×3) and stride 2, at which point the output size is (13×13×24). The third convolution layer performs a same convolution with 48 (3×3) filters, followed by a BN layer, then a max-pooling layer with filter size (3×3) and stride 2, at which point the output size is (6×6×48). The CNN modules all use the ReLU activation function;
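The stated feature-map sizes can be checked arithmetically. The sketch below is a minimal illustration, assuming the padding values implied by the description ("same" convolution for the second and third layers, no padding elsewhere):

```python
# Verifying the CNN module's feature-map sizes from the layer parameters above.
def conv_out(size, kernel, stride, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

size = 227                      # input is (227 x 227 x 3)
size = conv_out(size, 11, 4)    # conv1: 48 (11x11) filters, stride 4 -> 55
size = conv_out(size, 3, 2)     # max-pool (3x3), stride 2 -> 27 (27x27x48)
size = conv_out(size, 5, 1, 2)  # conv2: same convolution, 24 filters -> 27
size = conv_out(size, 3, 2)     # max-pool -> 13 (13x13x24)
size = conv_out(size, 3, 1, 1)  # conv3: same convolution, 48 filters -> 13
size = conv_out(size, 3, 2)     # max-pool -> 6 (6x6x48)
print(size)  # 6
```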
(2) FC-1 module: the first layer is a fully-connected layer with 768 output units and a ReLU activation function; the second layer is a fully-connected layer with 384 output units and a ReLU activation function;
(3) Combine-1 module: the input of the module comprises two parts, the output a^<t>(j) of the FC-1 module and the output c^<t>(j-1) of the previous parallel network's Combine-1 module (c^<t>(k) when j=1); the output of the Combine-1 module is c^<t>(j). The calculation process of the module is as follows:

r_c = σ(W_rc·c^<t>(j-1) + W_ra·a^<t>(j) + b_1)

c^<t>(j) = tanh(W_cc·(r_c * c^<t>(j-1)) + W_ca·a^<t>(j) + b_c)
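The two Combine-1 equations can be sketched numerically. This is a minimal NumPy illustration, assuming 384-unit vectors per the FC-1/Combine-1 description; the weight matrices are random placeholders standing in for learned parameters:

```python
import numpy as np

def combine1(c_prev, a, W_rc, W_ra, b_r, W_cc, W_ca, b_c):
    """Gated update of Combine-1: gate r_c, then a tanh combination."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    r_c = sigmoid(W_rc @ c_prev + W_ra @ a + b_r)           # gate over c^<t>(j-1)
    return np.tanh(W_cc @ (r_c * c_prev) + W_ca @ a + b_c)  # new c^<t>(j)

rng = np.random.default_rng(0)
n = 384  # unit count from the FC-1 / Combine-1 description
c_prev, a = rng.normal(size=n), rng.normal(size=n)
W = [rng.normal(scale=0.01, size=(n, n)) for _ in range(4)]
c_new = combine1(c_prev, a, W[0], W[1], np.zeros(n), W[2], W[3], np.zeros(n))
print(c_new.shape)  # (384,)
```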
(4) Combine-2: the input of the module is the k dynamically adjusted inputs c^<t>(1) ~ c^<t>(k), where each input has 384 units; the output is d^<t>, with 384 output units. The calculation process of the module is as follows:

r_1 = σ(W_1c·c^<t>(1) + b_1)

r_2 = σ(W_2c·c^<t>(2) + b_2)

……

r_k = σ(W_kc·c^<t>(k) + b_k)

d^<t> = r_1 * c^<t>(1) + r_2 * c^<t>(2) + …… + r_k * c^<t>(k)
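The Combine-2 equations amount to a per-angle gated sum, which can be sketched as follows; weights here are random placeholders for the learned per-angle parameters:

```python
import numpy as np

def combine2(cs, Ws, bs):
    """Gated sum over the k angle features c^<t>(1)..c^<t>(k) -> d^<t>."""
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    gates = [sigmoid(W @ c + b) for c, W, b in zip(cs, Ws, bs)]  # r_1..r_k
    return sum(r * c for r, c in zip(gates, cs))

rng = np.random.default_rng(1)
k, n = 4, 384                    # k is dynamically adjusted per tool class
cs = [rng.normal(size=n) for _ in range(k)]
Ws = [rng.normal(scale=0.01, size=(n, n)) for _ in range(k)]
bs = [np.zeros(n)] * k
d = combine2(cs, Ws, bs)
print(d.shape)  # (384,)
```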
(5) FC-2 module: the module is a fully-connected layer; the input is the Combine-2 output d^<t> and the other (non-visual) information features S^<t>; the output is e^<t>, with 192 units, and the activation function is the ReLU activation function;
(6) Combine-3: the input of the module is e^<t> and the historical characteristic information h^<t-1>, where each input has 192 units; the output is h^<t>, with 192 output units; the output of this module is the output of the network model. The calculation process of the module is as follows:

r_h = σ(W_rh1·h^<t-1> + W_re1·e^<t> + b_1)

r_u = σ(W_rh2·h^<t-1> + W_re2·e^<t> + b_2)

r_f = σ(W_rh3·h^<t-1> + W_re3·e^<t> + b_3)
Under this network structure, the features extracted from the different inputs of the Combine-1 modules influence each other. The fusion part fuses the image features extracted by the parallel part with the other information features and the historical features: the Combine-2 and Combine-3 modules shape the recognition network into two parallel structures as a whole, first combining image information from different angles of the tool, and second combining historical information from different times. The aim is to make up for the shortcoming of classifying a single-angle static image directly with a convolutional neural network; finally, the network model recognizes the dynamic multidimensional information and outputs feature recognition information.
In addition, this embodiment does not limit the training process of the recognition model, and the training process of a related model may be referred to. When the parameters in the recognition model are divided into tool-related parameters and tool-independent parameters, the image acquisition number k is introduced as the basis for dynamically adjusting the amount of input image data.
1. Tool data is collected.
(1) Collecting image data: within a certain use period, an image sensor installed in the intelligent safety cabinet, as shown in fig. 4, collects video image data around the tool, and each frame image is resized to (227×681×3). That is, X = {X^<1>, X^<2>, ……, X^<t>}, where X^<i> is the video image data of a certain tool after the i-th use is finished; and X^<i> = {X^<i>(1), X^<i>(2), ……, X^<i>(k)} is the k frames of image data intercepted from the video, where the value of k differs for different tools, the specific value being determined during network training. Each frame is then trisected with the short side as the side length, so the size of a single image after segmentation is (227×227×3), namely X^<i>(j) = {X^<i>(j)[1], X^<i>(j)[2], X^<i>(j)[3]}, representing respectively the upper, middle, and lower parts of an image of a certain tool at a certain moment; the method performs damage recognition on these three parts of the tool separately.
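The sampling-and-trisection step can be sketched as below; this is a minimal illustration assuming the video arrives as a NumPy array of (227×681×3) frames, with uniform sampling of k frames:

```python
import numpy as np

def sample_and_trisect(video, k):
    """Uniformly sample k frames, then trisect each along the long side."""
    # video: (num_frames, 227, 681, 3); 681 = 3 x 227
    idx = np.linspace(0, len(video) - 1, k).astype(int)
    frames = video[idx]
    side = frames.shape[1]  # 227, the short side
    return [[f[:, m * side:(m + 1) * side, :] for m in range(3)]  # upper/middle/lower
            for f in frames]

video = np.zeros((120, 227, 681, 3), dtype=np.uint8)  # placeholder clip
parts = sample_and_trisect(video, k=4)
print(len(parts), parts[0][0].shape)  # 4 (227, 227, 3)
```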
(2) Collecting non-visual characteristic information: with advice from an experienced tool maintenance team, other characteristic information of the tool is collected, for example data such as the type of operation, the duration of operation, and the number of maintenance events. That is, S = {S^<1>, S^<2>, ……, S^<t>}, where S^<i> is the non-visual characteristic information of a certain tool after the i-th use is finished.
(3) Marking data: the organizer marks part of the collected tool data as (X^<i>[m], Y^<i>[m]), where X is a tool sample and Y is the damage mark of the sample. The <> bracket indicates the image acquired after the i-th use of the tool, the () bracket distinguishes multi-angle images, and the [] bracket distinguishes the upper, middle, and lower parts of the image (so m takes the value 1, 2, or 3). Since a tool may have several kinds of damage, the damage mark has several positions, each position representing one damage type and its numerical value representing the damage degree: Y = {y_0, y_1, ……, y_{n-1}}, where the y_i respectively represent the n damage types of the tool, marked according to {no damage: 0, light damage: 0.4, medium damage: 0.8, severe damage: 1}.
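The marking scheme can be sketched as a simple encoder; the damage-type names below are illustrative placeholders, not from the patent:

```python
# Encoding the damage mark Y: one slot per damage type, valued by the
# graded-degree mapping given in the text.
DEGREE = {"none": 0.0, "light": 0.4, "medium": 0.8, "severe": 1.0}

def encode_label(observed, damage_types):
    """observed: dict damage_type -> degree name; returns the n-slot vector Y."""
    return [DEGREE[observed.get(t, "none")] for t in damage_types]

types = ["crack", "corrosion", "deformation"]  # hypothetical type names
y = encode_label({"crack": "light", "corrosion": "severe"}, types)
print(y)  # [0.4, 1.0, 0.0]
```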
2. Conventional training stage: the objective function of the recognition model is selected as the sum of squared errors, the initial value of k takes a default value or a specified value (such as 4), and other unmentioned parameters adopt common default values. 80% of the previously collected data is used as the training set and the remaining data as the validation set. Training stops when the validation-set accuracy shows no improvement of 0.01 over 5 consecutive iterations, ending the conventional training stage of the model.
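The stopping rule can be sketched as below. This is one reading of the criterion (no 0.01 accuracy gain within the last 5 iterations); the accuracy stream is a stand-in for real per-iteration validation results:

```python
def should_stop(acc_history, window=5, min_delta=0.01):
    """Stop when the last `window` iterations gained < min_delta accuracy."""
    if len(acc_history) <= window:
        return False
    recent = acc_history[-window:]
    best_before = max(acc_history[:-window])
    return max(recent) - best_before < min_delta

accs = [0.50, 0.60, 0.70, 0.71, 0.712, 0.713, 0.711, 0.714, 0.712]
print(should_stop(accs))  # True: no 0.01 gain in the last 5 iterations
```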
3. A schematic diagram of the training process of the image acquisition number k is shown in fig. 5. After the conventional training stage is finished, the k-unrelated parameters of the network model are saved and set as untrainable; then the tool data are distinguished by class. The value of k lies in the range {4, 2×4, ……, 2^i×4}, where different values of k correspond to image data from different viewing angles of the tool; the correspondence between the value of k and the viewing angles of the input images is shown in fig. 6. A parameter k is selected in turn, the parallel part of the network model is dynamically adjusted, and the k-related parameters in the network (namely the Combine-2 module parameters) are trained with the current tool training set; the training process is the same as the conventional training stage. After training, the recognition accuracy of the current tool class under the current k-related parameters is counted. When the parameter k is selected for the first time, k=4 is chosen, and values are then selected in turn from the k value range; when the validation-set accuracy shows no improvement of 0.005 over 5 consecutive iterations, the smallest k value among the 5 is selected, and the k value and the k-related parameters corresponding to the current class are saved. It is then judged whether all tool classes have been traversed: if the traversal is complete, training stops; otherwise the next tool class is selected, the corresponding k value is initialized, and the parameter-k training step is repeated.
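The candidate range for k, and one simplified reading of "select the smallest k value" (here: the smallest k whose accuracy is within a tolerance of the best), can be sketched as follows; the per-k accuracies are placeholders for real validation results:

```python
def k_candidates(i_max):
    """The range {4, 2x4, ..., 2^i x 4} of image acquisition numbers."""
    return [4 * (2 ** i) for i in range(i_max + 1)]

def pick_k(acc_by_k, tolerance=0.005):
    """Smallest k whose accuracy is within `tolerance` of the best seen."""
    best = max(acc_by_k.values())
    return min(k for k, acc in acc_by_k.items() if best - acc <= tolerance)

print(k_candidates(3))  # [4, 8, 16, 32]
acc_by_k = {4: 0.902, 8: 0.905, 16: 0.906, 32: 0.906}
print(pick_k(acc_by_k))  # 4
```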
After the preliminary training of the model is finished, the model is put into practical application. After the recognition model collects the video image data of the tool, the parallel structure part of the network is dynamically adjusted according to the image acquisition number k corresponding to the tool class, and the k-related parameters (namely the Combine-2 module parameters) corresponding to the current tool are loaded. The recognition model then recognizes damage to the tool according to the current image input, other information input, historical information input, and the like, and outputs a recognition result. The output medium is the display device of the intelligent safety cabinet, and the output prediction result may include: the damage type and damage degree of each part of the tool.
On the basis of the above embodiment, after images matching the image acquisition number are uniformly intercepted from the original acquired video, in order to further improve the recognition accuracy, the following steps may be further performed:
dividing the multi-angle scanning image into a handheld part image, a connecting part image and a working part image according to the using structure of the tool;
considering that the general tools and instruments can be structurally divided into a handheld part, a connecting part and a working part, the damage of each part should be of different types, and the three parts are divided for identification, so that an identification network can be more targeted, and the aim of improving the identification accuracy is fulfilled. Of course, the different tools may be divided into two parts, four parts, or the like, or the image may not be divided and identified, and the present invention is not limited thereto.
To deepen understanding, the three-part image segmentation process is described here. The method assumes that the aspect ratio of the intercepted image is 3:1, and trisection here refers to trisecting the long side: the segmented images are 3 square images whose side length matches the short side. With the short side as the side length, the size of a single image after segmentation is (227×227×3), namely X^<i>(j) = {X^<i>(j)[1], X^<i>(j)[2], X^<i>(j)[3]}. The process of inputting each multi-angle scanning image into the corresponding matching parallel network for dynamic multi-dimensional information combination feature recognition is then specifically: the hand-held part image, the connecting part image, and the working part image are input one by one into the corresponding matching parallel network for dynamic multi-dimensional information combination feature recognition. For example, the multi-angle image of the hand-held part is recognized at time T1, the multi-angle image of the connecting part at time T2, and the multi-angle image of the working part at time T3.
In addition, in the above embodiment, the specific implementation manner of generating the damage identification result according to the feature identification result is not limited, and in order to intuitively output the change of the damage degree of the tool in continuous use, the process of generating the damage identification result according to the feature identification result may specifically include the following steps:
(1) Taking the damage identification result as a current damage identification result, and extracting a historical damage identification result of the tool;
(2) Generating a damage change curve according to the current damage identification result and the historical damage identification result;
(3) And outputting a damage change curve.
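The three steps above can be sketched as assembling (use count, damage degree) points for the curve; this is a minimal illustration with placeholder degree values:

```python
def damage_curve(history, current):
    """history/current: per-use damage degrees in [0, 1] for one tool part."""
    degrees = list(history) + [current]
    return list(enumerate(degrees, start=1))  # [(use 1, d1), ..., (use t, dt)]

# e.g. four historical results plus the current result
curve = damage_curve([0.0, 0.0, 0.4, 0.4], 0.8)
print(curve)  # [(1, 0.0), (2, 0.0), (3, 0.4), (4, 0.4), (5, 0.8)]
```

The returned points map directly onto the plane curve described next, with the use count as abscissa and the damage degree as ordinate.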
Taking a damage recognition result that includes the damage type and damage degree of each part of the tool as an example, a plane curve is output with the number of uses as the abscissa and the damage degree as the ordinate, and the output curves of the images of the same tool are drawn in the same coordinate system. A schematic diagram of the result output is shown in fig. 7, where the number of uses at the end of the t-th use is approximately 10. The output damage type is the damage type corresponding to the maximum unit output by the network model after the t-th use; the output value is graded as no damage, slight damage, moderate damage, or heavy damage according to the threshold interval in which it falls.
Further, in order to ensure that the recognition model is always in an optimal recognition accuracy state, after generating a damaged recognition result according to the feature recognition result, the following steps may be further performed:
(1) Outputting the identification result user inquiry information;
(2) Receiving feedback information of the identification result;
(3) And carrying out optimization training on the recognition model according to the recognition result feedback information.
In the use stage, each time the user takes out a tool, the intelligent cabinet display device displays the model's damage recognition result for the tool in the form of an image. To obtain user feedback data at low cost, user feedback may be solicited when the damage result image is displayed: the user can quickly judge the recognition result from experience, marking a correct recognition with '√' and a wrong recognition with '×';
if positive user feedback (namely feedback '√') is obtained, the sample and the recognition data are saved; if negative user feedback (namely feedback '×') is obtained, the sample data are saved and re-marked by maintenance personnel for use as training data; if the user gives no feedback, the predicted data are not specially processed. A feedback training diagram is shown in fig. 8.
Having the user judge the damage result to obtain feedback data reduces the time cost of collecting user feedback and yields a large amount of accurately labeled data during frequent tool use. For every 1000 feedback data obtained, the model (its tool-independent parameters) is further trained on those 1000 feedback data; the model training mode can be the same as the conventional training stage described in the above embodiment;
After the model stops the primary feedback training, the intelligent safety cabinet system can continue to collect user feedback data, and the condition for renewed feedback training can be set as: for every 1000 feedback data, if the statistical accuracy is lower than the accuracy at the last training, the feedback training step is repeated. It should be noted that when the model is applied for the first time, feedback training is performed directly, without requiring the renewed-feedback-training condition to be satisfied.
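The retraining trigger described above can be sketched as follows; a minimal illustration, with `None` standing in for "no previous training accuracy recorded" on first application:

```python
def needs_retraining(feedback_count, current_acc, last_training_acc,
                     batch=1000):
    """Trigger feedback training per the stated 1000-feedback condition."""
    if feedback_count < batch:
        return False
    if last_training_acc is None:  # first application of the model
        return True
    return current_acc < last_training_acc

print(needs_retraining(1000, 0.91, None))  # True  (first deployment)
print(needs_retraining(1000, 0.93, 0.95))  # True  (accuracy dropped)
print(needs_retraining(1000, 0.96, 0.95))  # False
```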
In this method, feedback information is collected when the recognition result is displayed to maintenance personnel or users, and the model is retrained according to the feedback information. This can solve the problem of model training failing to reach an optimal state due to an insufficient amount of labeled data, and ensures the recognition accuracy of the recognition model.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a tool damage identifying device, where the tool damage identifying device described below and the tool damage identifying method described above may be referred to correspondingly.
Referring to fig. 9, the apparatus includes the following modules:
the video acquisition unit 110 is mainly used for acquiring surrounding scanning video of the tools;
the image capturing unit 120 is mainly used for capturing a multi-angle scanning image from the surrounding scanning video;
The multi-angle recognition unit 130 is mainly used for inputting each multi-angle scanning image into each multi-branch parallel network in the recognition model for feature recognition, so as to obtain feature data of each angle;
the multidimensional information combining unit 140 is mainly used for dynamically and multidimensional information combination of the angle characteristic data as visual characteristic information;
the comprehensive feature recognition unit 150 is mainly used for calling a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information, so as to obtain a feature recognition result;
the recognition result generation unit 160 is mainly used for generating a damage recognition result according to the feature recognition result.
In one embodiment of the present invention, the image capturing unit 120 specifically includes:
a parameter determining subunit, configured to determine a tool-related parameter corresponding to the tool; the tool related parameters comprise image acquisition quantity and multi-branch parallel network parameters; identifying parameters in the model including tool-related parameters and tool-independent parameters;
the intercepting subunit is used for uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as multi-angle scanning images;
Accordingly, the multi-angle recognition unit 130 specifically includes:
the parameter loading subunit is used for loading the multi-branch parallel network parameters into the parallel network so as to adjust the parallel structure of the parallel network and obtain a matched parallel network;
and the matching recognition subunit is used for respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a tool damage identifying apparatus, and a tool damage identifying apparatus described below and a tool damage identifying method described above may be referred to correspondingly with each other.
The tool damage identification device includes:
a memory for storing a computer program;
and the processor is used for realizing the steps of the tool damage identification method of the method embodiment when executing the computer program.
Specifically, referring to fig. 10, a schematic diagram of a specific structure of a tool damage identifying device according to the present embodiment, where the tool damage identifying device may have a relatively large difference due to different configurations or performances, may include one or more processors (central processing units, CPU) 322 (e.g., one or more processors) and a memory 332, where the memory 332 stores one or more computer applications 342 or data 344. Wherein the memory 332 may be transient storage or persistent storage. The program stored in memory 332 may include one or more modules (not shown), each of which may include a series of instruction operations in the data processing apparatus. Still further, the central processor 322 may be configured to communicate with the memory 332 to perform a series of instruction operations in the memory 332 on the fixture damage identification device 301.
The tool damage identification device 301 may also include one or more power supplies 326, one or more wired or wireless network interfaces 350, one or more input/output interfaces 358, and/or one or more operating systems 341.
The steps in the tool damage identification method described above may be implemented by the structure of the tool damage identification device.
Corresponding to the above apparatus embodiments, the present invention further provides a tool damage recognition system, and a tool damage recognition system described below and a tool damage recognition apparatus described above may be referred to correspondingly to each other.
A tool damage identification system comprising: a tool damage recognition device and an image sensor connected to the tool damage recognition device;
the image sensor is used for collecting images around the tools and generating a surrounding scanning video.
In addition to the image sensor, an image or data output device, such as an electronic screen, may also be provided for outputting the damage identification result.
For example, an intelligent safety cabinet can be arranged, an image sensor, a tool damage identification device and a tool placement position are arranged in the cabinet, an electronic screen is arranged on the surface of the cabinet and used for outputting damage identification results, and a user can conveniently acquire the damage identification results when taking the identified tools. If the electronic screen is set as the touch screen, the correctness feedback of the damage recognition result of the user can be further received, so that the recognition model in the tool damage recognition device can be conveniently subjected to feedback training according to the feedback information, and the method is not limited.
Corresponding to the above method embodiments, the embodiments of the present invention further provide a readable storage medium, where a readable storage medium described below and a tool damage identification method described above may be referred to correspondingly.
A readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the tool damage identification method of the above method embodiment.
The readable storage medium may be a usb disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, or an optical disk, and the like.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Those skilled in the art may implement the described functionality using different approaches for each particular application, but such implementation is not intended to be limiting.

Claims (8)

1. A tool damage identification method, comprising:
acquiring a surrounding scanning video of a tool;
intercepting multi-angle scanning images from the surrounding scanning video;
inputting each multi-angle scanning image into each multi-branch parallel network in the recognition model to perform feature recognition to obtain each angle feature data;
carrying out dynamic multidimensional information combination on the angle characteristic data, and carrying out information fusion on a plurality of angle characteristic data in a dynamic multidimensional information combination mode to obtain comprehensive characteristic data which is used as visual characteristic information, namely characteristic information obtained from the identification of the appearance characteristics of the tool;
invoking a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information to obtain a feature recognition result; the comprehensive identification network comprises a dynamic multidimensional information combination network, a non-visual combination network and a historical combination network, wherein the non-visual combination network is used for combining visual characteristic information and non-visual characteristic information to perform characteristic identification; the invisible feature information refers to feature information which cannot be directly observed; the historical characteristic information refers to a recorded characteristic recognition result obtained through recognition before;
Generating a damage identification result according to the characteristic identification result;
the capturing multi-angle scanned images from the surround scan video includes:
determining the related parameters of the tools corresponding to the tools; wherein the tool related parameters comprise image acquisition quantity and multi-branch parallel network parameters; the parameters in the identification model comprise the tool-related parameters and tool-independent parameters;
uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as the multi-angle scanning images;
correspondingly, each multi-angle scanning image is respectively input into each multi-branch parallel network in the recognition model to carry out dynamic multi-dimensional information combination feature recognition, and the method comprises the following steps:
loading the multi-branch parallel network parameters into a parallel network to adjust the parallel structure of the parallel network to obtain a matched parallel network;
and respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
2. The tool damage identification method of claim 1, further comprising, after said uniformly capturing images matching said image capture number from said original captured video:
Dividing the multi-angle scanning image into a handheld part image, a connecting part image and a working part image according to the using structure of the tool;
correspondingly, the step of inputting each multi-angle scanning image to a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition includes: and respectively inputting the handheld part image, the connecting part image and the operation part image into a corresponding matching parallel network one by one to perform dynamic multidimensional information combination feature recognition.
3. The tool damage identification method of claim 1, wherein the generating damage identification results from the feature identification results comprises:
taking the damage identification result as a current damage identification result, and extracting a historical damage identification result of the tool;
generating a damage change curve according to the current damage identification result and the historical damage identification result;
outputting the damage change curve.
4. The tool damage recognition method according to claim 1, further comprising, after the generating of the damage recognition result from the feature recognition result:
Outputting the identification result user inquiry information;
receiving feedback information of the identification result;
and carrying out optimization training on the recognition model according to the recognition result feedback information.
5. A tool damage recognition device, comprising:
the video acquisition unit is used for acquiring a surrounding scanning video of the tool;
an image capturing unit for capturing a multi-angle scanning image from the surrounding scanning video;
the multi-angle identification unit is used for respectively inputting each multi-angle scanning image into each multi-branch parallel network in the identification model to carry out feature identification so as to obtain feature data of each angle;
the multidimensional information combination unit is used for carrying out dynamic multidimensional information combination on the angle characteristic data, carrying out information fusion on a plurality of angle characteristic data in a dynamic multidimensional information combination mode to obtain comprehensive characteristic data which is used as visual characteristic information, namely characteristic information obtained from the identification of the appearance characteristics of the tools;
the comprehensive feature recognition unit is used for calling a comprehensive recognition network in the recognition model to perform comprehensive feature recognition on the visual feature information, the non-visual feature information of the tool and the historical feature information to obtain a feature recognition result; the comprehensive identification network comprises a dynamic multidimensional information combination network, a non-visual combination network and a historical combination network, wherein the non-visual combination network is used for combining visual characteristic information and non-visual characteristic information to perform characteristic identification; the invisible feature information refers to feature information which cannot be directly observed; the historical characteristic information refers to a recorded characteristic recognition result obtained through recognition before;
The identification result generation unit is used for generating a damage identification result according to the characteristic identification result;
the image capturing unit includes:
a parameter determining subunit, configured to determine a tool related parameter corresponding to the tool; wherein the tool related parameters comprise image acquisition quantity and multi-branch parallel network parameters; the parameters in the identification model comprise the tool-related parameters and tool-independent parameters;
the intercepting subunit is used for uniformly intercepting images matched with the image acquisition quantity from the original acquired video to serve as the multi-angle scanning images;
the multi-angle recognition unit accordingly includes:
a parameter loading subunit, configured to load the multi-branch parallel network parameter into a parallel network, so that a parallel structure of the parallel network is adjusted, and a matching parallel network is obtained;
and the matching recognition subunit is used for respectively inputting each multi-angle scanning image into a corresponding matching parallel network to perform dynamic multi-dimensional information combination feature recognition.
6. A tool damage recognition apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the tool damage identification method of any one of claims 1 to 4 when executing the computer program.
7. A tool damage identification system, comprising:
the tool damage identification device of claim 6 and an image sensor connected to the tool damage identification device;
the image sensor is used for collecting images around tools and instruments and generating a surrounding scanning video.
8. A readable storage medium, characterized in that the readable storage medium has stored thereon a computer program which, when executed by a processor, implements the steps of the tool damage identification method of any one of claims 1 to 4.
CN202110400192.5A 2021-04-14 2021-04-14 Tool damage identification method, device, equipment, system and readable storage medium Active CN113033469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110400192.5A CN113033469B (en) 2021-04-14 2021-04-14 Tool damage identification method, device, equipment, system and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110400192.5A CN113033469B (en) 2021-04-14 2021-04-14 Tool damage identification method, device, equipment, system and readable storage medium

Publications (2)

Publication Number Publication Date
CN113033469A CN113033469A (en) 2021-06-25
CN113033469B true CN113033469B (en) 2024-04-02

Family

ID=76456631

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110400192.5A Active CN113033469B (en) 2021-04-14 2021-04-14 Tool damage identification method, device, equipment, system and readable storage medium

Country Status (1)

Country Link
CN (1) CN113033469B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116663940B (en) * 2023-08-01 2023-10-20 安徽博诺思信息科技有限公司 Substation safety tool management system and management method

Citations (2)

Publication number Priority date Publication date Assignee Title
WO2020239089A1 (en) * 2019-05-30 2020-12-03 深圳市聚蜂智能科技有限公司 Insurance loss assessment method and apparatus, and computer device and storage medium
CN112132137A (en) * 2020-09-16 2020-12-25 山西大学 FCN-SPP-Focal Net-based method for identifying correct direction of abstract picture image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN108921811B (en) * 2018-04-03 2020-06-30 阿里巴巴集团控股有限公司 Method and device for detecting damage of article and article damage detector


Also Published As

Publication number Publication date
CN113033469A (en) 2021-06-25

Similar Documents

Publication Publication Date Title
WO2021004154A1 (en) Method for predicting remaining life of numerical control machine tool
KR102229594B1 (en) Display screen quality detection method, device, electronic device and storage medium
US10930037B2 (en) Image processing device for displaying object detected from input picture image
JP3637412B2 (en) Time-series data learning / prediction device
US11283991B2 (en) Method and system for tuning a camera image signal processor for computer vision tasks
CN113033469B (en) Tool damage identification method, device, equipment, system and readable storage medium
US20230066703A1 (en) Method for estimating structural vibration in real time
CN114897102A (en) Industrial robot fault diagnosis method, system, equipment and storage medium
CN115810133A (en) Welding control method based on image processing and point cloud processing and related equipment
CN111177495A (en) Method for intelligently identifying data content and generating corresponding industry report
CN113706455B (en) Rapid detection method for damage of 330kV cable porcelain insulator sleeve
US11928591B2 (en) Information processing apparatus and information processing method
CN111444075B (en) Method for automatically discovering key influence indexes
CN111797686B (en) Foam flotation production process operation state stability evaluation method based on time sequence similarity analysis
CN112183555B (en) Method and system for detecting welding quality, electronic device and storage medium
CN116503398B (en) Insulator pollution flashover detection method and device, electronic equipment and storage medium
CN117132802A (en) Method, device and storage medium for identifying field wheat diseases and insect pests
CN112487853A (en) Handwriting comparison method and system, electronic equipment and storage medium
JP2001043367A (en) Simulation system and method for appearance inspection
CN112730437B (en) Spinneret plate surface defect detection method and device based on depth separable convolutional neural network, storage medium and equipment
CN115935284A (en) Power grid abnormal voltage detection method, device, equipment and storage medium
Tao et al. Utilization of both machine vision and robotics technologies in assisting quality inspection and testing
CN111209888A (en) Human-computer interface visual recognition system and method
CN110796117B (en) Blood cell automatic analysis method, system, blood cell analyzer and storage medium
JPH0836510A (en) User interface evaluation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230410

Address after: 310018 No. 11 street, Hangzhou economic and Technological Development Zone, Hangzhou, Zhejiang 91

Applicant after: HANGZHOU ELECTRIC EQUIPMENT MANUFACTURING Co.,Ltd.

Applicant after: State Grid Zhejiang Electric Power Co., Ltd. Hangzhou Yuhang District Power Supply Co.

Applicant after: HANGZHOU POWER SUPPLY COMPANY, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant after: HANGZHOU DIANZI University

Address before: 310018 No. 11 street, Hangzhou economic and Technological Development Zone, Hangzhou, Zhejiang 91

Applicant before: HANGZHOU ELECTRIC EQUIPMENT MANUFACTURING Co.,Ltd.

Applicant before: STATE GRID ZHEJIANG HANGZHOU YUHANG POWER SUPPLY Co.

Applicant before: HANGZHOU POWER SUPPLY COMPANY, STATE GRID ZHEJIANG ELECTRIC POWER Co.,Ltd.

Applicant before: HANGZHOU DIANZI University

GR01 Patent grant