CN110487787B - Component loss detection method and device, storage medium and terminal equipment


Info

Publication number
CN110487787B
CN110487787B (application CN201910619351.3A)
Authority
CN
China
Prior art keywords
component
state
sampling moment
current sampling
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910619351.3A
Other languages
Chinese (zh)
Other versions
CN110487787A (en)
Inventor
孔庆杰
林姝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Elite Vision Technology Shandong Co ltd
Shandong Rizhao Power Generation Co Ltd
Original Assignee
Elite Vision Technology Shandong Co ltd
Shandong Rizhao Power Generation Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Elite Vision Technology Shandong Co ltd, Shandong Rizhao Power Generation Co Ltd
Priority claimed from application CN201910619351.3A
Publication of CN110487787A
Application granted
Publication of CN110487787B
Legal status: Active
Anticipated expiration: legal status pending

Classifications

    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01M - TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M 11/00 - Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 15/00 - Investigating characteristics of particles; Investigating permeability, pore-volume, or surface-area of porous materials
    • G01N 15/02 - Investigating particle size or size distribution
    • G01N 15/0205 - Investigating particle size or size distribution by optical means, e.g. by light scattering, diffraction, holography or imaging
    • G - PHYSICS
    • G01 - MEASURING; TESTING
    • G01N - INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00 - Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84 - Systems specially adapted for particular applications
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/0002 - Inspection of images, e.g. flaw detection
    • G06T 7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 - Special algorithmic details
    • G06T 2207/20212 - Image combination
    • G06T 2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 - Subject of image; Context of image processing
    • G06T 2207/30108 - Industrial image inspection
    • G06T 2207/30164 - Workpiece; Machine component

Abstract

The invention belongs to the field of industrial visual inspection, and in particular relates to a component loss detection method and device, a computer-readable storage medium and a terminal device. The method comprises the following steps: collecting an image of a component at the current sampling moment; performing feature coding on the image to obtain the state feature of the component at the current sampling moment; performing temporal state fusion on that state feature to obtain the fused state feature of the component at the current sampling moment; acquiring the state pattern of the component at the previous sampling moment, and performing pattern matching on the fused state feature of the current sampling moment using that state pattern to obtain a pattern matching result of the component at the current sampling moment; and determining the loss detection result of the component at the current sampling moment according to the pattern matching result.

Description

Component loss detection method and device, storage medium and terminal equipment
Technical Field
The invention belongs to the field of industrial visual inspection, and in particular relates to a component loss detection method and device, a computer-readable storage medium and a terminal device.
Background
The various industrial components that make up industrial equipment wear continuously while in use, and once this loss accumulates to a certain degree it adversely affects the safe operation of the equipment; component loss therefore needs to be detected regularly to ensure safe operation. In the prior art, however, component loss detection is mostly performed manually, so both detection accuracy and detection efficiency are low and actual requirements are difficult to meet.
Disclosure of Invention
In view of this, embodiments of the present invention provide a component loss detection method and apparatus, a computer-readable storage medium, and a terminal device, so as to solve the problem that existing component loss detection methods have low detection accuracy and low detection efficiency.
A first aspect of an embodiment of the present invention provides a component loss detection method, which may include:
collecting an image of a component at the current sampling moment;
performing feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment;
performing temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment;
acquiring the state pattern of the component at the previous sampling moment, wherein the state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment;
performing pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment, to obtain a pattern matching result of the component at the current sampling moment;
and determining the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
Further, performing temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment may include:
acquiring the fused state feature of the component at the previous sampling moment;
and fusing the fused state feature of the component at the previous sampling moment with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment.
Further, before acquiring the state pattern of the component at the previous sampling moment, the method may further comprise:
performing convolution and downsampling operations on the fused state feature of the component at the previous sampling moment using a preset convolutional neural network;
and determining the small-scale feature map output by the last layer of the convolutional neural network as the state pattern of the component at the previous sampling moment.
Further, performing pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment may include:
using the state pattern of the component at the previous sampling moment as a convolution kernel, performing a convolution operation on the fused state feature of the component at the current sampling moment, and taking the result of the convolution operation as the pattern matching result.
Further, after the pattern matching result of the component at the current sampling moment is obtained, the component loss detection method may further include:
predicting the loss detection result of the component at the next sampling moment according to the pattern matching result of the component at the current sampling moment.
A second aspect of an embodiment of the present invention provides a component loss detection apparatus, which may include:
an image acquisition module, configured to collect an image of the component at the current sampling moment;
a feature coding module, configured to perform feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment;
a state fusion module, configured to perform temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment;
a state pattern acquisition module, configured to acquire the state pattern of the component at the previous sampling moment, wherein the state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment;
a pattern matching module, configured to perform pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment, to obtain a pattern matching result of the component at the current sampling moment;
and a detection module, configured to determine the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
Further, the state fusion module may include:
a fused state feature acquisition unit, configured to acquire the fused state feature of the component at the previous sampling moment;
and a fused state feature calculation unit, configured to fuse the fused state feature of the component at the previous sampling moment with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment.
Further, the component loss detection apparatus may further include:
a network processing module, configured to perform convolution and downsampling operations on the fused state feature of the component at the previous sampling moment using a preset convolutional neural network;
and a state pattern determining module, configured to determine the small-scale feature map output by the last layer of the convolutional neural network as the state pattern of the component at the previous sampling moment.
Further, the pattern matching module is specifically configured to use the state pattern of the component at the previous sampling moment as a convolution kernel, perform a convolution operation on the fused state feature of the component at the current sampling moment, and take the result of the convolution operation as the pattern matching result.
Further, the component loss detection apparatus may further include:
a prediction module, configured to predict the loss detection result of the component at the next sampling moment according to the pattern matching result of the component at the current sampling moment.
A third aspect of the embodiments of the present invention provides a computer-readable storage medium storing computer-readable instructions which, when executed by a processor, implement the steps of any one of the above component loss detection methods.
A fourth aspect of the embodiments of the present invention provides a terminal device, including a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the above component loss detection methods when executing the computer-readable instructions.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects. An embodiment of the invention collects an image of the component at the current sampling moment; performs feature coding on that image to obtain the state feature of the component at the current sampling moment; performs temporal state fusion on that state feature to obtain the fused state feature of the component at the current sampling moment; acquires the state pattern of the component at the previous sampling moment, the state pattern being constructed from the fused state feature of the component at the previous sampling moment; and performs pattern matching on the fused state feature of the current sampling moment using that state pattern to obtain the loss detection result of the component at the current sampling moment. Throughout this process the current state feature of the component is considered while the previous state features are fused in, which greatly improves the accuracy of the final loss detection result, and the fully automatic detection mode greatly improves detection efficiency.
Drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a flowchart of a component loss detection method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of the whole processing flow of a component loss detection method in the embodiment of the present invention;
fig. 3 is a schematic structural diagram of a component loss detection apparatus according to an embodiment of the present invention;
fig. 4 is a schematic block diagram of a terminal device in an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to a determination" or "in response to a detection". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Referring to fig. 1, an embodiment of a component loss detection method according to an embodiment of the present invention may include:
Step S101: collect an image of the component at the current sampling moment.
In this embodiment, a number of sampling moments may be preset, and at each sampling moment one round of loss detection is performed on the component. The sampling moments may be equally or unequally spaced. The current sampling moment is denoted as sampling moment t, the previous sampling moment is sampling moment t-1, the next sampling moment is sampling moment t+1, and so on.
In this embodiment, the image of the component at each sampling moment may be acquired by an area-array camera, preferably a high-resolution area-array industrial camera with a global exposure mode so as to avoid the influence of chromatic aberration. A zoom wide-angle lens in the camera fully covers the component in the acquisition area, so that complete images of the component can be collected.
The images of the component at the successive sampling moments form a time series, and in this embodiment loss detection is realized precisely by continuously analyzing and processing this time-series data. Using time-series data instead of a single image for loss detection improves accuracy to a certain degree, and analyzing the long-term data also makes it possible to predict future loss.
Step S102: perform feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment.
In this embodiment, a preset feature coding model may be used to perform the feature coding. This model is obtained by training in advance: data records of the whole loss process of historical components of the same type as the component (i.e. components that have already been in use) may be collected beforehand, and with these records as samples the feature coding model is trained based on the back-propagation algorithm.
After training is finished, the feature coding model performs feature coding on the image of the component at the current sampling moment; the feature coding is essentially the extraction of image features, and the extracted image features are finally taken as the state feature of the component at the current sampling moment.
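The patent does not fix a concrete encoder architecture. Purely as an illustration, a minimal sketch of such a feature-coding step in PyTorch might look as follows; every layer size and name here is an assumption of this sketch, not something specified by the patent:

    import torch
    import torch.nn as nn

    class FeatureEncoder(nn.Module):
        """Hypothetical feature coding model: maps the component image I_t
        to a state feature map H_t. Layer sizes are illustrative
        assumptions; the patent does not specify an architecture."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1),
                nn.ReLU(),
            )

        def forward(self, image_t: torch.Tensor) -> torch.Tensor:
            # image_t: (batch, 3, H, W) -> state feature H_t: (batch, 64, H/2, W/2)
            return self.net(image_t)

In this reading, the model would be trained on the historical loss-process records mentioned above with back-propagation, like any other supervised encoder.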
Step S103: perform temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment.
Specifically, the fused state feature of the component at the previous sampling moment may be acquired first and then fused with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment. The fusion in this embodiment may be linear or non-linear; for example, it may be performed as a weighted average.
It is easy to see that the fused state feature of the component at the previous sampling moment (i.e. sampling moment t-1) was itself obtained by fusing the fused state feature at sampling moment t-2 with the state feature at sampling moment t-1, and so on recursively. In particular, the fused state feature of the component at the first sampling moment is simply its state feature at the first sampling moment.
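As a concrete illustration of the linear case, a weighted-average fusion could be written as below; the weight alpha is an assumed hyperparameter of this sketch, the patent only requires some linear or non-linear combination:

    import torch

    def fuse_states(prev_fused: torch.Tensor,
                    curr_state: torch.Tensor,
                    alpha: float = 0.5) -> torch.Tensor:
        """Weighted-average fusion of the previous fused state feature with
        the current state feature H_t; alpha is an assumed hyperparameter.
        At the first sampling moment there is nothing to fuse, so the fused
        state feature is simply H_1."""
        return alpha * prev_fused + (1.0 - alpha) * curr_state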
Step S104: acquire the state pattern of the component at the previous sampling moment.
The state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment. Specifically, a preset convolutional neural network (CNN) may be used to perform convolution and downsampling operations on that fused state feature, and the small-scale feature map output by the last layer of the network is determined as the state pattern of the component at the previous sampling moment.
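Again as a hedged sketch only, such a pattern-construction network could look like this; the depth, channel counts and pooling choices are assumptions, the patent only requires convolution plus downsampling with the last layer's small-scale feature map taken as the state pattern:

    import torch
    import torch.nn as nn

    class StatePatternNet(nn.Module):
        """Hypothetical state-pattern constructor: convolution and
        downsampling over the previous fused state feature; the output of
        the last layer is the small-scale state pattern F_{t-1}."""
        def __init__(self, channels: int = 64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AvgPool2d(2),   # downsampling
                nn.Conv2d(channels, channels, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AvgPool2d(2),   # last layer yields a small-scale feature map
            )

        def forward(self, prev_fused: torch.Tensor) -> torch.Tensor:
            # prev_fused: (batch, C, H, W) -> state pattern: (batch, C, H/4, W/4)
            return self.net(prev_fused)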
Step S105: perform pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment, to obtain the pattern matching result of the component at the current sampling moment.
Specifically, the state pattern of the component at the previous sampling moment may be used as a convolution kernel and a convolution operation performed on the fused state feature of the component at the current sampling moment; the convolution operation serves as the pattern matching process, and its result is taken as the pattern matching result.
This pattern matching is grounded in fine-grained video-sequence detection and enables fine-grained loss identification and prediction for industrial components. Since the pattern is derived from the state feature representation of the previous moment, and the state feature representation at each moment contains the long-term feature record of the data, while the feature map to be matched is derived from the state feature representation of the current moment, the matching process simultaneously matches the current state and the inherited temporal state information. Combining the state feature representations of different moments through fine-grained pattern matching makes full use of the long-term feature-evolution information of the component, and matching against the most recent moment achieves fine-grained detection of component loss faults at close points in time.
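The convolution-as-matching step itself is simple; a sketch under assumed tensor shapes (single sample, pattern smaller than the feature map):

    import torch
    import torch.nn.functional as F

    def pattern_match(fused_t: torch.Tensor, pattern_prev: torch.Tensor) -> torch.Tensor:
        """Slides the previous state pattern F_{t-1} over the current fused
        state feature as a convolution kernel; the response map is the
        pattern matching result R_t. Assumed shapes: fused_t is (C, H, W),
        pattern_prev is (C, kH, kW) with kH <= H and kW <= W."""
        inp = fused_t.unsqueeze(0)          # (1, C, H, W)
        weight = pattern_prev.unsqueeze(0)  # (1, C, kH, kW): one response channel
        return F.conv2d(inp, weight).squeeze(0)  # (1, H-kH+1, W-kW+1)

A high response at some location then indicates that the current state locally matches the pattern carried over from the previous moment.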
Step S106: determine the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
Preferably, the loss detection result of the component at the next sampling moment may also be predicted from the pattern matching result of the component at the current sampling moment.
In this embodiment, two deep neural networks (DNNs) may be used to process the pattern matching result of the component at the current sampling moment. The two networks correspond to two different tasks, detecting the current loss condition of the component and predicting its future loss condition, and respectively determine the loss detection result of the component at the current sampling moment and the loss detection result at the next sampling moment. Preferably, the two networks consist of a parameter-sharing fully connected layer followed by two sub-task fully connected layers, which reduces the parameter burden and computational cost of the whole system. In this way the loss detection task for industrial components is finely divided: by analyzing how the component state changes over time, the network not only obtains the current loss condition of the component but also predicts its future loss, and the multi-task construction fully reuses network parameters, compressing them and accelerating the network to the greatest extent.
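A minimal sketch of such a shared-plus-two-heads arrangement, with all dimensions and the number of loss classes assumed rather than taken from the patent:

    import torch
    import torch.nn as nn

    class MultiTaskHead(nn.Module):
        """Hypothetical two-task head: one parameter-sharing fully connected
        layer followed by two sub-task layers, one estimating the current
        loss S_t and one predicting the next-moment loss P_t."""
        def __init__(self, in_dim: int = 256, hidden: int = 128, n_classes: int = 4):
            super().__init__()
            self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.detect_head = nn.Linear(hidden, n_classes)   # current loss S_t
            self.predict_head = nn.Linear(hidden, n_classes)  # predicted loss P_t

        def forward(self, match_result: torch.Tensor):
            # match_result: flattened pattern matching result R_t, (batch, in_dim)
            shared = self.shared(match_result)
            return self.detect_head(shared), self.predict_head(shared)

Sharing the first layer is what keeps the parameter count down: both tasks read the same intermediate representation and differ only in their final projection.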
Fig. 2 is a schematic diagram of the whole processing flow of this embodiment, in which I_t denotes the image of the component at sampling moment t, H_t the state feature obtained by feature coding at sampling moment t, F_{t-1} the state pattern obtained from sampling moment t-1, R_t the pattern matching result at sampling moment t, S_t the loss detection result of the component at sampling moment t, and P_t the predicted loss detection result of the component at sampling moment t+1. Using a recurrent neural network (RNN) as the main network structure overcomes the obstacle of combined feature analysis across time-sequence images, realizes long-term tracking analysis of the state of the industrial component, and provides more reliable machine guidance for discovering component loss. The network considers the current feature information of the component while fusing the state information extracted and stored at multiple points in time, and in its final prediction gives an accurate loss judgment and forecast by analyzing both the development process and the current state of the component loss. Fine-grained pattern matching establishes a corresponding pattern for the state of the component at the previous moment and matches it against the current state, so that fine-grained detail changes in the component's loss are captured to the greatest extent. Moreover, the current and past states are fused twice, by the RNN and by the pattern matching, which avoids information loss and realizes information reuse.
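Wiring the hypothetical sketches above together, one possible reading of the flow in fig. 2 is the following loop over sampling moments; the adaptive pooling used to flatten R_t to the head's input size is an assumption of this sketch, not part of the patent:

    import torch
    import torch.nn.functional as F

    def run_detection(images, encoder, pattern_net, head, head_in_dim: int = 256):
        """Illustrative recurrent loop: encode I_t, fuse with the carried
        state, build F_{t-1} from the previous fused state, match, and feed
        the result R_t to the two task heads (see the sketches above for
        encoder, fuse_states, pattern_net, pattern_match and head)."""
        fused_prev, results = None, []
        for image_t in images:                      # one image per sampling moment
            h_t = encoder(image_t)                  # state feature H_t
            if fused_prev is None:
                fused_t = h_t                       # first moment: nothing to fuse
            else:
                fused_t = fuse_states(fused_prev, h_t)
                f_prev = pattern_net(fused_prev)    # state pattern F_{t-1}
                r_t = pattern_match(fused_t[0], f_prev[0])   # matching result R_t
                r_vec = F.adaptive_avg_pool2d(r_t.unsqueeze(0),
                                              (1, head_in_dim)).flatten(1)
                s_t, p_t = head(r_vec)              # loss S_t and prediction P_t
                results.append((s_t, p_t))
            fused_prev = fused_t
        return results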
In summary, an embodiment of the present invention collects an image of the component at the current sampling moment; performs feature coding on that image to obtain the state feature of the component at the current sampling moment; performs temporal state fusion on that state feature to obtain the fused state feature of the component at the current sampling moment; acquires the state pattern of the component at the previous sampling moment, the state pattern being constructed from the fused state feature of the component at the previous sampling moment; and performs pattern matching on the fused state feature of the current sampling moment using that state pattern to obtain the loss detection result of the component at the current sampling moment. Throughout this process the current state feature of the component is considered while the previous state features are fused in, which greatly improves the accuracy of the final loss detection result, and the fully automatic detection mode greatly improves detection efficiency.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments of the present invention.
Fig. 3 shows a structural diagram of an embodiment of a component loss detection apparatus according to an embodiment of the present invention, corresponding to the component loss detection method described in the foregoing embodiment.
In this embodiment, the component loss detection apparatus may include:
an image acquisition module 301, configured to collect an image of the component at the current sampling moment;
a feature coding module 302, configured to perform feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment;
a state fusion module 303, configured to perform temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment;
a state pattern acquisition module 304, configured to acquire the state pattern of the component at the previous sampling moment, wherein the state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment;
a pattern matching module 305, configured to perform pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment, to obtain a pattern matching result of the component at the current sampling moment;
and a detection module 306, configured to determine the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
Further, the state fusion module may include:
a fused state feature acquisition unit, configured to acquire the fused state feature of the component at the previous sampling moment;
and a fused state feature calculation unit, configured to fuse the fused state feature of the component at the previous sampling moment with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment.
Further, the component loss detection apparatus may further include:
a network processing module, configured to perform convolution and downsampling operations on the fused state feature of the component at the previous sampling moment using a preset convolutional neural network;
and a state pattern determining module, configured to determine the small-scale feature map output by the last layer of the convolutional neural network as the state pattern of the component at the previous sampling moment.
Further, the pattern matching module is specifically configured to use the state pattern of the component at the previous sampling moment as a convolution kernel, perform a convolution operation on the fused state feature of the component at the current sampling moment, and take the result of the convolution operation as the pattern matching result.
Further, the component loss detection apparatus may further include:
a prediction module, configured to predict the loss detection result of the component at the next sampling moment according to the pattern matching result of the component at the current sampling moment.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses, modules and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Fig. 4 shows a schematic block diagram of a terminal device according to an embodiment of the present invention, and for convenience of description, only the parts related to the embodiment of the present invention are shown.
As shown in fig. 4, the terminal device 4 of this embodiment includes: a processor 40, a memory 41, and computer-readable instructions 42 stored in the memory 41 and executable on the processor 40. When executing the computer-readable instructions 42, the processor 40 implements the steps in the above embodiments of the component loss detection method, such as steps S101 to S106 shown in fig. 1. Alternatively, the processor 40 executes the computer-readable instructions 42 to implement the functions of the modules/units in the above apparatus embodiments, such as the functions of the modules 301 to 306 shown in fig. 3.
Illustratively, the computer readable instructions 42 may be partitioned into one or more modules/units that are stored in the memory 41 and executed by the processor 40 to implement the present invention. The one or more modules/units may be a series of computer-readable instruction segments capable of performing specific functions, which are used for describing the execution process of the computer-readable instructions 42 in the terminal device 4.
The terminal device 4 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. It will be understood by those skilled in the art that fig. 4 is only an example of the terminal device 4, and does not constitute a limitation to the terminal device 4, and may include more or less components than those shown, or combine some components, or different components, for example, the terminal device 4 may further include an input-output device, a network access device, a bus, etc.
The Processor 40 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 41 may be an internal storage unit of the terminal device 4, such as a hard disk or a memory of the terminal device 4. The memory 41 may also be an external storage device of the terminal device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 4. Further, the memory 41 may also include both an internal storage unit and an external storage device of the terminal device 4. The memory 41 is used for storing the computer readable instructions and other programs and data required by the terminal device 4. The memory 41 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by relevant hardware instructed by computer-readable instructions, which may be stored in a computer-readable storage medium and which, when executed by a processor, implement the steps of the above method embodiments. The computer-readable instructions comprise computer-readable instruction code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (5)

1. A component loss detection method, characterized by comprising the following steps:
collecting an image of a component at the current sampling moment;
performing feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment;
performing temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment, comprising: acquiring the fused state feature of the component at the previous sampling moment; and fusing the fused state feature of the component at the previous sampling moment with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment, wherein the state fusion is performed by a linear or non-linear method;
acquiring the state pattern of the component at the previous sampling moment, wherein the state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment, comprising: performing convolution and downsampling operations on the fused state feature of the component at the previous sampling moment using a preset convolutional neural network; and determining the small-scale feature map output by the last layer of the convolutional neural network as the state pattern of the component at the previous sampling moment;
performing pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment to obtain a pattern matching result of the component at the current sampling moment, comprising: using the state pattern of the component at the previous sampling moment as a convolution kernel, performing a convolution operation on the fused state feature of the component at the current sampling moment, and taking the result of the convolution operation as the pattern matching result, wherein the pattern is derived from the state feature representation of the previous moment, the state feature representation of each moment contains the long-term feature record of the data, the feature map to be matched is derived from the state feature representation of the current moment, and the matching process therefore matches both the current state and the inherited temporal state information;
and determining the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
2. The component loss detection method according to claim 1, further comprising, after obtaining the pattern matching result of the component at the current sampling moment:
predicting the loss detection result of the component at the next sampling moment according to the pattern matching result of the component at the current sampling moment.
3. A component loss detection apparatus implementing the component loss detection method according to any one of claims 1 to 2, comprising:
an image acquisition module, configured to collect an image of the component at the current sampling moment;
a feature coding module, configured to perform feature coding on the image of the component at the current sampling moment to obtain the state feature of the component at the current sampling moment;
a state fusion module, configured to perform temporal state fusion on the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment, the state fusion being performed by a linear or non-linear method;
the state fusion module comprising: a fused state feature acquisition unit, configured to acquire the fused state feature of the component at the previous sampling moment;
and a fused state feature calculation unit, configured to fuse the fused state feature of the component at the previous sampling moment with the state feature of the component at the current sampling moment to obtain the fused state feature of the component at the current sampling moment;
a state pattern acquisition module, configured to acquire the state pattern of the component at the previous sampling moment, wherein the state pattern of the component at the previous sampling moment is a state pattern constructed from the fused state feature of the component at the previous sampling moment;
a network processing module, configured to perform convolution and downsampling operations on the fused state feature of the component at the previous sampling moment using a preset convolutional neural network;
a state pattern determining module, configured to determine the small-scale feature map output by the last layer of the convolutional neural network as the state pattern of the component at the previous sampling moment;
a pattern matching module, configured to perform pattern matching on the fused state feature of the component at the current sampling moment using the state pattern of the component at the previous sampling moment, to obtain a pattern matching result of the component at the current sampling moment, specifically by using the state pattern of the component at the previous sampling moment as a convolution kernel, performing a convolution operation on the fused state feature of the component at the current sampling moment, and taking the result of the convolution operation as the pattern matching result;
and a detection module, configured to determine the loss detection result of the component at the current sampling moment according to the pattern matching result of the component at the current sampling moment.
4. A computer-readable storage medium storing computer-readable instructions, wherein the computer-readable instructions, when executed by a processor, implement the component loss detection method according to any one of claims 1 to 2.
5. A terminal device comprising a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor, wherein the processor implements the component loss detection method according to any one of claims 1 to 2 when executing the computer-readable instructions.
CN201910619351.3A 2019-07-10 2019-07-10 Component loss detection method and device, storage medium and terminal equipment Active CN110487787B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910619351.3A CN110487787B (en) 2019-07-10 2019-07-10 Component loss detection method and device, storage medium and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910619351.3A CN110487787B (en) 2019-07-10 2019-07-10 Component loss detection method and device, storage medium and terminal equipment

Publications (2)

Publication Number Publication Date
CN110487787A CN110487787A (en) 2019-11-22
CN110487787B true CN110487787B (en) 2022-08-12

Family

ID=68546955

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910619351.3A Active CN110487787B (en) 2019-07-10 2019-07-10 Component loss detection method and device, storage medium and terminal equipment

Country Status (1)

Country Link
CN (1) CN110487787B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111156655A (en) * 2019-12-24 2020-05-15 珠海格力电器股份有限公司 Air conditioner main control board fault self-detection method and air conditioner

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272123A * 2018-08-03 2019-01-25 常州大学 Early-warning method for sucker-rod pump operating conditions based on a convolutional recurrent neural network
CN109325417A * 2018-08-23 2019-02-12 东北大学 Industrial process fault condition diagnosis method based on a deep neural network

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102692600B (en) * 2012-06-15 2014-10-15 东华大学 Method and device for rapidly evaluating electrical durability of relay contact based on machine vision
CN104504713B * 2014-12-30 2017-12-15 中国铁道科学研究院电子计算技术研究所 Automatic fault identification method for image monitoring of EMU running state
CN108334936B (en) * 2018-01-30 2019-12-24 华中科技大学 Fault prediction method based on migration convolutional neural network
CN109359702A * 2018-12-14 2019-02-19 福州大学 Photovoltaic array fault diagnosis method based on convolutional neural networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109272123A * 2018-08-03 2019-01-25 常州大学 Early-warning method for sucker-rod pump operating conditions based on a convolutional recurrent neural network
CN109325417A * 2018-08-23 2019-02-12 东北大学 Industrial process fault condition diagnosis method based on a deep neural network

Also Published As

Publication number Publication date
CN110487787A (en) 2019-11-22

Similar Documents

Publication Publication Date Title
CN108154105B (en) Underwater biological detection and identification method and device, server and terminal equipment
CN110210513B (en) Data classification method and device and terminal equipment
CN110781756A (en) Urban road extraction method and device based on remote sensing image
CN110264270B (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium
CN110677585A (en) Target detection frame output method and device, terminal and storage medium
CN111079764A (en) Low-illumination license plate image recognition method and device based on deep learning
CN110941978B (en) Face clustering method and device for unidentified personnel and storage medium
CN111191601A (en) Method, device, server and storage medium for identifying peer users
CN115131281A (en) Method, device and equipment for training change detection model and detecting image change
CN110487787B (en) Component loss detection method and device, storage medium and terminal equipment
CN115457364A (en) Target detection knowledge distillation method and device, terminal equipment and storage medium
CN115222061A (en) Federal learning method based on continuous learning and related equipment
CN111488887B (en) Image processing method and device based on artificial intelligence
CN111161789B (en) Analysis method and device for key areas of model prediction
CN112966592A (en) Hand key point detection method, device, equipment and medium
CN110097600B (en) Method and device for identifying traffic sign
CN115690544B (en) Multi-task learning method and device, electronic equipment and medium
CN108733784B (en) Teaching courseware recommendation method, device and equipment
CN115577768A (en) Semi-supervised model training method and device
CN114708260A (en) Image detection method
CN114373071A (en) Target detection method and device and electronic equipment
CN113033397A (en) Target tracking method, device, equipment, medium and program product
CN114332522A (en) Image identification method and device and construction method of residual error network model
CN110765817A (en) Method, device and equipment for selecting crowd counting model and storage medium thereof
CN110837805B (en) Method, device and equipment for measuring confidence of video tag and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20201230

Address after: 271100 1340, NO.67 Huiyuan street, Laiwu high tech Zone, Jinan City, Shandong Province

Applicant after: Elite vision technology (Shandong) Co.,Ltd.

Address before: 8 / F, building B, Tongfang information port, No. 11, Langshan Road, Nanshan District, Shenzhen, Guangdong 518000

Applicant before: RISEYE INTELLIGENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20220310

Address after: 271100 1340, NO.67 Huiyuan street, Laiwu high tech Zone, Jinan City, Shandong Province

Applicant after: Elite vision technology (Shandong) Co.,Ltd.

Applicant after: SHANDONG RIZHAO POWER GENERATION Co.,Ltd.

Address before: 271100 1340, NO.67 Huiyuan street, Laiwu high tech Zone, Jinan City, Shandong Province

Applicant before: Elite vision technology (Shandong) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant