CN116727381B - Integral acid steaming cleaning device and method thereof - Google Patents

Info

Publication number: CN116727381B
Application number: CN202311029194.3A
Authority: CN (China)
Prior art keywords: container, apparent, training, matrix, experimental container
Legal status: Active (the listed status is an assumption and is not a legal conclusion)
Other versions: CN116727381A (in Chinese)
Inventors: 石坚, 蔡亮, 刘建勋
Original and current assignee: Jining Jiude Semiconductor Technology Co ltd
Events: application filed by Jining Jiude Semiconductor Technology Co ltd; publication of CN116727381A; application granted; publication of CN116727381B; anticipated expiration.

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • B08B9/08 Cleaning containers, e.g. tanks
    • G06N3/0455 Auto-encoder networks; encoder-decoder networks
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G06N3/048 Activation functions
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G06N3/088 Non-supervised learning, e.g. competitive learning
    • G06V10/422 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/44 Local feature extraction, e.g. edges, contours, corners; connectivity analysis
    • G06V10/52 Scale-space analysis, e.g. wavelet analysis
    • G06V10/54 Extraction of features relating to texture
    • G06V10/56 Extraction of features relating to colour
    • G06V10/764 Recognition using classification, e.g. of video objects
    • G06V10/766 Recognition using regression, e.g. by projecting features on hyperplanes
    • G06V10/806 Fusion of extracted features
    • G06V10/82 Recognition using neural networks
    • B08B2230/01 Cleaning with steam


Abstract

An integral acid steaming cleaning device and method are disclosed. The device comprises an acid liquor storage tank, a heating device, a cleaning chamber, a steam transmission pipeline, an exhaust system, a camera arranged in the cleaning chamber, and a controller communicatively connected with the camera and the heating device for controlling the heating power value of the heating device. In this way, the device can adaptively recommend the heating power of the heating device based on the contamination of the experimental container being cleaned, so that the acid liquor steam is matched to that contamination, improving cleaning efficiency and thoroughness.

Description

Integral acid steaming cleaning device and method thereof
Technical Field
The present disclosure relates to the field of acid vapor cleaning devices, and more particularly, to an integrated acid vapor cleaning device and method thereof.
Background
The acid steam cleaning device is a device for cleaning an experiment container by utilizing acid steam generated by heating acid liquor. The device can effectively remove residues which are difficult to dissolve in water, such as metal ions, organic matters and the like in the experimental container.
However, residues in different experimental vessels differ in solubility and adhesion, whereas current acid steam cleaning equipment generally drives the heating device at a fixed heating power that cannot be adjusted to the degree of contamination of the experimental container. This may result in insufficient or excessive cleaning, wasting energy and acid.
Thus, an optimized acid vapor cleaning scheme is desired.
Disclosure of Invention
In view of this, the disclosure proposes an integral acid steam cleaning device and a method thereof, which can adaptively recommend the heating power of a heating device based on the contamination of the experimental container being cleaned, so that the acid liquor steam is matched to that contamination, improving cleaning efficiency and thoroughness.
According to an aspect of the present disclosure, there is provided an integrated acid vapor cleaning device including an acid liquid storage tank, a heating device, a cleaning chamber, a vapor transmission pipe, and an exhaust system, wherein the integrated acid vapor cleaning device further includes: a camera disposed within the cleaning chamber, and a controller communicatively coupled to the camera and the heating device, the controller for controlling a heating power value of the heating device.
According to another aspect of the present disclosure, there is provided an integral acid vapor cleaning method comprising: acquiring a detection image of the cleaned experimental container captured by a camera; performing image enhancement on the detection image to obtain an enhanced experimental container detection image; passing the enhanced experimental container detection image through a feature extractor based on a convolutional neural network model to obtain an experimental container apparent feature matrix; passing the experimental container apparent feature matrix through a spatial attention module to obtain a spatial-dimension-reinforced experimental container apparent feature matrix; processing the spatial-dimension-reinforced experimental container apparent feature matrix with a gradient-weighted activation mapping technique to obtain a visualized experimental container apparent feature matrix; performing decoding regression on the visualized experimental container apparent feature matrix through a decoder to obtain a decoded value representing a recommended heating power value; and controlling the heating power of the heating device based on the decoded value.
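The sequence of steps in the method above can be sketched as a simple pipeline. The function names and the clamping helper below are illustrative placeholders, not the patent's actual implementation; each stage is passed in as a callable because the concrete models are described elsewhere in the disclosure.

```python
def recommend_heating_power(image, enhance, extract, attend, weight, decode):
    """Chain the method's stages in order: image enhancement, CNN feature
    extraction, spatial attention, gradient-weighted activation mapping,
    and decoder regression to a recommended heating power value."""
    x = enhance(image)   # image enhancement (bilateral filtering)
    f = extract(x)       # CNN feature extractor -> apparent feature matrix
    f = attend(f)        # spatial attention reinforcement
    f = weight(f)        # gradient-weighted activation mapping
    return decode(f)     # decoding regression -> recommended power value


def clamp_power(power, lo, hi):
    """Keep the recommended value inside the heater's operating range before
    applying it (the range itself is device-specific and assumed here)."""
    return max(lo, min(hi, power))
```

With identity callables the pipeline simply passes the input through; in practice each stage would be one of the trained modules the description details.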
According to an embodiment of the present disclosure, the apparatus includes an acid storage tank, a heating device, a cleaning chamber, a vapor transmission line, an exhaust system, a camera disposed within the cleaning chamber, and a controller communicatively connected to the camera and the heating device for controlling the heating power value of the heating device. In this way, the heating power of the heating device can be adaptively recommended based on the contamination of the experimental container being cleaned, so that the acid liquor steam is matched to that contamination, improving cleaning efficiency and thoroughness.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a block diagram of a controller in an integrated acid steam cleaning device according to an embodiment of the present disclosure.
Fig. 2 shows a block diagram of the image feature extraction module in a controller in an integrated acid vapor cleaning device according to an embodiment of the present disclosure.
Fig. 3 shows a block diagram of the vessel cleaning apparent feature extraction unit in a controller in an integrated acid vapor cleaning apparatus according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of the characterization processing unit in the controller in the integrated acid vapor cleaning device according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of the gradient weighting subunit in a controller in an integrated acid vapor cleaning device, according to an embodiment of the disclosure.
Fig. 6 shows a block diagram of a training module further included in a controller in an integrated acid steam cleaning device, according to an embodiment of the present disclosure.
Fig. 7 shows a flow chart of an integrated acid vapor cleaning method according to an embodiment of the present disclosure.
Fig. 8 shows a schematic architecture diagram of an integrated acid vapor cleaning method according to an embodiment of the present disclosure.
Fig. 9 shows an application scenario diagram of a controller in an integrated acid steam cleaning device according to an embodiment of the present disclosure.
Detailed Description
The following description of the embodiments of the present disclosure will be made clearly and fully with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some, but not all embodiments of the disclosure. All other embodiments, which can be made by one of ordinary skill in the art without undue burden based on the embodiments of the present disclosure, are also within the scope of the present disclosure.
As used in this disclosure and in the claims, the terms "a," "an," and/or "the" are not specific to the singular and may include the plural unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the explicitly identified steps and elements are included; they do not constitute an exclusive list, and a method or apparatus may include other steps or elements.
Various exemplary embodiments, features and aspects of the disclosure will be described in detail below with reference to the drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits well known to those skilled in the art have not been described in detail in order not to obscure the present disclosure.
The present disclosure provides an integral acid steam cleaning device, comprising: an acid liquid storage tank; a heating device; a cleaning chamber; a vapor transmission pipeline; an exhaust system; a camera arranged in the cleaning chamber; and a controller communicatively connected with the camera and the heating device for controlling the heating power value of the heating device. It should be appreciated that:

    • Acid liquid storage tank: stores the acidic cleaning solution, typically a corrosive solution such as sulfuric acid or hydrochloric acid.
    • Heating device: heats the cleaning solution in the acid storage tank to provide the temperature required by the cleaning process.
    • Cleaning chamber: the main cleaning space, usually a closed container fitted with supporting structures and fixtures for holding the objects to be cleaned.
    • Vapor transmission pipeline: conveys the heated acidic cleaning liquid into the cleaning chamber in vapor form to clean the objects.
    • Exhaust system: discharges the waste gas and steam generated during cleaning to keep the environment of the cleaning chamber safe.
    • Camera: arranged in the cleaning chamber to monitor the cleaning process and provide real-time images or video.
    • Controller: communicatively connected with the camera and the heating device, and controls the heating power value of the heating device to adjust the temperature of the cleaning liquid.

Together these components realize the functions of the integral acid steaming cleaning device.
After the cleaning liquid in the acid liquid storage tank is heated by the heating device, it is conveyed into the cleaning chamber in vapor form through the steam transmission pipeline. In the cleaning chamber, the camera monitors the cleaning process while the controller adjusts the heating device to regulate the temperature of the cleaning liquid, and exhaust gas and steam generated during cleaning are discharged through the exhaust system.
In order to solve the technical problems in the background art, the technical concept of the disclosure is to adaptively recommend the heating power of the heating device based on the contamination of the experimental container being cleaned, so that the acid liquor steam is matched to that contamination, improving cleaning efficiency and thoroughness. That is, the acid steam cleaning process is monitored in real time, and real-time apparent contamination features of the container are extracted to adjust the heating power of the heating device, thereby controlling the amount of acid steam and improving the cleaning effect on the experimental container.
Based on this, in the technical scheme of the present disclosure, a detection image of the cleaned experimental container is first acquired by the camera. The detection image is then subjected to image enhancement (bilateral filtering) to obtain an enhanced experimental container detection image. Next, the enhanced experimental container detection image is passed through a feature extractor based on a convolutional neural network model to obtain an experimental container apparent feature matrix; that is, once the detection image has been enhanced, real-time apparent feature information about the contamination of the experimental container is captured. It should be noted that bilateral filtering is an image enhancement technique used to smooth an image while preserving its edge information. It combines spatial-domain and intensity-domain information, filtering by taking into account both the spatial distance between pixels and the difference between pixel values. In image enhancement, the bilateral filter reduces noise while preserving the detailed information of the image; compared with other filters, it better preserves edges and texture without edge blurring or image distortion. Using a bilateral filter during cleaning of the experimental vessel helps improve the quality of the detection image: by reducing noise and interference, it enables subsequent image processing algorithms to extract the apparent feature information of the experimental container more accurately, which improves the efficiency and accuracy of the cleaning device and helps ensure the experimental container is thoroughly cleaned.
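To make the bilateral filtering step concrete, here is a minimal, deliberately naive NumPy sketch of a grayscale bilateral filter. The parameter values are illustrative assumptions; a production system would use an optimized library implementation rather than this per-pixel loop.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_range=25.0):
    """Naive bilateral filter: each output pixel is a weighted mean of its
    neighbourhood, where weights combine spatial distance and intensity
    difference, so flat regions are smoothed but sharp edges survive."""
    img = img.astype(float)
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    # Precompute the spatial Gaussian kernel over the (2r+1)x(2r+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_space ** 2))
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel: penalize pixels whose intensity differs a lot.
            rng = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_range ** 2))
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

Applied to a step image (one half 0, the other 100), the filter leaves the step nearly intact, which is exactly the edge-preserving behaviour the description relies on.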
Fig. 1 shows a block diagram schematic of a controller in an integrated acid steam cleaning device according to an embodiment of the present disclosure. As shown in fig. 1, a controller 100 in an integrated acid vapor cleaning device according to an embodiment of the present disclosure includes: a detection image acquisition module 110, configured to acquire the detection image of the cleaned experimental container captured by the camera; an image feature extraction module 120, configured to perform image feature extraction on the detection image to obtain a visualized experimental container apparent feature matrix; and a heating power value control module 130, configured to determine a heating power value based on the visualized experimental container apparent feature matrix.
It is contemplated that during the acid vapor cleaning process, the detection images of the experimental vessel being cleaned may exhibit different feature distributions in the spatial dimension; for example, the bottom and top of the container may show different degrees of fouling and sedimentation. Therefore, in the technical solution of the present disclosure, it is desirable to apply visualization processing to the experimental vessel apparent feature matrix extracted by the convolutional-neural-network-based feature extractor, so that the model attends to and highlights the regional features that are closely related to the cleaning effect. In this way, important features can be more strongly focused on and enhanced.
In a specific example of the disclosure, the visualization processing of the experimental container apparent feature matrix extracted by the convolutional-neural-network-based feature extractor proceeds as follows: first, the experimental container apparent feature matrix is passed through a spatial attention module to obtain a spatial-dimension-reinforced experimental container apparent feature matrix; then, the spatial-dimension-reinforced experimental container apparent feature matrix is processed with a gradient-weighted activation mapping technique to obtain the visualized experimental container apparent feature matrix.
Accordingly, in one example, as shown in fig. 2, the image feature extraction module 120 includes: a container-cleaning apparent feature extraction unit 121 for extracting container-cleaning apparent features from the detected image of the experimental container to be cleaned to obtain an experimental container apparent feature matrix; and a feature visualization processing unit 122, configured to perform feature visualization processing on the apparent feature matrix of the experimental container to obtain a visualized apparent feature matrix of the experimental container.
Wherein, as shown in fig. 3, the container cleaning apparent feature extraction unit 121 includes: an image enhancement subunit 1211, configured to perform image enhancement on the detection image of the cleaned experimental container to obtain an enhanced detection image of the experimental container; and a convolution feature extraction subunit 1212 configured to pass the enhanced experimental vessel detection image through a feature extractor based on a convolution neural network model to obtain the experimental vessel apparent feature matrix. It should be appreciated that convolutional neural networks (Convolutional Neural Network, CNN) are a deep learning model, particularly suited for image processing and computer vision tasks. The convolutional neural network model is excellent in tasks such as image recognition, target detection, image classification and the like. The method extracts local features of the image through a plurality of convolution layers and a pooling layer, and performs tasks such as classification or regression through a full connection layer. The convolution layer filters the image and extracts the characteristics through convolution operation, and the pooling layer is used for reducing the dimension and parameter quantity of the characteristic image and enhancing the robustness of the model. In the integrated acid vapor cleaning device, a convolutional neural network model is used as a tool for vessel cleaning apparent feature extraction. Specifically, the image enhancement subunit 1211 performs image enhancement processing on the detected image of the cleaned experimental container to improve the quality and definition of the image. The enhanced experimental vessel inspection image is then processed through a feature extractor based on a convolutional neural network model, from which an apparent feature matrix of the experimental vessel is extracted. 
These apparent feature matrices may contain information about the shape, texture, color, etc. of the container for subsequent container cleaning analysis and processing. By using a convolutional neural network model, the apparent features of the vessel can be effectively extracted from the image, providing a more accurate and reliable analysis and processing capability for the vessel cleaning device.
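As a rough illustration of what the convolution, activation, and pooling layers described above compute, the following NumPy sketch implements the three basic building blocks. It is not the patent's trained feature extractor; kernel values and layer arrangement are assumptions for demonstration.

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2-D convolution (strictly, cross-correlation, as is the
    convention in CNNs): slide the kernel and take windowed dot products."""
    kh, kw = kernel.shape
    h = img.shape[0] - kh + 1
    w = img.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def relu(x):
    """Rectified linear activation: keep positives, zero out negatives."""
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    """Non-overlapping k x k max pooling (trailing rows/cols are dropped
    if the dimensions are not divisible by k)."""
    h, w = x.shape
    return x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).max(axis=(1, 3))
```

A feature extractor in the sense of the description stacks several such conv/ReLU/pool stages and flattens the result into the apparent feature matrix.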
As shown in fig. 4, the feature visualization processing unit 122 includes: a spatial attention encoding subunit 1221, configured to pass the experimental container apparent feature matrix through a spatial attention module to obtain the spatial-dimension-reinforced experimental container apparent feature matrix; and a gradient weighting subunit 1222, configured to process the spatial-dimension-reinforced experimental vessel apparent feature matrix using a gradient-weighted activation mapping technique to obtain the visualized experimental vessel apparent feature matrix. It should be appreciated that the spatial attention module is a technique for enhancing image features: during feature extraction it weights features at different spatial locations to increase attention to important features. In the feature visualization processing unit, the spatial attention encoding subunit uses a spatial attention module to process the experimental container apparent feature matrix. Through the spatial attention module, different spatial locations of the matrix can be weighted to highlight important feature areas and suppress unimportant background information, making the apparent features of the experimental container more prominent and facilitating subsequent image analysis and processing tasks. The gradient weighting subunit then applies the gradient-weighted activation mapping technique to the spatially attention-encoded apparent feature matrix, further enhancing the edge and detail information of the features so that the visualized apparent features of the experimental container are clearer and more distinct.
The function of the spatial attention module in the feature visualization processing is thus to highlight important features and enhance the detail information of the image by weighting different spatial positions of the experimental container apparent feature matrix, yielding a visualized experimental container apparent feature matrix with better expressiveness and discriminability.
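One common way to realize the spatial weighting just described is to pool the feature tensor across channels, squash the pooled map into (0, 1), and multiply it back onto every spatial position. The sketch below is a toy version of that idea; the pooling choice (mean plus max) and the parameter-free squashing are assumptions, since a real module would learn a small convolution over the pooled maps.

```python
import numpy as np

def sigmoid(x):
    """Squash values into (0, 1) to act as attention weights."""
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(features):
    """Toy spatial attention over an (H, W, C) feature tensor: pool across
    channels (mean + max), squash to (0, 1), then reweight each spatial
    position of the feature tensor by its attention weight."""
    pooled = features.mean(axis=-1) + features.max(axis=-1)  # (H, W)
    weights = sigmoid(pooled)                                # attention map
    return features * weights[..., None]                     # broadcast over C
```

Because the weights lie in (0, 1), salient spatial positions are preserved almost unchanged while low-response background positions are suppressed, which matches the "highlight and suppress" behaviour described above.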
It is worth mentioning that the gradient-weighted activation mapping (Gradient-weighted Class Activation Mapping, Grad-CAM) technique is a method for visualizing the attention of deep neural network models; it helps show which image areas the model focuses on when making predictions. In a deep neural network, each neuron responds to some characteristic of the input image to a different degree. Grad-CAM measures a neuron's importance to the model's prediction by the product of its gradient and its activation value, producing an importance weight map that visualizes the model's degree of interest in regions of the input image. Its main function is to explain the model's prediction result and help understand its decision basis: by visualizing the attended area, one can determine which image features the model relies on when making predictions, thereby enhancing the interpretability and explainability of those predictions. In the feature visualization process, the gradient weighting subunit applies this technique to the spatially attention-encoded experimental container apparent feature matrix; by computing the product of gradients and activation values, the importance weight of each feature map is obtained. In this way, the model's attention to the apparent features of the experimental container can be visualized, its weighting of different features during cleaning can be understood, and the efficiency and accuracy of the cleaning device can be further improved.
Further, as shown in fig. 5, the gradient weighting subunit 1222 includes: a gradient value calculation secondary subunit 12221, configured to calculate a gradient value of each feature value in the spatial dimension enhancement experimental container apparent feature matrix to obtain a gradient feature matrix; an activation secondary subunit 12222, configured to perform an activation operation on each feature value in the gradient feature matrix by using a ReLU function to obtain an activation feature matrix; a normalization processing secondary subunit 12223, configured to normalize the activation feature matrix to obtain a normalized activation feature matrix; and an element-by-element weighting secondary subunit 12224, configured to perform element-by-element weighting processing on the spatial dimension enhancement experimental container apparent feature matrix with each feature value in the normalized activation feature matrix as a weight, so as to obtain the visualization experimental container apparent feature matrix. It should be understood that the gradient value refers to the gradient of each feature value in the spatial dimension enhancement experimental container apparent feature matrix; a gradient is the rate of change, or slope, of a function at a certain point, and in a deep neural network the gradient can be used to measure how sensitive the model's prediction result is to an input feature. The gradient value calculation secondary subunit 12221 obtains the gradient feature matrix by differentiating the feature values, that is, by calculating the gradient of each feature value with respect to the model prediction result.
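The four secondary subunits 12221–12224 can be sketched as one NumPy function. This is a hedged illustration: `gradient_matrix` stands in for the already-computed gradient of the prediction with respect to each feature value, and min-max normalization is an assumption, since the text only says "normalize":

```python
import numpy as np

def gradient_weighting_subunit(feature_matrix, gradient_matrix):
    """Sketch of subunits 12221-12224 (names and normalization are illustrative)."""
    # 12221: gradient value of each feature value -> gradient feature matrix
    grad = np.asarray(gradient_matrix, dtype=float)
    # 12222: ReLU activation on each gradient value -> activation feature matrix
    activated = np.maximum(grad, 0.0)
    # 12223: min-max normalization into [0, 1] -> normalized activation matrix
    span = activated.max() - activated.min()
    normalized = ((activated - activated.min()) / span
                  if span > 0 else np.zeros_like(activated))
    # 12224: element-by-element weighting of the input feature matrix
    return feature_matrix * normalized

features = np.ones((2, 2))
grads = np.array([[-1.0, 0.0], [1.0, 3.0]])
weighted = gradient_weighting_subunit(features, grads)
```

Feature positions whose gradients are negative or small receive weights near zero, so only the locations the model is most sensitive to survive into the visualization matrix.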
The ReLU function, short for Rectified Linear Unit, is a commonly used activation function widely applied in deep neural networks. Its advantages are a simple computational form and good nonlinear expressive capability; compared with other activation functions (such as the sigmoid and tanh functions), ReLU better alleviates the vanishing gradient problem during training, thereby improving the training speed and performance of the model. In addition, ReLU preserves the original information of positive-valued inputs, which facilitates the extraction of effective feature representations. The activation secondary subunit 12222 uses the ReLU function to perform an activation operation on each feature value in the gradient feature matrix to obtain the activation feature matrix. Normalization is the process of converting data into a standard form within a specific range; in machine learning and data analysis it is typically used to bring data of different scales and units onto a unified scale for better comparison and analysis. Its main purpose is to eliminate dimensional differences between data so that different features become comparable: through normalization, data can be mapped to a fixed range, for example between 0 and 1 or between -1 and 1, making the data distribution more uniform and preventing certain features from exerting an excessive influence on model training.
Normalization can also make the model converge more quickly, because limiting the range of the feature values to a smaller interval reduces oscillation and swinging during gradient descent. If the value range of one feature is far greater than that of the others, the model may over-rely on that feature and ignore the influence of the rest; normalization balances the contribution of each feature to the model, and also makes the model more robust to noise and anomalies in the input data.
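The two fixed ranges mentioned above can be illustrated with a tiny NumPy example (the data values are arbitrary):

```python
import numpy as np

x = np.array([2.0, 5.0, 11.0])
unit = (x - x.min()) / (x.max() - x.min())   # min-max mapped to [0, 1]
sym = 2.0 * unit - 1.0                        # rescaled to [-1, 1]
```

After this, `unit` spans exactly [0, 1] and `sym` exactly [-1, 1], so no single feature dominates purely because of its original scale.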
Then, decoding regression is performed on the visualization experimental container apparent feature matrix through a decoder to obtain a decoded value, wherein the decoded value is used for representing the recommended heating power value. The decoder is the part of a neural network used to map encoded feature vectors back to the original data space. In the above case, the decoder is configured to decode and regress the experimental container apparent feature matrix processed by the gradient weighted activation mapping technique into a heating power value. Decoders are often used in models such as autoencoders. An autoencoder is an unsupervised learning model whose goal is to encode the input data through an encoder and then reconstruct the original data through a decoder; the decoder's role is to restore the structure and characteristics of the original data so that the reconstruction is as close as possible to the original. In this case, the decoder remaps the experimental container apparent feature matrix processed by the gradient weighted activation mapping technique to a heating power value. The purpose is to recommend an appropriate heating power value based on the model's degree of attention to the apparent features of the experimental container. The design and training of the decoder can be adjusted according to the specific task and model architecture to obtain the best decoding effect.
Accordingly, the heating power value control module 130 is further configured to: perform decoding regression on the visualization experimental container apparent feature matrix through a decoder to obtain a decoded value, wherein the decoded value is used for representing the recommended heating power value.
Further, in the technical solution of the present disclosure, the controller further includes a training module for training the convolutional neural network model-based feature extractor, the spatial attention module, and the decoder. It should be appreciated that the training module plays a vital role in this solution: it trains these components so that they can work effectively and provide accurate results. The main functions of the training module include the following aspects:
1. Parameter optimization: the training module makes the model fit the training data better by optimizing the model parameters; through a back propagation algorithm and an optimizer, it adjusts the parameters according to the loss function so that the model's predicted results on the training data are as close as possible to the actual results.
2. Model selection: the training module can select an appropriate model architecture and components according to the specific task requirements; for example, the convolutional neural network-based feature extractor, the decoder, and the spatial attention module used herein are all trained and selected by the training module.
3. Hyper-parameter adjustment: the training module can also adjust the hyper-parameters of the model, such as the learning rate, batch size and regularization parameters; by tuning these hyper-parameters during training, the optimal combination can be found, improving the performance and robustness of the model.
4. Monitoring and evaluation: the training module can monitor the performance and progress of the model during training and evaluate it according to evaluation indices, which helps track the training status, discover problems in time, and take corresponding adjustment measures.
The training module plays a key role in the technical scheme, and the model can adapt to the requirements of specific tasks and provides accurate and reliable results through training and parameter optimization of the model.
As shown in fig. 6, the training module 200 includes: a training data acquisition unit 210 for acquiring training data including a training detection image of the cleaned experimental container acquired by the camera and a true value of the recommended heating power value; a training image enhancement unit 220, configured to perform image enhancement on the training detection image of the cleaned experimental container to obtain a training enhanced experimental container detection image; a training convolutional feature extraction unit 230, configured to pass the training enhanced experimental container detection image through the convolutional neural network model-based feature extractor to obtain a training experimental container apparent feature matrix; a training spatial attention encoding unit 240, configured to pass the training experimental container apparent feature matrix through the spatial attention module to obtain a training spatial dimension enhancement experimental container apparent feature matrix; a training gradient weighting unit 250, configured to process the training spatial dimension enhancement experimental container apparent feature matrix by using the gradient weighted activation mapping technique to obtain a training visualization experimental container apparent feature matrix; a training decoding unit 260, configured to decode the training visualization experimental container apparent feature matrix through the decoder; a factor calculation unit 270, configured to calculate a convex decomposition consistency factor of the feature matrix manifold of the training visualization experimental container apparent feature matrix; and a loss training unit 280 for training the convolutional neural network model-based feature extractor, the spatial attention module, and the decoder with a weighted sum of the decoding loss function value and the convex decomposition consistency factor as the loss function value.
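The loss used by the loss training unit 280 can be sketched as follows. The squared-error decoding loss and the weighting hyper-parameter `lam` are assumptions for illustration, since the text fixes neither the decoding loss nor the weighting:

```python
def training_loss(pred_power, true_power, consistency_factor, lam=0.1):
    """Weighted sum of the decoding loss and the convex decomposition
    consistency factor (squared error and `lam` are illustrative choices)."""
    decode_loss = (pred_power - true_power) ** 2
    return decode_loss + lam * consistency_factor

# Illustrative values: prediction 9.0 W vs. ground truth 10.0 W,
# consistency factor 2.0, weight 0.5
loss = training_loss(9.0, 10.0, consistency_factor=2.0, lam=0.5)
```

This scalar would then be backpropagated through the decoder, spatial attention module, and feature extractor to update all three jointly.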
In the technical scheme of the present disclosure, after the training enhanced experimental container detection image passes through the convolutional neural network model-based feature extractor, the resulting training experimental container apparent feature matrix expresses the locally associated features of the image-source semantics of the detection image. After it further passes through the spatial attention module, the image semantic feature distribution in certain local regions is strengthened, enhancing the local feature expression effect of the training spatial dimension enhancement experimental container apparent feature matrix.
After the training spatial dimension enhancement experimental container apparent feature matrix is processed with the gradient weighted activation mapping technique, the resulting training visualization experimental container apparent feature matrix further sharpens the distribution boundaries between differentiated local feature distributions, which strengthens local feature expression. However, the matrix then has poor consistency in the spatial distribution dimension, which degrades the regression effect of the decoding regression performed through the decoder.
Therefore, in order to keep the manifold expression of the training visualization experimental container apparent feature matrix in the high-dimensional feature space consistent across the different spatial distribution dimensions corresponding to the row and column directions, the applicant of the present disclosure introduces, as a loss function, a convex decomposition consistency factor of the feature matrix manifold of the training visualization experimental container apparent feature matrix.
Accordingly, in a specific example, the factor calculation unit 270 is configured to calculate the convex decomposition consistency factor of the feature matrix manifold of the training visualization experimental container apparent feature matrix according to a factor calculation formula in which: m(i,j) is the feature value at position (i,j) of the training visualization experimental container apparent feature matrix M; μ_row and μ_col are, respectively, the mean vector of the row vectors and the mean vector of the column vectors of M; ||·|| denotes the norm of a vector and ||·||_F denotes the Frobenius norm of a matrix; W and H are the width and height of M; α, β and γ are weight hyper-parameters; ⊗ denotes vector multiplication; and the result of the formula is the convex decomposition consistency factor of the feature matrix manifold of M.
That is, considering the separate spatial-dimension representations of the row and column dimensions of the training visualization experimental container apparent feature matrix, the manifold convex decomposition consistency factor flattens, with respect to the distribution differences in the sub-dimensions represented by the rows and columns, the set of finite convex polytopes of the feature manifold represented by the matrix, and constrains the geometric convex decomposition in the form of sub-dimension-associated shape weights, so as to promote the consistency of the convex geometric representation of the feature manifold in the decomposable dimensions represented by the rows and columns. In this way, the manifold representation of the training visualization experimental container apparent feature matrix in the high-dimensional feature space is kept consistent across the different distribution dimensions corresponding to the row and column directions, thereby improving the regression effect of the decoding regression performed on the matrix through the decoder.
In summary, the controller 100 in the integral acid vapor cleaning device according to the embodiment of the present disclosure has been illustrated. It can adaptively recommend the heating power of the heating device based on the contamination condition of the cleaned experimental container, so that the acid liquid steam is adapted to that contamination condition, improving cleaning efficiency and thoroughness.
As described above, the controller 100 in the integrated acid vapor cleaning apparatus according to the embodiment of the present disclosure may be implemented in various terminal devices, such as a server having an integrated acid vapor cleaning algorithm, and the like. In one example, the controller 100 in the integrated acid vapor cleaning apparatus may be integrated into the terminal device as a software module and/or hardware module. For example, the controller 100 in the integrated acid vapor cleaning apparatus may be a software module in the operating system of the terminal device, or may be an application developed for the terminal device; of course, the controller 100 in the integrated acid vapor cleaning apparatus may also be one of the numerous hardware modules of the terminal device.
Alternatively, in another example, the controller 100 in the integrated acid vapor cleaning apparatus and the terminal device may be separate devices, and the controller 100 may be connected to the terminal device through a wired and/or wireless network and transmit interactive information in an agreed data format.
Fig. 7 shows a flow chart of an integrated acid vapor cleaning method according to an embodiment of the present disclosure. Fig. 8 shows a schematic diagram of a system architecture of an integrated acid vapor cleaning method according to an embodiment of the present disclosure. As shown in fig. 7 and 8, an integrated acid vapor cleaning method according to an embodiment of the present disclosure includes: s110, acquiring a detection image of a cleaned experimental container acquired by a camera; s120, carrying out image enhancement on the detection image of the cleaned experimental container to obtain an enhanced experimental container detection image; s130, passing the enhanced experimental container detection image through a feature extractor based on a convolutional neural network model to obtain an apparent feature matrix of the experimental container; s140, passing the apparent characteristic matrix of the experimental container through a spatial attention module to obtain an apparent characteristic matrix of the spatial dimension reinforced experimental container; s150, processing the apparent characteristic matrix of the spatial dimension enhanced experimental container by using a gradient weighted activation mapping technology to obtain an apparent characteristic matrix of the visualized experimental container; s160, carrying out decoding regression on the apparent characteristic matrix of the visualization experiment container through a decoder to obtain a decoding value, wherein the decoding value is used for representing a recommended heating power value; and S170, controlling the heating power of the heating device based on the decoded value.
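The S110–S170 sequence can be expressed as a simple pipeline of pluggable stages. In this sketch every stage is an illustrative stand-in callable, since the patent does not define concrete implementations here:

```python
def acid_vapor_cleaning_cycle(camera, enhance, extract, attend,
                              grad_weight, decode, heater):
    """One control cycle of the acid vapor cleaning method (stage names
    are illustrative stand-ins for the patent's modules)."""
    image = camera()                 # S110: acquire detection image
    image = enhance(image)           # S120: image enhancement
    feats = extract(image)           # S130: CNN-based feature extractor
    feats = attend(feats)            # S140: spatial attention module
    feats = grad_weight(feats)       # S150: gradient weighted activation mapping
    power = decode(feats)            # S160: decoding regression -> scalar value
    heater(power)                    # S170: control heating power
    return power

applied = []
power = acid_vapor_cleaning_cycle(
    camera=lambda: 1.0,              # toy stand-in stages for illustration
    enhance=lambda x: x * 2,
    extract=lambda x: x + 1,
    attend=lambda x: x,
    grad_weight=lambda x: x,
    decode=lambda x: x * 10,
    heater=applied.append,
)
```

Structuring the cycle this way keeps each step independently testable and mirrors the module decomposition of controller 100.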
Here, it will be understood by those skilled in the art that the specific operations of the respective steps in the above-described integrated acid steam cleaning method have been described in detail in the above description of the controller in the integrated acid steam cleaning apparatus with reference to fig. 1 to 6, and thus, repetitive descriptions thereof will be omitted.
Fig. 9 shows an application scenario diagram of a controller in an integrated acid steam cleaning device according to an embodiment of the present disclosure. As shown in fig. 9, in this application scenario, first, a detected image of a cleaned experimental container (e.g., D shown in fig. 9) acquired by a camera is acquired, then, the detected image of the cleaned experimental container is input into a server (e.g., S shown in fig. 9) in which an integral acid vapor cleaning algorithm is deployed, wherein the server can process the detected image of the cleaned experimental container using the integral acid vapor cleaning algorithm to obtain a decoded value for representing a recommended heating power value, and finally, the heating power of the heating device is controlled based on the decoded value.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of the embodiments of the present disclosure has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (3)

1. An integral acid vapor cleaning device, comprising an acid liquid storage tank, a heating device, a cleaning chamber, a steam transmission pipeline and an exhaust system, characterized in that the integral acid vapor cleaning device further comprises: a camera disposed within the cleaning chamber, and a controller communicatively coupled to the camera and the heating device, the controller being configured to control a heating power value of the heating device;
wherein, the controller includes: the detection image acquisition module is used for acquiring detection images of the cleaned experimental container acquired by the camera; the image feature extraction module is used for extracting image features of the detection images of the cleaned experimental container to obtain an apparent feature matrix of the display experimental container; the heating power value control module is used for determining the heating power value based on the apparent characteristic matrix of the visualization experiment container;
wherein, the image feature extraction module includes: a container cleaning apparent feature extraction unit for extracting container cleaning apparent features from the detection image of the cleaned experimental container to obtain an experimental container apparent feature matrix; the characteristic display processing unit is used for carrying out characteristic display processing on the apparent characteristic matrix of the experimental container so as to obtain an apparent characteristic matrix of the display experimental container;
wherein the container cleaning apparent feature extraction unit includes: the image enhancement subunit is used for enhancing the image of the detection image of the cleaned experimental container to obtain an enhanced detection image of the experimental container; the convolution feature extraction subunit is used for enabling the enhanced experimental container detection image to pass through a feature extractor based on a convolution neural network model so as to obtain an apparent feature matrix of the experimental container;
wherein, the characteristic display processing unit includes: the space attention coding subunit is used for passing the apparent characteristic matrix of the experimental container through a space attention module to obtain the apparent characteristic matrix of the spatial dimension reinforced experimental container; the gradient weighting subunit is used for processing the apparent characteristic matrix of the spatial dimension enhancement experimental container by using a gradient weighting activation mapping technology so as to obtain the apparent characteristic matrix of the visualization experimental container;
wherein the gradient weighting subunit comprises: the gradient value calculation secondary subunit is used for calculating the gradient value of each characteristic value in the apparent characteristic matrix of the spatial dimension reinforcement experiment container so as to obtain a gradient characteristic matrix; the activation secondary subunit is used for performing activation operation on each characteristic value in the gradient characteristic matrix by using a ReLU function so as to obtain an activation characteristic matrix; the normalization processing second-level subunit is used for performing normalization processing on the activated feature matrix to obtain a normalized activated feature matrix; the element-by-element weighting secondary subunit is used for carrying out element-by-element weighting treatment on the apparent characteristic matrix of the spatial dimension reinforcement experimental container by taking each characteristic value in the normalized activated characteristic matrix as a weight so as to obtain the apparent characteristic matrix of the display experimental container;
the controller further comprises a training module for training the feature extractor, the spatial attention module and the decoder based on the convolutional neural network model; wherein, training module includes: the training data acquisition unit is used for acquiring training data, wherein the training data comprises training detection images of the cleaned experimental container acquired by the camera and a recommended heating power value; the training image enhancement unit is used for enhancing the training detection image of the cleaned experimental container to obtain a training enhanced experimental container detection image; the training convolutional feature extraction unit is used for enabling the training enhancement experimental container detection image to pass through the feature extractor based on the convolutional neural network model so as to obtain a training experimental container apparent feature matrix; the training space attention coding unit is used for passing the training experiment container apparent characteristic matrix through the space attention module to obtain a training space dimension strengthening experiment container apparent characteristic matrix; the training gradient weighting unit is used for processing the training space dimension reinforcement experimental container apparent feature matrix by using a gradient weighting activation mapping technology so as to obtain a training visualization experimental container apparent feature matrix; the training decoding unit is used for decoding the apparent feature matrix of the training visualization experiment container through a decoder; the factor calculation unit is used for calculating the convex decomposition consistency factor of the feature matrix manifold of the training visualization experiment container apparent feature matrix; and a loss training unit for training the convolutional neural network model-based feature extractor, the spatial attention module, and the 
decoder with a weighted sum of the decoding loss function value and the convex decomposition consistency factor as a loss function value;
wherein the factor calculation unit is configured to calculate the convex decomposition consistency factor of the feature matrix manifold of the training visualization experimental container apparent feature matrix according to a factor calculation formula in which: m(i,j) is the feature value at position (i,j) of the training visualization experimental container apparent feature matrix M; μ_row and μ_col are, respectively, the mean vector of the row vectors and the mean vector of the column vectors of M; ||·|| denotes the norm of a vector and ||·||_F denotes the Frobenius norm of a matrix; W and H are the width and height of M; α, β and γ are weight hyper-parameters; ⊗ denotes vector multiplication; and the result of the formula is the convex decomposition consistency factor of the feature matrix manifold of M.
2. The integral acid steam cleaning device of claim 1, wherein the heating power value control module is further configured to: perform decoding regression on the visualization experimental container apparent feature matrix through a decoder to obtain a decoded value, wherein the decoded value is used for representing the recommended heating power value.
3. A method of integral acid steam cleaning using the cleaning apparatus of any one of claims 1-2, comprising: acquiring a detection image of the cleaned experimental container acquired by a camera; performing image enhancement on the detection image of the cleaned experimental container to obtain an enhanced detection image of the experimental container; the enhanced experimental container detection image passes through a feature extractor based on a convolutional neural network model to obtain an apparent feature matrix of the experimental container; the apparent characteristic matrix of the experimental container passes through a spatial attention module to obtain an apparent characteristic matrix of the spatial dimension reinforced experimental container; processing the apparent characteristic matrix of the spatial dimension enhanced experimental container by using a gradient weighted activation mapping technology to obtain an apparent characteristic matrix of the display experimental container; performing decoding regression on the apparent characteristic matrix of the visualization experiment container through a decoder to obtain a decoding value, wherein the decoding value is used for representing a recommended heating power value; and controlling heating power of the heating device based on the decoded value.
CN202311029194.3A 2023-08-16 2023-08-16 Integral acid steaming cleaning device and method thereof Active CN116727381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311029194.3A CN116727381B (en) 2023-08-16 2023-08-16 Integral acid steaming cleaning device and method thereof

Publications (2)

Publication Number Publication Date
CN116727381A CN116727381A (en) 2023-09-12
CN116727381B true CN116727381B (en) 2023-11-03

Family

ID=87902999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311029194.3A Active CN116727381B (en) 2023-08-16 2023-08-16 Integral acid steaming cleaning device and method thereof

Country Status (1)

Country Link
CN (1) CN116727381B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105344683A (en) * 2015-12-09 2016-02-24 郝东辉 Acid-steaming washer and acid-steaming washing method
CN105382000A (en) * 2015-12-09 2016-03-09 郝东辉 Acid steaming cleaning device provided with integrated liquid-level tube, liquid filling funnel and liquid waste discharging valve
CN107485356A (en) * 2017-09-01 2017-12-19 佛山市顺德区美的洗涤电器制造有限公司 The control method of washing and device and dish-washing machine of dish-washing machine
CN109376599A (en) * 2018-09-19 2019-02-22 中国科学院东北地理与农业生态研究所 A kind of remote sensing image processing method and system extracted towards wetland information
CN116019419A (en) * 2022-12-06 2023-04-28 北京理工大学 Dynamic closed-loop brain function topological graph measurement and parting system for tactile perception

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111860612B (en) * 2020-06-29 2021-09-03 西南电子技术研究所(中国电子科技集团公司第十研究所) Unsupervised hyperspectral image hidden low-rank projection learning feature extraction method
CN113393025A (en) * 2021-06-07 2021-09-14 浙江大学 Non-invasive load decomposition method based on Informer model coding structure


Similar Documents

Publication Publication Date Title
CN110796637A (en) Training and testing method and device of image defect detection model and storage medium
CN114902279A (en) Automated defect detection based on machine vision
Morales et al. Feature forwarding for efficient single image dehazing
Chen et al. Image‐denoising algorithm based on improved K‐singular value decomposition and atom optimization
CN108399620B (en) Image quality evaluation method based on low-rank sparse matrix decomposition
CN109740254B (en) Ship diesel engine abrasive particle type identification method based on information fusion
WO2022116616A1 (en) Behavior recognition method based on conversion module
CN111402249B (en) Image evolution analysis method based on deep learning
Gan et al. AutoBCS: Block-based image compressive sensing with data-driven acquisition and noniterative reconstruction
CN116727381B (en) Integral acid steaming cleaning device and method thereof
CN113538258B (en) Mask-based image deblurring model and method
Dorta et al. Training VAEs under structured residuals
Chandra et al. Neural network trained morphological processing for the detection of defects in woven fabric
Lei et al. Adaptive convolution confidence sieve learning for surface defect detection under process uncertainty
Huang et al. Underwater image enhancement via LBP‐based attention residual network
CN113034371A (en) Infrared and visible light image fusion method based on feature embedding
CN114550460B (en) Rail transit anomaly detection method and device and storage medium
Browne et al. Wavelet entropy-based feature extraction for crack detection in sewer pipes
JPWO2021181627A5 (en) Image processing device, image recognition system, image processing method and image processing program
Shaliniswetha et al. Residual learning based image denoising and compression using DnCNN
CN117252881B (en) Bone age prediction method, system, equipment and medium based on hand X-ray image
CN113435455B (en) Image contour extraction method based on space-time pulse coding
Shah Iris recognition through transfer learning and exponential scaling
Wang Neural Networks-based Image Denoising Methods
CN117409376B (en) Infrared online monitoring method and system for high-voltage sleeve

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant