CN117288830A - Battery quality on-line detection method, device and equipment


Info

Publication number
CN117288830A
CN117288830A (application CN202311189444.XA)
Authority
CN
China
Prior art keywords
ultrasonic signal
target
feature
acoustic
features
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311189444.XA
Other languages
Chinese (zh)
Inventor
薛志祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority to CN202311189444.XA
Publication of CN117288830A
Legal status: Pending

Classifications

    • G01N 29/02: Analysing fluids by the use of ultrasonic, sonic or infrasonic waves
    • G01N 29/024: Analysing fluids by measuring propagation velocity or propagation time of acoustic waves
    • G01N 29/028: Analysing fluids by measuring mechanical or acoustic impedance
    • G01N 29/032: Analysing fluids by measuring attenuation of acoustic waves
    • G01N 29/44: Processing the detected response signal, e.g. electronic circuits specially adapted therefor
    • G01N 29/4472: Mathematical theories or simulation
    • G01N 29/4481: Neural networks
    • G01N 21/1702: Systems in which incident light is modified in accordance with the properties of the material investigated, with opto-acoustic detection, e.g. for gases or analysing solids
    • G01N 2291/011: Indexing code (measuring variable): velocity or travel time
    • G01N 2291/015: Indexing code (measuring variable): attenuation, scattering
    • G01N 2291/018: Indexing code (measuring variable): impedance
    • G01N 2291/022: Indexing code (analysed material): liquids
    • Y02E 60/10: Energy storage using batteries

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Pathology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Immunology (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Algebra (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Investigating Or Analyzing Materials By The Use Of Ultrasonic Waves (AREA)

Abstract

The application provides a battery quality online detection method, device and equipment, wherein the method comprises the following steps: acquiring, in a non-contact manner, an initial ultrasonic signal from the surface of the slurry to be detected, the slurry to be detected being obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material; acquiring a target ultrasonic signal corresponding to the initial ultrasonic signal; acquiring acoustic features and signal features corresponding to the target ultrasonic signal, and fusing the acoustic features and the signal features to obtain fused features; predicting the material composition of the slurry to be detected based on the fused features; and detecting the battery quality corresponding to the slurry to be detected based on the material composition. Through the technical scheme of the application, ultrasonic signals can be collected during slurry stirring, features of different material compositions can be extracted from the ultrasonic signals, and online detection of battery quality can be realized during the stirring process, significantly improving battery quality.

Description

Battery quality on-line detection method, device and equipment
Technical Field
The present disclosure relates to the field of battery management technologies, and in particular, to a method, an apparatus, and a device for online detection of battery quality.
Background
Slurry stirring is a key step in the lithium battery manufacturing process: it is the starting point of the front-stage process and the basis for the subsequent coating, rolling and other processes, and therefore often determines the battery quality. Referring to fig. 1, which shows a schematic flow chart of slurry stirring, the positive electrode solid-state battery material and the negative electrode solid-state battery material can be mixed to obtain slurry, and the slurry is then stirred by means of blade revolution, dispersion-disk rotation and the like, with a solvent added during stirring. After stirring is completed, the material may be discharged.
Different degrees of agitation affect the material composition of the slurry and, in turn, the battery quality. For example, if the stirring is not uniform, the material composition of the slurry may not conform to the expected composition, which in turn may lead to poor battery quality. However, the related art provides no reliable method for detecting the battery quality.
Disclosure of Invention
The application provides a battery quality online detection method, which comprises the following steps:
acquiring, in a non-contact manner, an initial ultrasonic signal from the surface of the slurry to be detected; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material;
Acquiring a target ultrasonic signal corresponding to the initial ultrasonic signal;
acquiring acoustic features corresponding to the target ultrasonic signals and signal features corresponding to the target ultrasonic signals, and fusing the acoustic features and the signal features to obtain fused features;
predicting material components corresponding to the slurry to be detected based on the fused features;
and detecting the battery quality corresponding to the slurry to be detected based on the material composition.
The application provides a battery quality on-line measuring device, the device includes:
the acquisition module is used for acquiring, in a non-contact manner, an initial ultrasonic signal from the surface of the slurry to be detected; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material;
the acquisition module is used for acquiring a target ultrasonic signal corresponding to the initial ultrasonic signal;
the processing module is used for acquiring acoustic features corresponding to the target ultrasonic signals and signal features corresponding to the target ultrasonic signals, and fusing the acoustic features and the signal features to obtain fused features; predicting material components corresponding to the slurry to be detected based on the fused features;
And the detection module is used for detecting the battery quality corresponding to the slurry to be detected based on the material composition.
The application provides an electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute the machine executable instructions to implement the battery quality online detection method of the above example.
According to the technical scheme, in the embodiment of the application, the ultrasonic signal on the surface of the slurry to be detected can be acquired in a non-contact manner, the material composition of the slurry to be detected is predicted based on the ultrasonic signal, and the battery quality corresponding to the slurry to be detected is detected based on the material composition. This provides a reliable way of detecting battery quality, so that the battery quality corresponding to the slurry to be detected can be detected accurately. Laser ultrasound is used to collect real-time signals during slurry stirring, and a machine learning algorithm analyses the material composition in real time, so that the quality of the slurry stirring process is detected online. Ultrasonic signals can thus be collected even though the stirring slurry cannot be contacted, features of different material compositions are extracted from the ultrasonic signals, online battery quality detection is realized during slurry stirring, and the battery quality can be significantly improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; a person of ordinary skill in the art may also derive other drawings from these drawings.
FIG. 1 is a schematic flow diagram of slurry agitation in one embodiment of the present application;
FIG. 2 is a flow chart of a battery quality online detection method in one embodiment of the present application;
FIG. 3 is a schematic diagram of a battery quality online detection system in one embodiment of the present application;
FIG. 4 is a schematic diagram of a training process for a machine learning model in one embodiment of the present application;
FIG. 5 is a schematic diagram of a battery quality online detection process in one embodiment of the present application;
FIGS. 6A and 6B are schematic structural diagrams of a feature fusion network in one embodiment of the present application;
FIG. 7 is a schematic view of the structure of a battery quality online detection device in one embodiment of the present application;
fig. 8 is a hardware configuration diagram of an electronic device in an embodiment of the present application.
Detailed Description
The terminology used in the embodiments of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to any or all possible combinations including one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in embodiments of the present application to describe various information, the information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, without departing from the scope of the present application, first information may also be referred to as second information, and similarly, second information may also be referred to as first information. Furthermore, depending on the context, the word "if" as used may be interpreted as "when" or "upon" or "in response to determining".
An embodiment of the present application proposes a method for online detecting battery quality, as shown in fig. 2, including:
Step 201, acquiring, in a non-contact manner, an initial ultrasonic signal from the surface of the slurry to be detected; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material.
Step 202, obtaining a target ultrasonic signal corresponding to the initial ultrasonic signal.
Step 203, acquiring an acoustic feature corresponding to the target ultrasonic signal and a signal feature corresponding to the target ultrasonic signal, and fusing the acoustic feature and the signal feature to obtain a fused feature.
And 204, predicting the material composition corresponding to the slurry to be detected based on the fused characteristics.
Step 205, detecting the battery quality corresponding to the slurry to be detected based on the material composition.
Illustratively, acquiring the initial ultrasonic signal from the surface of the slurry to be detected in a non-contact manner may include, but is not limited to: emitting pulsed laser light onto the surface of the slurry to be detected by a pulsed laser, so that the slurry to be detected, after receiving the pulsed laser light, generates an initial ultrasonic signal through the thermoelastic effect; and acquiring, by a vibration meter in a non-contact manner, the initial ultrasonic signal reflected from the surface of the slurry to be detected.
By way of example, the initial ultrasonic signal may include, but is not limited to, an ultrasonic A-scan signal.
Illustratively, acquiring the target ultrasonic signal corresponding to the initial ultrasonic signal may include, but is not limited to: removing unwanted components from the initial ultrasonic signal through a band-pass filter; and performing envelope processing on the filtered ultrasonic signal to obtain the target ultrasonic signal.
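The band-pass filtering step above can be sketched as follows. This is a minimal FFT-mask filter, not the patent's implementation; the 100 MHz sampling rate and 1-10 MHz pass-band are illustrative assumptions, and a Butterworth or similar filter would serve equally well:

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo, f_hi):
    """Zero out spectral components outside [f_lo, f_hi].
    A simple FFT-mask band-pass filter (assumed design)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 100e6                        # sampling rate in Hz (assumed)
t = np.arange(2000) / fs
# Toy "initial ultrasonic signal": a 5 MHz echo plus 40 MHz interference
raw = np.sin(2 * np.pi * 5e6 * t) + 0.3 * np.sin(2 * np.pi * 40e6 * t)
clean = bandpass_fft(raw, fs, 1e6, 10e6)   # keeps only the 5 MHz component
```

After filtering, `clean` retains the in-band echo while the out-of-band interference is removed.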
Illustratively, acquiring the acoustic characteristic corresponding to the target ultrasonic signal and the signal characteristic corresponding to the target ultrasonic signal may include, but is not limited to: determining a target sound attenuation corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the sound attenuation; determining a target sound velocity corresponding to the target ultrasonic signal based on the prior function relation between the ultrasonic signal and the sound velocity; determining a target acoustic impedance corresponding to the target ultrasonic signal based on the prior function relation of the ultrasonic signal and the acoustic impedance; and determining the corresponding acoustic characteristics of the target ultrasonic signal based on the target acoustic attenuation, the target sound velocity and the target acoustic impedance.
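The patent does not disclose the prior functional relations themselves. The sketch below substitutes standard textbook relations purely as an illustration: two-echo time-of-flight for sound velocity, echo-amplitude ratio for attenuation, and Z = ρc for impedance. The peak-picking scheme, path length, and density are all assumptions:

```python
import numpy as np

def acoustic_features(envelope, fs, path_len_m, density):
    """Derive the three acoustic sub-features from an envelope signal.
    Standard textbook relations stand in for the patent's (undisclosed)
    prior functional relations."""
    peaks = np.argsort(envelope)[-2:]              # two strongest echoes (assumed)
    t1, t2 = sorted(peaks / fs)                    # echo arrival times (s)
    sound_velocity = 2 * path_len_m / (t2 - t1)    # round-trip time of flight
    a1 = envelope[int(round(t1 * fs))]
    a2 = envelope[int(round(t2 * fs))]
    attenuation = 20 * np.log10(a1 / a2) / (2 * path_len_m)  # dB per metre
    impedance = density * sound_velocity           # Z = rho * c
    return attenuation, sound_velocity, impedance
```

For instance, two echoes at 100 µs and 300 µs across a 0.1 m path give a sound velocity of 2 × 0.1 / 200 µs = 1000 m/s.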
Illustratively, the target ultrasonic signal is input to a trained feature extractor, which extracts the signal features corresponding to the target ultrasonic signal and outputs them.
Illustratively, fusing the acoustic feature and the signal feature to obtain a fused feature may include, but is not limited to: if the acoustic feature comprises a plurality of acoustic sub-features, then for each acoustic sub-feature, performing matrix multiplication on the acoustic sub-feature and the signal feature, and performing self-attention processing on the multiplied features to obtain an attention weight matrix corresponding to the acoustic sub-feature; multiplying the attention weight matrix by the target ultrasonic signal to obtain a fusion vector corresponding to the acoustic sub-feature; and performing feature fusion on the fusion vectors corresponding to the acoustic sub-features to obtain the fused feature.
Illustratively, performing feature fusion on the fusion vectors corresponding to the acoustic sub-features to obtain the fused feature may include, but is not limited to: combining the fusion vectors pairwise to obtain a plurality of vector combinations, each vector combination comprising two fusion vectors; for each vector combination, performing matrix multiplication on the two fusion vectors in the combination, and performing self-attention processing on the multiplied features to obtain an attention weight matrix corresponding to the vector combination; multiplying the attention weight matrix by the target ultrasonic signal to obtain a fusion vector corresponding to the vector combination; and weighting the fusion vectors corresponding to the vector combinations to obtain the fused feature.
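A minimal numpy sketch of this two-stage attention fusion follows. It assumes every feature vector and the target signal share a common length d (the patent does not specify dimensions), and uses equal weights for the final pairwise weighting, which in practice would be learned:

```python
import numpy as np
from itertools import combinations

def softmax_rows(m):
    """Row-wise softmax, a common self-attention normalisation (assumed)."""
    e = np.exp(m - m.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def attn_fuse(a, b, signal):
    """Matrix-multiply two feature vectors, apply self-attention to the
    product, then weight the target signal with the resulting matrix."""
    weights = softmax_rows(np.outer(a, b))   # (d, d) attention weight matrix
    return weights @ signal                  # fusion vector of length d

def fuse(acoustic_subs, signal_feat, signal):
    # Stage 1: one fusion vector per acoustic sub-feature
    stage1 = [attn_fuse(sub, signal_feat, signal) for sub in acoustic_subs]
    # Stage 2: one fusion vector per pairwise combination of stage-1 vectors
    stage2 = [attn_fuse(u, v, signal) for u, v in combinations(stage1, 2)]
    return np.mean(stage2, axis=0)           # equal weighting (assumed)

rng = np.random.default_rng(0)
d = 8
subs = [rng.standard_normal(d) for _ in range(3)]   # e.g. attenuation / velocity / impedance embeddings
fused = fuse(subs, rng.standard_normal(d), rng.standard_normal(d))
```

Three acoustic sub-features yield three stage-1 vectors and three pairwise combinations, which are averaged into a single fused feature vector.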
Illustratively, predicting the material composition corresponding to the slurry to be detected based on the fused features may include, but is not limited to: inputting the fused features into a trained material composition classifier, which predicts and outputs the material composition corresponding to the slurry to be detected based on the fused features.
Illustratively, detecting the battery quality corresponding to the slurry to be detected based on the material composition may include, but is not limited to: if the material composition meets the expected material composition, determining that the battery quality meets the discharging condition; if the material composition does not meet the expected material composition, determining that the battery quality does not meet the discharging condition.
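As a concrete sketch of this decision step, the check below assumes the predicted and expected compositions are given as dictionaries of mass fractions and uses an illustrative 2 % tolerance; the patent does not state how "meets the expected material composition" is quantified:

```python
def discharge_ok(predicted, expected, tol=0.02):
    """True if every predicted fraction is within `tol` of the recipe.
    The tolerance and dict-of-fractions format are assumptions."""
    return all(abs(predicted[k] - expected[k]) <= tol for k in expected)

expected = {"active material": 0.60, "binder": 0.05, "solvent": 0.35}
assert discharge_ok({"active material": 0.61, "binder": 0.05, "solvent": 0.34}, expected)
assert not discharge_ok({"active material": 0.70, "binder": 0.05, "solvent": 0.25}, expected)
```

A slurry within tolerance meets the discharging condition; otherwise it does not.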
According to the technical scheme, in the embodiment of the application, the ultrasonic signal on the surface of the slurry to be detected can be acquired in a non-contact manner, the material composition of the slurry to be detected is predicted based on the ultrasonic signal, and the battery quality corresponding to the slurry to be detected is detected based on the material composition. This provides a reliable way of detecting battery quality, so that the battery quality corresponding to the slurry to be detected can be detected accurately. Laser ultrasound is used to collect real-time signals during slurry stirring, and a machine learning algorithm analyses the material composition in real time, so that the quality of the slurry stirring process is detected online. Ultrasonic signals can thus be collected even though the stirring slurry cannot be contacted, features of different material compositions are extracted from the ultrasonic signals, online battery quality detection is realized during slurry stirring, and the battery quality can be significantly improved.
Referring to fig. 1, which shows a schematic flow chart of slurry stirring, the positive electrode solid-state battery material and the negative electrode solid-state battery material can be mixed to obtain slurry, and the slurry is stirred by means of blade revolution, dispersion-disk rotation and the like, with a solvent added during stirring. After stirring is completed, the material may be discharged.
However, different degrees of agitation affect the material composition of the slurry, which in turn affects the battery quality. For example, if the stirring is not uniform, the material composition of the slurry may not conform to the expected composition, which may lead to poor battery quality. However, the related art provides no reliable method for detecting the battery quality.
In view of this, in the embodiment of the application, laser ultrasound is used to collect real-time signals during the slurry stirring process, and a machine learning algorithm (such as a deep learning algorithm) analyses the material composition in real time, so that the quality of the slurry stirring process is detected online. Ultrasonic signals can thus be collected even though the stirring slurry cannot be contacted, features of different material compositions are extracted from the ultrasonic signals, online battery quality detection is realized during slurry stirring, and the battery quality can be significantly improved.
Referring to fig. 3, a schematic diagram of a battery quality online detection system based on laser ultrasound and machine learning is shown, which may include a pulsed laser, a vibration meter, and electronics. The electronic device may be any type of device, such as a personal computer, a terminal device, a server, a notebook computer, a smart phone, an internet of things device, and the like, and the type of the electronic device is not limited.
The pulsed laser is used to emit pulsed laser light onto the slurry surface, generating ultrasonic waves through the thermoelastic effect, and the vibration meter receives the generated ultrasonic signal, so that slurry surface signals can be collected without contacting the material, making this a non-contact ultrasonic detection method. The electronic device may support a machine learning model (e.g., a deep learning model) and may implement the material composition analysis based on the machine learning model.
In the embodiment of the application, the method can relate to a training process of a machine learning model and a battery quality online detection process based on the machine learning model, wherein the machine learning model can be a deep learning model or a neural network model, and the machine learning model is not limited and can be used for realizing material composition analysis.
By way of example, the machine learning model may include a feature extractor for performing feature extraction, a feature fusion network for performing feature fusion, and a material component classifier for classifying the ultrasonic signals to obtain material components.
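How the components named above chain together can be sketched as a small pipeline. The class name, stage names, and the lambda stand-ins below are illustrative assumptions, not the patent's trained models:

```python
import numpy as np

class QualityDetector:
    """Chains the model stages: preprocessing, acoustic-feature extraction,
    signal-feature extraction, feature fusion, classification. Each stage
    is injected as a callable so trained models can be dropped in."""
    def __init__(self, preprocess, signal_features, acoustic_features, fuse, classify):
        self.preprocess = preprocess
        self.signal_features = signal_features
        self.acoustic_features = acoustic_features
        self.fuse = fuse
        self.classify = classify

    def detect(self, initial_signal):
        target = self.preprocess(initial_signal)
        fused = self.fuse(self.acoustic_features(target),
                          self.signal_features(target))
        return self.classify(fused)

# Toy stand-ins (assumptions) just to show the data flow:
detector = QualityDetector(
    preprocess=lambda s: np.abs(s),
    signal_features=lambda t: t[:4],
    acoustic_features=lambda t: np.array([t.mean(), t.std(), t.max()]),
    fuse=lambda a, s: np.concatenate([a, s]),
    classify=lambda f: "expected composition" if f.sum() > 0 else "off-spec",
)
result = detector.detect(np.ones(16))
```

Injecting each stage as a callable keeps the pipeline testable while the feature extractor, fusion network, and classifier are trained separately.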
First, for a training process of a machine learning model, see fig. 4, the process may include:
step 401, obtaining a sample data set, where the sample data set includes a plurality of initial ultrasonic signals and a label corresponding to each of the initial ultrasonic signals, where the label is used to represent a real material component corresponding to the sample slurry.
For example, the initial ultrasonic signal from the surface of the sample slurry (the slurry used in the training process may be referred to as a sample slurry, whose true material composition is known) may be acquired in a non-contact manner, the sample slurry being obtained by mixing and stirring a positive solid-state battery material and a negative solid-state battery material.
For example, a pulsed laser emits pulsed laser light onto the surface of the sample slurry, which, after absorbing the pulse, generates an initial ultrasonic signal through the thermoelastic effect. In this way, the initial ultrasonic signal (i.e., laser ultrasound) reflected from the surface of the sample slurry can be acquired by the vibration meter in a non-contact manner.
Laser ultrasonics uses laser pulses to excite stress pulses in the inspected workpiece through the thermoelastic effect or the ablation effect; the stress pulses excite ultrasonic signals of different wave modes, and by receiving the propagated ultrasonic waves in a contact or non-contact manner, workpiece information and defect characterization can be obtained, such as workpiece thickness, internal and surface defects, and material parameters. Based on this, a pulsed laser emits pulsed laser light onto the surface of the sample slurry, and the initial ultrasonic signal reflected from the surface of the sample slurry is acquired by a vibration meter in a non-contact manner.
For example, a plurality of sample slurries with different real material compositions can be obtained, and for each sample slurry, an initial ultrasonic signal of the surface of the sample slurry can be acquired in a non-contact manner, and the real material composition corresponding to the initial ultrasonic signal can be known. In summary, based on a plurality of sample slurries, a sample data set may be obtained, the sample data set including a plurality of initial ultrasonic signals and a tag corresponding to each of the initial ultrasonic signals, the tag being used to represent a true material composition corresponding to the sample slurry.
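Assembling such a sample data set can be sketched as follows; the `Sample` container, the composition tags, and the random stand-in for the vibrometer acquisition are all illustrative assumptions:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:
    signal: np.ndarray   # initial ultrasonic signal from the slurry surface
    label: str           # true material composition of the sample slurry

rng = np.random.default_rng(0)

def acquire_surface_signal(composition):
    """Stand-in for the pulsed-laser + vibrometer acquisition (hypothetical)."""
    return rng.standard_normal(1024)

compositions = ["composition A", "composition B", "composition C"]  # known recipes
dataset = [Sample(acquire_surface_signal(c), c) for c in compositions]
```

Each entry pairs one acquired signal with the known composition of the slurry it came from, which serves as the training label.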
The initial ultrasonic signal may include, but is not limited to, an ultrasonic A-scan signal, although other types of ultrasonic signal are possible; this is not limited here. The ultrasonic A-scan signal is a point-scan signal represented as a waveform: the abscissa represents the propagation time or propagation distance of the ultrasonic wave in the inspected material, and the ordinate represents the amplitude of the reflected ultrasonic wave.
Step 402, for each initial ultrasonic signal in the sample data set, acquiring the target ultrasonic signal corresponding to the initial ultrasonic signal. For example, the initial ultrasonic signal may be preprocessed to obtain the target ultrasonic signal. The purpose of preprocessing is to remove unwanted components from the ultrasonic signal and to extract its envelope.
Illustratively, the unwanted components in the initial ultrasonic signal may be eliminated by a band-pass filter to obtain the target ultrasonic signal corresponding to the initial ultrasonic signal. Alternatively, envelope processing may be performed on the initial ultrasonic signal to obtain the target ultrasonic signal. Alternatively, the unwanted components may be eliminated by a band-pass filter, and envelope processing may then be performed on the filtered ultrasonic signal to obtain the target ultrasonic signal.
Since the initial ultrasonic signal is a high-resolution wideband signal, it contains unwanted components; these can be eliminated by the band-pass filter, and the elimination process is not limited here.
In order to extract the low-frequency modulated portion of the initial ultrasonic signal, envelope processing may be performed on the initial ultrasonic signal, for example by a Hilbert transform; the transform process is not limited here.
Step 403, acquiring an acoustic characteristic (i.e. an acoustic performance characteristic) corresponding to the target ultrasonic signal.
For example, due to differences in Young's modulus, Poisson's ratio and density, different materials have different acoustic characteristics, so the acoustic features can serve as the basis for analysing the material composition; that is, the acoustic features corresponding to the target ultrasonic signal need to be acquired. For example, the acoustic features corresponding to the target ultrasonic signal may be obtained through prior knowledge, that is, through a prior functional relationship between the ultrasonic signal and the acoustic features.
Illustratively, the acoustic features may include, but are not limited to, at least one of: acoustic attenuation, sound velocity, acoustic impedance, to name just a few examples.
In one possible implementation, if the acoustic feature includes acoustic attenuation, then: based on the a priori functional relationship of the ultrasonic signal and the acoustic attenuation, a target acoustic attenuation corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between an ultrasonic signal and acoustic attenuation may be previously configured, and the functional relationship is not limited, and an input of the functional relationship is an ultrasonic signal and an output of the functional relationship is acoustic attenuation.
In one possible implementation, if the acoustic feature comprises sound velocity, then: based on the prior functional relationship between the ultrasonic signal and the sound velocity, a target sound velocity corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between the ultrasonic signal and the sound velocity may be previously configured, and the functional relationship is not limited, and the input of the functional relationship is the ultrasonic signal, and the output of the functional relationship is the sound velocity, so that the target sound velocity can be obtained by querying the functional relationship with the target ultrasonic signal.
In one possible implementation, if the acoustic feature comprises acoustic impedance, then: based on the a priori functional relationship of the ultrasonic signal and the acoustic impedance, a target acoustic impedance corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between the ultrasonic signal and the acoustic impedance may be configured in advance, and the functional relationship is not limited, and the input of the functional relationship is the ultrasonic signal and the output of the functional relationship is the acoustic impedance, so that the target acoustic impedance may be obtained by querying the functional relationship with the target ultrasonic signal.
For example, after the target acoustic attenuation, the target sound velocity, and the target acoustic impedance are obtained, an acoustic characteristic corresponding to the target ultrasonic signal may be determined based on the target acoustic attenuation, the target sound velocity, and the target acoustic impedance, such as acoustic characteristics including the target acoustic attenuation, the target sound velocity, and the target acoustic impedance.
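Illustratively, such prior functional relationships may take textbook forms like the following. The patent does not limit the functional relationships, so the pulse-echo formulas, parameters, and names below are assumptions for illustration only:

```python
import math

def acoustic_features(a_emit, a_echo, distance_m, delta_t_s, density_kg_m3):
    """Illustrative prior functional relationships between an ultrasonic
    signal and its acoustic features (assumed textbook forms, not from
    the patent)."""
    # Target sound velocity: pulse-echo round trip of 2*distance over the echo delay.
    velocity = 2.0 * distance_m / delta_t_s  # m/s
    # Target acoustic attenuation in dB per metre from the amplitude ratio.
    attenuation = 20.0 * math.log10(a_emit / a_echo) / (2.0 * distance_m)
    # Target acoustic impedance: density times sound velocity (rayl).
    impedance = density_kg_m3 * velocity
    return {"attenuation_dB_per_m": attenuation,
            "sound_velocity_m_s": velocity,
            "acoustic_impedance_rayl": impedance}
```

The returned dictionary corresponds to the acoustic feature comprising the target acoustic attenuation, target sound velocity, and target acoustic impedance.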
Step 404, obtaining signal characteristics corresponding to the target ultrasonic signal.
The machine learning model includes a feature extractor for performing feature extraction. For example, the feature extractor may be a CNN (Convolutional Neural Network) or another network; the structure of the feature extractor is not limited as long as it can extract features. Based on this, the target ultrasonic signal can be input to the feature extractor, which extracts the signal features corresponding to the target ultrasonic signal and outputs them. The feature extractor performs operations such as convolution and can therefore extract signal features in the frequency domain of the target ultrasonic signal.
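As a sketch of the convolution operation performed by the feature extractor, a minimal single-layer example may look as follows. A real feature extractor would be a trained multi-layer CNN; the single conv/ReLU/pooling layer and the kernel values here are assumptions for illustration only:

```python
import numpy as np

def conv1d_feature_extractor(signal, kernels):
    """Minimal sketch of a CNN-style feature extractor: 1-D convolution,
    ReLU activation, and global max pooling per kernel. A trained network
    would stack many such layers with learned kernels."""
    features = []
    for k in kernels:
        conv = np.convolve(signal, k, mode="valid")  # 1-D convolution
        conv = np.maximum(conv, 0.0)                 # ReLU
        features.append(conv.max())                  # global max pooling
    return np.array(features)
```

Each kernel yields one scalar feature, so the output length equals the number of kernels, mirroring how a convolutional layer produces one channel per filter.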
Step 405, fusing the acoustic feature and the signal feature to obtain a fused feature.
The machine learning model includes a feature fusion network, and the feature fusion network is used to implement feature fusion, which is not limited as long as feature fusion can be implemented. Based on the above, the acoustic feature and the signal feature can be input to a feature fusion network, the feature fusion network fuses the acoustic feature and the signal feature to obtain a fused feature, and the fused feature is output.
And step 406, predicting the predicted material composition corresponding to the sample slurry based on the fused characteristics.
Illustratively, the machine learning model includes a material component classifier, and the material component classifier is configured to classify the ultrasonic signal to obtain the material component, e.g., the material component classifier may be a classifier network, and the material component classifier is not limited as long as classification of the material component can be achieved. Based on this, the fused features may be input to a material component classifier, which predicts a predicted material component corresponding to the initial ultrasonic signal based on the fused features, that is, a predicted material component corresponding to the sample slurry, and outputs the predicted material component.
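A minimal sketch of such a material component classifier, assuming a single linear layer with a softmax over the fused features (the patent does not fix the classifier structure, so this head is an illustration only):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D score vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(fused, weight, bias):
    """Hypothetical classifier head: maps the fused feature vector to a
    probability per material component class."""
    return softmax(weight @ fused + bias)
```

The output is the probability distribution over the C material components that the later loss computation operates on.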
Step 407, determining a loss value corresponding to the initial ultrasonic signal based on the predicted material component and the real material component corresponding to the sample slurry, and adjusting network parameters of the machine learning model (such as the feature extractor, the feature fusion network and the material component classifier) based on the loss value to obtain an adjusted model.
For example, for each initial ultrasonic signal, the sample data set includes a label corresponding to the initial ultrasonic signal, and the label represents the real material composition corresponding to the sample slurry. Based on steps 402-406, the predicted material composition corresponding to the initial ultrasonic signal (i.e., the predicted material composition corresponding to the sample slurry) may be determined. In summary, both the predicted material composition and the real material composition corresponding to the initial ultrasonic signal can be obtained, and the loss value corresponding to the initial ultrasonic signal is then determined based on them: the closer the predicted material composition is to the real material composition, the smaller the loss value; the farther apart they are, the larger the loss value.
For example, the loss value corresponding to the initial ultrasonic signal may be determined by using a multi-class cross entropy loss function, or by using another loss function, which is not limited. Equation (1) shows an example of determining the loss value using the cross entropy loss function:

loss = -Σ_{i=0}^{C-1} y_i · log(p_i)    (1)

In equation (1), p = [p_0, …, p_{C-1}] is a probability distribution, each element p_i representing the probability that the initial ultrasonic signal belongs to category i. The probability distribution may be determined based on the predicted material composition. For example, if there are C material components in total (i.e., the total number of categories is C), the distribution includes the probability p_0 corresponding to the 0th material component (i.e., the probability that the initial ultrasonic signal belongs to category 0), the probability p_1 corresponding to the 1st material component (i.e., the probability that the initial ultrasonic signal belongs to category 1), and so on.
y = [y_0, …, y_{C-1}] is the one-hot encoding of the initial ultrasonic signal: y_i = 1 when the initial ultrasonic signal belongs to the i-th category, and y_i = 0 when it does not. The one-hot encoding may be determined based on the real material composition. For example, with C material components in total, if the real material component is the 0th material component, then y_0 is 1 and [y_1, …, y_{C-1}] are all 0; if the real material component is the 1st material component, then y_1 is 1, y_0 is 0, and [y_2, …, y_{C-1}] are all 0; and so on.
In summary, the probability distribution p = [p_0, …, p_{C-1}] is determined based on the predicted material composition, the one-hot encoding y = [y_0, …, y_{C-1}] is determined based on the real material composition, and the cross entropy loss function shown in equation (1) determines the loss value corresponding to the initial ultrasonic signal; that is, a loss value corresponding to each initial ultrasonic signal can be obtained.
Based on the loss value corresponding to each initial ultrasonic signal, a target loss value may be determined, for example, as the sum of the loss values corresponding to all initial ultrasonic signals, or as the average of the loss values corresponding to all initial ultrasonic signals; this is not limited.
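The loss computation described above can be sketched as follows; this is a plain implementation of the multi-class cross entropy of equation (1), with the mean over the batch as one possible choice of target loss value:

```python
import numpy as np

def cross_entropy(p, y):
    """Multi-class cross entropy of equation (1): loss = -sum_i y_i * log(p_i),
    with p the predicted probability distribution and y the one-hot label.
    A small epsilon guards against log(0)."""
    return -float(np.sum(y * np.log(p + 1e-12)))

def target_loss(p_batch, y_batch):
    """Mean of the per-signal losses as one choice of target loss value
    (the patent also allows the sum)."""
    return float(np.mean([cross_entropy(p, y) for p, y in zip(p_batch, y_batch)]))
```

When the predicted distribution concentrates more probability on the true class, the loss shrinks toward 0, matching the "closer means smaller loss" behavior described above.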
For example, after obtaining the target loss value, network parameters of the machine learning model may be adjusted based on the target loss value to obtain an adjusted model. For example, the network parameters of the feature extractor, the network parameters of the feature fusion network and the network parameters of the material component classifier can be adjusted by adopting a gradient descent method and the like, the adjustment process is not limited, and the adjustment target is that the target loss value is smaller and smaller.
Step 408, determining whether the adjusted model has converged. If yes, the adjusted model is used as the trained machine learning model, the machine learning model is output, and the battery quality online detection process is realized based on the machine learning model. If not, the adjusted model is used as the machine learning model to be trained, the process returns to steps 402-407, and the network parameters of the machine learning model are adjusted again to obtain an adjusted model.
For example, if the target loss value is smaller than a preset threshold (which may be empirically configured), it is determined that the adjusted model has converged, and if the target loss value is not smaller than the preset threshold, it is determined that the adjusted model has not converged.
For another example, if the number of iterations of the machine learning model reaches the number of iterations threshold, it is determined that the adjusted model has converged, and if the number of iterations of the machine learning model does not reach the number of iterations threshold, it is determined that the adjusted model has not converged.
For another example, if the iteration duration of the machine learning model reaches the duration threshold, the adjusted model is determined to have converged, and if the iteration duration of the machine learning model does not reach the duration threshold, the adjusted model is determined to have not converged.
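The three convergence criteria above (loss threshold, iteration count, iteration duration) can be combined in a sketch like the following; the threshold values are illustrative assumptions, not from the patent:

```python
def has_converged(target_loss, iterations, elapsed_s,
                  loss_threshold=0.01, max_iterations=10000, max_seconds=3600):
    """Sketch of the convergence check: any one of the three criteria
    described above is sufficient. All thresholds are empirically
    configured and illustrative."""
    return (target_loss < loss_threshold
            or iterations >= max_iterations
            or elapsed_s >= max_seconds)
```

Whichever criterion is used, training stops once the check returns True and the adjusted model is output as the trained machine learning model.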
Thus, the training process of the machine learning model is completed, a trained machine learning model is obtained, and the machine learning model can comprise a trained feature extractor, a feature fusion network and a material component classifier.
Second, for the battery quality online detection process, see fig. 5, the process may include:
step 501, acquiring an initial ultrasonic signal of the surface of the slurry to be detected in a non-contact manner; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material.
For example, an initial ultrasonic signal of the surface of the slurry to be detected is acquired (the slurry in the battery quality online detection process may be referred to as the slurry to be detected; its real material composition is unknown and needs to be predicted). The slurry to be detected may be obtained by mixing and stirring the positive electrode solid-state battery material and the negative electrode solid-state battery material.
For example, a pulsed laser emits pulsed laser light to the surface of the slurry to be detected; after the slurry to be detected receives the pulsed laser light, an initial ultrasonic signal is generated through the thermoelastic effect. In this way, the initial ultrasonic signal reflected by the surface of the slurry to be detected can be acquired in a non-contact manner by a vibration meter.
The initial ultrasonic signal may include, but is not limited to, an ultrasonic A-scan signal; other types of ultrasonic signals are of course possible, and this is not limited. The ultrasonic A-scan signal is a point-scan signal represented as a waveform: the abscissa represents the propagation time or propagation distance of the ultrasonic wave in the detected material, and the ordinate represents the amplitude of the reflected ultrasonic wave.
Step 502, obtaining a target ultrasonic signal corresponding to the initial ultrasonic signal.
For example, the initial ultrasonic signal may be preprocessed to obtain the target ultrasonic signal corresponding to the initial ultrasonic signal. The purpose of preprocessing the initial ultrasonic signal is to remove useless signals in the initial ultrasonic signal and perform envelope processing on the initial ultrasonic signal.
Illustratively, the unwanted signals in the initial ultrasonic signal are eliminated by a band-pass filter, so as to obtain the target ultrasonic signal corresponding to the initial ultrasonic signal. Or, performing envelope processing on the initial ultrasonic signal to obtain a target ultrasonic signal corresponding to the initial ultrasonic signal. Alternatively, the unwanted signal in the initial ultrasonic signal is eliminated by a band-pass filter, and the ultrasonic signal from which the unwanted signal is eliminated is subjected to envelope processing to obtain the target ultrasonic signal corresponding to the initial ultrasonic signal.
Since the initial ultrasonic signal is a high-resolution wideband signal, it contains unwanted signal components, which can be eliminated by the band-pass filter. In order to extract the low-frequency modulated portion of the initial ultrasonic signal, the initial ultrasonic signal may be subjected to envelope processing, such as by a Hilbert transform.
Step 503, acquiring an acoustic characteristic (i.e. an acoustic performance characteristic) corresponding to the target ultrasonic signal.
Illustratively, the acoustic features corresponding to the target ultrasonic signal may be obtained through a priori knowledge, i.e., a priori functional relationship of the ultrasonic signal and the acoustic features. Wherein the acoustic features may include, but are not limited to, at least one of: acoustic attenuation, sound velocity, acoustic impedance, although of course, the above are just a few examples.
Illustratively, if the acoustic feature comprises acoustic attenuation, then: based on the a priori functional relationship of the ultrasonic signal and the acoustic attenuation, a target acoustic attenuation corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between the ultrasonic signal and the acoustic attenuation may be previously configured, and the target acoustic attenuation may be obtained by inquiring the functional relationship with the target ultrasonic signal. If the acoustic feature comprises sound velocity, then: based on the prior functional relationship between the ultrasonic signal and the sound velocity, a target sound velocity corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between the ultrasonic signal and the sound velocity may be previously configured, and the target sound velocity may be obtained by querying the functional relationship with the target ultrasonic signal. If the acoustic feature comprises acoustic impedance, then: based on the a priori functional relationship of the ultrasonic signal and the acoustic impedance, a target acoustic impedance corresponding to the target ultrasonic signal can be determined. For example, a functional relationship between the ultrasonic signal and the acoustic impedance may be preconfigured, and the target acoustic impedance may be obtained by querying the functional relationship through the target ultrasonic signal.
For example, after the target acoustic attenuation, the target sound velocity, and the target acoustic impedance are obtained, an acoustic characteristic corresponding to the target ultrasonic signal may be determined based on the target acoustic attenuation, the target sound velocity, and the target acoustic impedance, such as acoustic characteristics including the target acoustic attenuation, the target sound velocity, and the target acoustic impedance.
Step 504, obtaining signal characteristics corresponding to the target ultrasonic signal.
For example, the trained machine learning model may include a trained feature extractor for performing feature extraction. Based on this, the target ultrasonic signal is input to the trained feature extractor, the signal features corresponding to the target ultrasonic signal are extracted by the feature extractor, and the signal features are output.
Step 505, fusing the acoustic feature and the signal feature to obtain a fused feature.
Illustratively, the trained machine learning model includes a trained feature fusion network, and the feature fusion network is used to implement feature fusion. Based on the above, the acoustic feature and the signal feature may be input to a feature fusion network, and after the acoustic feature and the signal feature are obtained by the feature fusion network, the acoustic feature and the signal feature are fused to obtain a fused feature, and the fused feature is output.
And step 506, predicting the material composition corresponding to the slurry to be detected based on the fused characteristics.
Illustratively, the trained machine learning model includes a trained material component classifier, and the material component classifier is configured to classify the ultrasonic signal to obtain the material component. Based on the above, the fused features may be input to a material component classifier, and the material component classifier predicts a material component corresponding to the initial ultrasonic signal based on the fused features, and outputs the material component corresponding to the slurry to be detected.
And 507, detecting the quality of the battery corresponding to the slurry to be detected based on the material composition.
For example, an expected material composition, that is, the material composition giving the best or preferred battery quality, may be preconfigured. Based on this, if the material composition corresponding to the slurry to be detected meets the expected material composition (e.g., the material composition is the same as or close to the expected material composition), it is determined that the battery quality meets the discharging condition, and the slurry to be detected is allowed to complete discharging; that is, the slurry stirring process is completed and battery manufacturing proceeds based on the stirred slurry. A battery manufactured from slurry in this case has higher quality.
If the material composition corresponding to the slurry to be detected does not meet the expected material composition (e.g., the material composition differs greatly from the expected material composition), it is determined that the battery quality does not meet the discharging condition. When the battery quality does not meet the discharging condition, the slurry to be detected is not allowed to complete discharging; it continues to be stirred, and the above process is repeated until its material composition meets the expected material composition. Alternatively, when the battery quality does not meet the discharging condition, the slurry to be detected is discarded; that is, no battery is manufactured based on it. Obviously, through this treatment, slurry that meets the expected material composition can be used for manufacturing batteries, while slurry that does not meet the expected material composition cannot, so the battery quality is remarkably improved.
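A minimal sketch of the discharging-condition check follows. The patent does not fix how the material composition is represented or how closeness to the expected composition is judged, so the per-component-fraction representation, the tolerance, and the names here are assumptions:

```python
def meets_discharge_condition(predicted, expected, tolerance=0.05):
    """Illustrative discharging check: the predicted material composition
    (assumed here to be mass fractions per component) must be within a
    tolerance of the expected composition for every component."""
    return all(abs(predicted[k] - expected[k]) <= tolerance for k in expected)
```

If the check fails, the slurry continues to be stirred (or is discarded) as described above; if it passes, discharging is allowed to complete.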
The training process and the battery quality online detection process aiming at the machine learning model both relate to a feature fusion network, and the feature fusion network is used for fusing acoustic features and signal features to obtain fused features. In one possible implementation, a schematic diagram of the feature fusion network may be shown in fig. 6A, which is, of course, merely an example of a feature fusion network, and the structure of the feature fusion network is not limited.
For example, the acoustic feature may include a plurality of acoustic sub-features, n being illustrated in fig. 6A as n acoustic sub-features, where n may be a positive integer. For example, the n acoustic sub-features may be 3 acoustic sub-features, and the 3 acoustic sub-features may include a target acoustic attenuation, a target sound velocity, a target acoustic impedance, and the like.
And aiming at each acoustic sub-feature, carrying out matrix multiplication on the acoustic sub-feature and the signal feature to obtain a feature after matrix multiplication. After obtaining the features after matrix multiplication, the features after matrix multiplication can be input to a self-attention network, and the self-attention network carries out self-attention processing on the features after matrix multiplication to obtain an attention weight matrix corresponding to the acoustic sub-features. The Self-Attention network is a network implemented by adopting a Self-Attention (Self-Attention) mechanism, such as a neural network, and the Self-Attention network is used for extracting an Attention weight matrix of input features, so that after the features multiplied by the matrix are input to the Self-Attention network, the Self-Attention network can extract the Attention weight matrix and output the Attention weight matrix.
After the attention weight matrix corresponding to the acoustic sub-feature is obtained, the attention weight matrix is multiplied by the target ultrasonic signal to obtain the fusion vector corresponding to the acoustic sub-feature.
Referring to fig. 6A, the acoustic sub-feature 1 and the signal feature are subjected to matrix multiplication, the features after matrix multiplication are input to a self-attention network, and the self-attention network performs self-attention processing on the features after matrix multiplication to obtain an attention weight matrix 1 corresponding to the acoustic sub-feature 1. And carrying out matrix multiplication on the attention weight matrix 1 and the target ultrasonic signal to obtain a fusion vector 1 corresponding to the acoustic sub-feature 1.
Similarly, the acoustic sub-feature n and the signal feature can be subjected to matrix multiplication, the feature after matrix multiplication is input to a self-attention network, and the self-attention network carries out self-attention processing on the feature after matrix multiplication to obtain an attention weight matrix n corresponding to the acoustic sub-feature n. And carrying out matrix multiplication on the attention weight matrix n and the target ultrasonic signal to obtain a fusion vector n corresponding to the acoustic sub-feature n.
In summary, a fusion vector corresponding to each acoustic sub-feature, such as fusion vector 1, fusion vector 2, and fusion vector n, may be obtained, and after obtaining the fusion vector corresponding to each acoustic sub-feature, feature fusion may be performed on the fusion vector corresponding to each acoustic sub-feature, so as to obtain a feature after fusion.
In one possible implementation, the fusion vector corresponding to each acoustic sub-feature may be weighted to obtain the fused feature. For example, the weighting coefficient corresponding to each acoustic sub-feature is obtained; the weighting coefficients corresponding to different acoustic sub-features may be the same or different. Based on the fusion vectors and the weighting coefficients corresponding to the acoustic sub-features, the fusion vector corresponding to each acoustic sub-feature can be weighted to obtain the fused feature. For example, the fused feature is determined using the following formula: W1*X1 + W2*X2 + ... + Wn*Xn, where W1 represents the weighting coefficient corresponding to acoustic sub-feature 1, X1 represents fusion vector 1 corresponding to acoustic sub-feature 1, W2 represents the weighting coefficient corresponding to acoustic sub-feature 2, X2 represents fusion vector 2 corresponding to acoustic sub-feature 2, Wn represents the weighting coefficient corresponding to acoustic sub-feature n, and Xn represents fusion vector n corresponding to acoustic sub-feature n.
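The fusion of fig. 6A can be sketched as follows. The patent does not specify the self-attention network's internals, so a plain row-wise softmax stands in for it here, and all shapes and names are illustrative assumptions:

```python
import numpy as np

def row_softmax(m):
    """Row-wise softmax used as a stand-in for the self-attention network."""
    e = np.exp(m - m.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse(acoustic_subs, signal_feat, target_signal, weights):
    """Sketch of the fig. 6A fusion: per acoustic sub-feature, matrix
    multiplication with the signal feature, a softmax standing in for
    self-attention, matrix multiplication with the target ultrasonic
    signal, then the weighted sum W1*X1 + ... + Wn*Xn."""
    fused = 0.0
    for sub, w in zip(acoustic_subs, weights):
        scores = np.outer(sub, signal_feat)   # matrix-multiplied feature
        attn = row_softmax(scores)            # stand-in attention weight matrix
        vec = attn @ target_signal            # fusion vector X_i
        fused = fused + w * vec               # weighted accumulation
    return fused
```

With three acoustic sub-features (target acoustic attenuation, target sound velocity, target acoustic impedance), the loop runs n = 3 times and the result is the weighted sum of the three fusion vectors.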
In another possible implementation manner, the fusion vector corresponding to each acoustic sub-feature can be subjected to secondary feature fusion, so that the acoustic feature and the signal feature are further fused. A schematic diagram of the secondary feature fusion network may be shown in fig. 6B, and of course, this is merely an example, and the structure is not limited thereto.
For example, after the n fusion vectors corresponding to the n acoustic sub-features are obtained, the n fusion vectors are denoted as fusion vector 1, fusion vector 2, ..., fusion vector n, and the fusion vectors are pairwise combined to obtain a plurality of vector combinations, where each vector combination includes two fusion vectors. For example, fusion vector 1 and fusion vector 2 form a vector combination, fusion vector 1 and fusion vector 3 form a vector combination, and so on.
For each vector combination, performing matrix multiplication on two fusion vectors in the vector combination to obtain a feature after matrix multiplication. After the matrix multiplied features are obtained, the matrix multiplied features can be input to a self-attention network, and the self-attention network performs self-attention processing on the matrix multiplied features to obtain an attention weight matrix corresponding to the vector combination. Wherein the self-attention network is used to extract an attention weight matrix of the input features, so that after the features multiplied by the matrix are input to the self-attention network, the self-attention network can extract the attention weight matrix and output the attention weight matrix.
After the attention weight matrix corresponding to the vector combination is obtained, the attention weight matrix is multiplied by the target ultrasonic signal to obtain a fusion vector corresponding to the vector combination.
Referring to fig. 6B, the fusion vector 1 and the fusion vector 2 may be subjected to matrix multiplication, the features after matrix multiplication are input to a self-attention network, and the self-attention network performs self-attention processing on the features after matrix multiplication to obtain an attention weight matrix 1_2 (i.e., an attention weight matrix between the fusion vector 1 and the fusion vector 2). The attention weight matrix 1_2 and the target ultrasound signal may be matrix multiplied to obtain a fusion vector 1_2 (i.e., a fusion vector between fusion vector 1 and fusion vector 2).
Similarly, the fusion vector n-1 and the fusion vector n can be subjected to matrix multiplication, the characteristics after matrix multiplication are input to a self-attention network, and the self-attention network carries out self-attention processing on the characteristics after matrix multiplication to obtain an attention weight matrix n-1_n. After the attention weight matrix n-1_n is obtained, the attention weight matrix n-1_n and the target ultrasonic signal can be subjected to matrix multiplication to obtain a fusion vector n-1_n.
In summary, the fusion vector corresponding to each vector combination, such as fusion vector 1_2, fusion vector 1_3, ..., fusion vector n-1_n, can be obtained; after the fusion vector corresponding to each vector combination is obtained, the fusion vectors corresponding to the vector combinations can be weighted to obtain the fused feature. For example, the weighting coefficient corresponding to each vector combination is obtained; the weighting coefficients corresponding to different vector combinations may be the same or different. Based on the fusion vectors and the weighting coefficients corresponding to the vector combinations, the fusion vector corresponding to each vector combination can be weighted to obtain the fused feature; the weighting process of the fusion vectors is not limited.
Obviously, through secondary feature fusion and primary feature fusion, the acoustic features and the signal features can be completely fused, so that the fused features can be obtained only by carrying out weighted summation operation.
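The secondary feature fusion of fig. 6B can be sketched similarly. Again a row-wise softmax stands in for the unspecified self-attention network, the pairwise combinations are enumerated with itertools, and all names and shapes are illustrative assumptions:

```python
import itertools
import numpy as np

def secondary_fuse(fusion_vectors, target_signal, weights):
    """Sketch of the fig. 6B secondary fusion: every pair of primary fusion
    vectors is matrix-multiplied, passed through a softmax stand-in for
    self-attention, matrix-multiplied with the target ultrasonic signal,
    and the resulting pair vectors are combined by a weighted sum."""
    pair_vectors = []
    for a, b in itertools.combinations(fusion_vectors, 2):
        scores = np.outer(a, b)                              # pair matrix product
        e = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn = e / e.sum(axis=-1, keepdims=True)             # stand-in attention
        pair_vectors.append(attn @ target_signal)            # pair fusion vector
    return sum(w * v for w, v in zip(weights, pair_vectors))
```

For n primary fusion vectors there are n*(n-1)/2 vector combinations, so the weighted sum runs over that many pair fusion vectors.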
According to the above technical scheme, in the embodiments of the present application, an ultrasonic signal of the surface of the slurry to be detected can be acquired in a non-contact manner, the material composition corresponding to the slurry to be detected is predicted based on the ultrasonic signal, and the battery quality corresponding to the slurry to be detected is detected based on the material composition. This provides a reliable battery quality detection mode with which the battery quality corresponding to the slurry to be detected can be accurately detected. Laser ultrasound is used to collect real-time signals during the slurry stirring process, and a machine learning algorithm analyzes and judges the material composition in real time, so that the quality of the slurry stirring process is detected online. Ultrasonic signals can be collected well even though the slurry cannot be contacted during stirring, features of different material compositions are extracted from the ultrasonic signals, and online battery quality detection is realized during the slurry stirring process, which can remarkably improve battery quality. The method can detect battery quality in real time during battery manufacturing, can be realized using only the ultrasonic signals of laser ultrasound, and can detect composition changes inside the battery to realize online real-time quality detection.
Based on the same application concept as the above method, an embodiment of the present application provides an online battery quality detection device, as shown in fig. 7, which is a schematic structural diagram of the device, where the device may include:
the collecting module 71 is used for collecting an initial ultrasonic signal of the surface of the slurry to be detected in a non-contact manner; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material;
an acquisition module 72, configured to acquire a target ultrasonic signal corresponding to the initial ultrasonic signal;
the processing module 73 is configured to obtain an acoustic feature corresponding to the target ultrasonic signal and a signal feature corresponding to the target ultrasonic signal, and fuse the acoustic feature and the signal feature to obtain a fused feature; predicting material components corresponding to the slurry to be detected based on the fused features;
and a detection module 74, configured to detect a battery quality corresponding to the slurry to be detected based on the material composition.
The acquisition module 71 is specifically configured to, when acquiring the initial ultrasonic signal of the surface of the slurry to be detected in a non-contact manner: emit pulsed laser light to the surface of the slurry to be detected through a pulsed laser, so that after receiving the pulsed laser light the slurry to be detected generates an initial ultrasonic signal through the thermoelastic effect; and acquire, in a non-contact manner through a vibration meter, the initial ultrasonic signal reflected by the surface of the slurry to be detected; wherein the initial ultrasonic signal comprises an ultrasonic A-scan signal.
For example, when acquiring the target ultrasonic signal corresponding to the initial ultrasonic signal, the acquisition module 72 is specifically configured to: remove unwanted signal components from the initial ultrasonic signal via a band-pass filter; and perform envelope processing on the filtered ultrasonic signal to obtain the target ultrasonic signal.
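The band-pass filtering and envelope step can be sketched as follows with SciPy. This is a minimal illustration: the 1-10 MHz passband, the 100 MHz sampling rate, the filter order, and the synthetic tone-burst test signal are all assumptions, since the patent does not specify them.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def preprocess_ascan(signal: np.ndarray, fs: float,
                     f_lo: float = 1e6, f_hi: float = 10e6) -> np.ndarray:
    """Band-pass filter an ultrasonic A-scan, then take its envelope.

    Cut-offs and filter order are placeholder choices, not from the patent.
    """
    # Zero-phase Butterworth band-pass removes out-of-band (unwanted) components
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, signal)
    # Envelope via the analytic signal (Hilbert transform)
    return np.abs(hilbert(filtered))

# Example: a noisy 5 MHz tone burst sampled at 100 MHz
fs = 100e6
t = np.arange(0, 20e-6, 1 / fs)                       # 2000 samples
burst = np.sin(2 * np.pi * 5e6 * t) * np.exp(-((t - 10e-6) ** 2) / (2e-6) ** 2)
rng = np.random.default_rng(0)
target = preprocess_ascan(burst + 0.1 * rng.standard_normal(t.size), fs)
```

The envelope (target ultrasonic signal) is non-negative and peaks where the burst is centered, which is what makes the later time-of-flight and attenuation features easy to read off.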
Illustratively, when acquiring the acoustic feature corresponding to the target ultrasonic signal and the signal feature corresponding to the target ultrasonic signal, the processing module 73 is specifically configured to: determine a target acoustic attenuation corresponding to the target ultrasonic signal based on an a priori functional relationship between ultrasonic signal and acoustic attenuation; determine a target sound velocity corresponding to the target ultrasonic signal based on an a priori functional relationship between ultrasonic signal and sound velocity; determine a target acoustic impedance corresponding to the target ultrasonic signal based on an a priori functional relationship between ultrasonic signal and acoustic impedance; determine the acoustic feature corresponding to the target ultrasonic signal based on the target acoustic attenuation, the target sound velocity, and the target acoustic impedance; and input the target ultrasonic signal into a trained feature extractor, which extracts and outputs the signal feature corresponding to the target ultrasonic signal.
Illustratively, when fusing the acoustic feature and the signal feature to obtain the fused feature, the processing module 73 is specifically configured to: if the acoustic feature comprises a plurality of acoustic sub-features, then for each acoustic sub-feature, matrix-multiply the acoustic sub-feature with the signal feature and apply self-attention to the product to obtain an attention weight matrix corresponding to that acoustic sub-feature; matrix-multiply the attention weight matrix with the target ultrasonic signal to obtain a fusion vector corresponding to that acoustic sub-feature; and perform feature fusion on the fusion vectors corresponding to the acoustic sub-features to obtain the fused feature.
For example, when performing feature fusion on the fusion vectors corresponding to the acoustic sub-features, the processing module 73 is specifically configured to: permute and combine the fusion vectors corresponding to the acoustic sub-features to obtain a plurality of vector combinations, each comprising two fusion vectors; for each vector combination, matrix-multiply the two fusion vectors in the combination and apply self-attention to the product to obtain an attention weight matrix corresponding to the combination; matrix-multiply the attention weight matrix with the target ultrasonic signal to obtain a fusion vector corresponding to the combination; and weight the fusion vectors corresponding to the vector combinations to obtain the fused feature.
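Taken together, the two-stage attention fusion above can be sketched as follows. Several modelling assumptions are made that the patent leaves open: a plain softmax over the outer product stands in for the unspecified self-attention step, all feature vectors and the target signal are assumed to share one length n, and uniform weights are used for the final weighting.

```python
import numpy as np
from itertools import combinations

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(a: np.ndarray, b: np.ndarray, signal: np.ndarray) -> np.ndarray:
    """Matrix-multiply two feature vectors, normalize the product into an
    attention weight matrix (softmax standing in for self-attention), and
    apply the weights to the target ultrasonic signal."""
    weights = softmax(np.outer(a, b), axis=-1)   # (n, n), rows sum to 1
    return weights @ signal                       # fusion vector, shape (n,)

def fuse(acoustic_subfeatures, signal_feature, signal):
    # Stage 1: fuse each acoustic sub-feature with the deep signal feature
    stage1 = [attend(a, signal_feature, signal) for a in acoustic_subfeatures]
    # Stage 2: fuse every pair of stage-1 vectors, then weight (uniform here)
    stage2 = [attend(u, v, signal) for u, v in combinations(stage1, 2)]
    return np.mean(stage2, axis=0)

rng = np.random.default_rng(0)
n = 8
subs = [rng.standard_normal(n) for _ in range(3)]   # 3 acoustic sub-features
fused = fuse(subs, rng.standard_normal(n), rng.standard_normal(n))
```

With three acoustic sub-features (attenuation, velocity, impedance), stage 2 produces C(3,2) = 3 pairwise fusion vectors before the final weighting.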
Illustratively, when predicting the material composition corresponding to the slurry to be detected based on the fused feature, the processing module 73 is specifically configured to: input the fused feature into a trained material-composition classifier, which predicts and outputs the material composition of the slurry based on the fused feature. When detecting the battery quality corresponding to the slurry based on the material composition, the detection module 74 is specifically configured to: determine that the battery quality satisfies the discharge condition if the material composition matches the expected material composition; and determine that the battery quality does not satisfy the discharge condition if it does not.
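The final pass/fail decision reduces to comparing the predicted composition against the expected recipe. A minimal sketch follows; the component names, the fractions, and the 2% tolerance are invented for illustration, since the patent only says the composition must "meet" the expected composition.

```python
def meets_discharge_condition(predicted: dict, expected: dict,
                              tol: float = 0.02) -> bool:
    """The slurry passes (may be discharged from the mixer) only if every
    predicted component fraction is within `tol` of the expected recipe.
    The tolerance value is an assumed illustration."""
    return all(abs(predicted.get(name, 0.0) - frac) <= tol
               for name, frac in expected.items())

expected = {"active_material": 0.60, "binder": 0.05, "solvent": 0.35}
good = meets_discharge_condition(
    {"active_material": 0.61, "binder": 0.05, "solvent": 0.34}, expected)
bad = meets_discharge_condition(
    {"active_material": 0.50, "binder": 0.15, "solvent": 0.35}, expected)
```

A real classifier would emit the predicted fractions (or a class label) from the fused feature; only the comparison logic is shown here.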
Based on the same application concept as the above method, an electronic device is provided in an embodiment of the present application, and as shown in fig. 8, the electronic device includes: a processor 81 and a machine-readable storage medium 82, the machine-readable storage medium 82 storing machine-executable instructions executable by the processor 81; the processor 81 is configured to execute machine executable instructions to implement the battery quality online detection method disclosed in the above examples of the present application.
Based on the same application concept as the above method, the embodiment of the application further provides a machine-readable storage medium, where a plurality of computer instructions are stored on the machine-readable storage medium, and when the computer instructions are executed by a processor, the method for online detecting battery quality disclosed in the above example of the application can be implemented.
The machine-readable storage medium may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions or data. For example, the machine-readable storage medium may be: RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard drive), a solid-state drive, any type of storage disk (e.g., an optical disk or DVD), a similar storage medium, or a combination thereof.
The system, apparatus, module or unit set forth in the above embodiments may be implemented by a computer chip or entity, or by a product having a certain function. A typical implementation device is a computer, which may take the form of a personal computer, laptop computer, cellular telephone, camera phone, smart phone, personal digital assistant, media player, navigation device, email device, game console, tablet computer, wearable device, or a combination of any of these devices.
For convenience of description, the above device is described in terms of functions divided into various units. Of course, when implementing the present application, the functions of the units may be implemented in one or more pieces of software and/or hardware.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Moreover, these computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing is merely an example of the present application and is not intended to limit it. Various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.

Claims (10)

1. An on-line detection method for battery quality, comprising the steps of:
acquiring an initial ultrasonic signal of the surface of the slurry to be detected through non-contact; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material;
acquiring a target ultrasonic signal corresponding to the initial ultrasonic signal;
acquiring acoustic features corresponding to the target ultrasonic signals and signal features corresponding to the target ultrasonic signals, and fusing the acoustic features and the signal features to obtain fused features;
predicting material components corresponding to the slurry to be detected based on the fused features;
and detecting the battery quality corresponding to the slurry to be detected based on the material composition.
2. The method of claim 1, wherein
the collecting, in a non-contact manner, of the initial ultrasonic signal from the surface of the slurry to be detected comprises:
transmitting pulsed laser light onto the surface of the slurry to be detected via a pulsed laser source, so that the slurry to be detected, after receiving the pulsed laser, generates an initial ultrasonic signal through the thermoelastic effect; and
collecting, in a non-contact manner via a vibrometer, the initial ultrasonic signal reflected from the surface of the slurry to be detected;
wherein the initial ultrasonic signal comprises an ultrasonic A-scan signal.
3. The method of claim 1, wherein
the acquiring of the target ultrasonic signal corresponding to the initial ultrasonic signal comprises:
removing unwanted signal components from the initial ultrasonic signal via a band-pass filter; and
performing envelope processing on the filtered ultrasonic signal to obtain the target ultrasonic signal.
4. The method of claim 1, wherein the acquiring the acoustic characteristic corresponding to the target ultrasonic signal and the signal characteristic corresponding to the target ultrasonic signal comprises:
determining a target sound attenuation corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the sound attenuation; determining a target sound velocity corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the sound velocity; determining a target acoustic impedance corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the acoustic impedance; determining an acoustic feature corresponding to the target ultrasonic signal based on the target acoustic attenuation, the target sound velocity, and the target acoustic impedance;
inputting the target ultrasonic signal into a trained feature extractor, extracting, by the feature extractor, the signal feature corresponding to the target ultrasonic signal, and outputting the signal feature.
5. The method of claim 1, wherein
the fusing of the acoustic features and the signal features to obtain fused features includes:
if the acoustic features comprise a plurality of acoustic sub-features, for each acoustic sub-feature, performing matrix multiplication on the acoustic sub-feature and the signal feature, and performing self-attention processing on the features after matrix multiplication to obtain an attention weight matrix corresponding to the acoustic sub-feature; performing matrix multiplication on the attention weight matrix and the target ultrasonic signal to obtain a fusion vector corresponding to the acoustic sub-feature;
and carrying out feature fusion on fusion vectors corresponding to each acoustic sub-feature to obtain the fused features.
6. The method of claim 5, wherein the feature fusion is performed on the fusion vector corresponding to each acoustic sub-feature to obtain the fused feature, and the method comprises:
the fusion vectors corresponding to the acoustic sub-features are arranged and combined to obtain a plurality of vector combinations, wherein each vector combination comprises two fusion vectors; for each vector combination, performing matrix multiplication on two fusion vectors in the vector combination, and performing self-attention processing on the characteristics after matrix multiplication to obtain an attention weight matrix corresponding to the vector combination; performing matrix multiplication on the attention weight matrix and the target ultrasonic signal to obtain a fusion vector corresponding to the vector combination;
and weighting the fusion vectors corresponding to the vector combinations to obtain the fused feature.
7. The method of claim 1, wherein
the predicting the material components corresponding to the slurry to be detected based on the fused features comprises the following steps:
inputting the fused features into a trained material component classifier, and predicting and outputting material components corresponding to the slurry to be detected by the material component classifier based on the fused features;
the detecting the battery quality corresponding to the slurry to be detected based on the material composition comprises the following steps:
if the material composition meets the expected material composition, determining that the battery quality meets a discharging condition; and if the material composition does not meet the expected material composition, determining that the battery quality does not meet the discharging condition.
8. An on-line battery quality detection device, comprising:
a collection module, configured to collect, in a non-contact manner, an initial ultrasonic signal from the surface of the slurry to be detected; the slurry to be detected is obtained by mixing and stirring a positive electrode solid-state battery material and a negative electrode solid-state battery material;
the acquisition module is used for acquiring a target ultrasonic signal corresponding to the initial ultrasonic signal;
The processing module is used for acquiring acoustic features corresponding to the target ultrasonic signals and signal features corresponding to the target ultrasonic signals, and fusing the acoustic features and the signal features to obtain fused features; predicting material components corresponding to the slurry to be detected based on the fused features;
and the detection module is used for detecting the battery quality corresponding to the slurry to be detected based on the material composition.
9. The apparatus of claim 8, wherein
the collection module is specifically configured to, when collecting the initial ultrasonic signal from the surface of the slurry to be detected in a non-contact manner: transmit pulsed laser light onto the surface of the slurry to be detected via a pulsed laser source, so that the slurry to be detected, after receiving the pulsed laser, generates an initial ultrasonic signal through the thermoelastic effect; and collect, in a non-contact manner via a vibrometer, the initial ultrasonic signal reflected from the surface of the slurry to be detected;
wherein the initial ultrasonic signal comprises an ultrasonic A-scan signal;
the acquisition module is specifically configured to, when acquiring the target ultrasonic signal corresponding to the initial ultrasonic signal: remove unwanted signal components from the initial ultrasonic signal via a band-pass filter; and perform envelope processing on the filtered ultrasonic signal to obtain the target ultrasonic signal;
The processing module is specifically configured to, when acquiring the acoustic feature corresponding to the target ultrasonic signal and the signal feature corresponding to the target ultrasonic signal: determining a target sound attenuation corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the sound attenuation; determining a target sound velocity corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the sound velocity; determining a target acoustic impedance corresponding to the target ultrasonic signal based on a priori functional relation between the ultrasonic signal and the acoustic impedance; determining an acoustic feature corresponding to the target ultrasonic signal based on the target acoustic attenuation, the target sound velocity, and the target acoustic impedance; inputting the target ultrasonic signal to a trained feature extractor, extracting signal features corresponding to the target ultrasonic signal by the feature extractor, and outputting the signal features;
the processing module is specifically configured to, when fusing the acoustic feature and the signal feature to obtain a fused feature: if the acoustic features comprise a plurality of acoustic sub-features, for each acoustic sub-feature, performing matrix multiplication on the acoustic sub-feature and the signal feature, and performing self-attention processing on the features after matrix multiplication to obtain an attention weight matrix corresponding to the acoustic sub-feature; performing matrix multiplication on the attention weight matrix and the target ultrasonic signal to obtain a fusion vector corresponding to the acoustic sub-feature; feature fusion is carried out on fusion vectors corresponding to each acoustic sub-feature, and the fused features are obtained;
the processing module is specifically configured to, when performing feature fusion on the fusion vectors corresponding to the acoustic sub-features to obtain the fused feature: permute and combine the fusion vectors corresponding to the acoustic sub-features to obtain a plurality of vector combinations, each comprising two fusion vectors; for each vector combination, matrix-multiply the two fusion vectors in the combination and apply self-attention to the product to obtain an attention weight matrix corresponding to the combination; matrix-multiply the attention weight matrix with the target ultrasonic signal to obtain a fusion vector corresponding to the combination; and weight the fusion vectors corresponding to the vector combinations to obtain the fused feature;
the processing module is specifically configured to, when predicting the material composition corresponding to the slurry to be detected based on the fused feature: input the fused feature into a trained material-composition classifier, which predicts and outputs the material composition of the slurry based on the fused feature; and the detection module is specifically configured to, when detecting the battery quality corresponding to the slurry to be detected based on the material composition: determine that the battery quality satisfies the discharge condition if the material composition matches the expected material composition; and determine that the battery quality does not satisfy the discharge condition if it does not.
10. An electronic device, comprising: a processor and a machine-readable storage medium storing machine-executable instructions executable by the processor; the processor is configured to execute machine executable instructions to implement the method of any of claims 1-7.
CN202311189444.XA 2023-09-14 2023-09-14 Battery quality on-line detection method, device and equipment Pending CN117288830A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311189444.XA CN117288830A (en) 2023-09-14 2023-09-14 Battery quality on-line detection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311189444.XA CN117288830A (en) 2023-09-14 2023-09-14 Battery quality on-line detection method, device and equipment

Publications (1)

Publication Number Publication Date
CN117288830A 2023-12-26

Family

ID=89252824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311189444.XA Pending CN117288830A (en) 2023-09-14 2023-09-14 Battery quality on-line detection method, device and equipment

Country Status (1)

Country Link
CN (1) CN117288830A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118362635A (en) * 2024-06-14 2024-07-19 无锡领声科技有限公司 Slurry quality real-time detection and analysis method based on ultrasonic image


Similar Documents

Publication Publication Date Title
Xu et al. Ultrasonic echo waveshape features extraction based on QPSO-matching pursuit for online wear debris discrimination
CN117288830A (en) Battery quality on-line detection method, device and equipment
DeVries et al. Instance selection for gans
CN111598170B (en) Crack detection probability evaluation method considering model selection uncertainty
Bai et al. Characterization of defects using ultrasonic arrays: a dynamic classifier approach
Sawant et al. Unsupervised learning framework for temperature compensated damage identification and localization in ultrasonic guided wave SHM with transfer learning
CN110702792B (en) Alloy tissue ultrasonic detection classification method based on deep learning
CN115629127B (en) Container defect analysis method, device and equipment and readable storage medium
Banbrook et al. How to extract Lyapunov exponents from short and noisy time series
Patel et al. Investigation of uncertainty of deep learning-based object classification on radar spectra
CN113887454A (en) Non-contact laser ultrasonic detection method based on convolutional neural network point source identification
Wang Wavelet Transform Based Feature Extraction for Ultrasonic Flaw Signal Classification.
Kim et al. Damage classification using Adaboost machine learning for structural health monitoring
CN117457017A (en) Voice data cleaning method and electronic equipment
Olofsson et al. Maximum a posteriori deconvolution of sparse ultrasonic signals using genetic optimization
Aldrin et al. Uncertainty quantification of resonant ultrasound spectroscopy for material property and single crystal orientation estimation on a complex part
Wilcox et al. Exploiting the Full Data Set from Ultrasonic Arrays by Post‐Processing
EP2118674A1 (en) Automatic procedure for merging tracks and estimating harmonic combs
CN115047448A (en) Indoor target rapid detection method and system based on acoustic-electromagnetic intermodulation
Thati et al. Identification of ultra high frequency acoustic coda waves using deep neural networks
Chiou et al. Ultrasonic flaw detection using neural network models and statistical analysis: Simulation studies
Barzegar et al. Classification Functions and Optimization Algorithms for Debonding Detection in Adhesively Bonded Lap-joints through Ultrasonic Guided Waves
Nabil et al. Ultrasonic signals parameters estimation based on differential evolution
Ou et al. Underwater ordnance classification using time-frequency signatures of backscattering signals
Karthikeyan et al. A heuristic complex probabilistic neural network system for partial discharge pattern classification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination