CN115687330A - Automobile fault code analysis method, storage medium and electronic equipment - Google Patents

Automobile fault code analysis method, storage medium and electronic equipment

Info

Publication number
CN115687330A
CN115687330A
Authority
CN
China
Prior art keywords
model
automobile
data
training
fault code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211429024.XA
Other languages
Chinese (zh)
Inventor
司徒俊豪
张江波
孙涛
蔡鸿平
吴浩驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Mingrui Data Technology Co ltd
Original Assignee
Shenzhen Mingrui Data Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Mingrui Data Technology Co ltd filed Critical Shenzhen Mingrui Data Technology Co ltd
Priority to CN202211429024.XA priority Critical patent/CN115687330A/en
Publication of CN115687330A publication Critical patent/CN115687330A/en
Pending legal-status Critical Current


Landscapes

  • Vehicle Cleaning, Maintenance, Repair, Refitting, And Outriggers (AREA)

Abstract

The invention relates to an automobile fault code analysis method, a storage medium and electronic equipment. Plain-language fault code record data is acquired, and the data is cleaned and made into a standardized label training set. Model training is then performed: BERT is adopted as the training model, a cross-entropy function or a focal loss function is adopted as the loss function, an Adam optimizer is adopted to compute and update the network parameters, and distributed training is carried out through the PyTorch framework. The trained model is used to analyze automobile fault codes. By means of the BERT model's strong attention mechanism, the scheme learns the characteristic information points of automobile fault codes, decodes according to the processed labels, recommends to the user the automobile parts or subsystems that may be faulty, and delimits the inspection scope. Even when a used-car dealer lacks professional knowledge, the plain meaning of a fault code can be obtained, the location of the device to be checked can be quickly identified, and the maintenance cost and vehicle valuation can be conveniently and quickly assessed.

Description

Automobile fault code analysis method, storage medium and electronic equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an automobile fault code analysis method, a storage medium and electronic equipment.
Background
China's used-car market has developed since the 1980s, a history of some 40 years. Unlike the new-car business, the core work of used-car dealers is the inspection and acquisition of used vehicles and their resale, in which vehicle inspection is critical: errors or oversights there can cause large losses. What concerns the dealer most in appraising a used vehicle is whether the vehicle has had a major accident and which important parts have been replaced; once this is detected, the dealer holds the advantage in bargaining and can acquire the vehicle at a relatively low cost, saving a great deal of money.
Vehicle maintenance personnel consult either the standard codes (SAE/ISO) or the vehicle-manufacturer-controlled codes. A fault code contains five characters, the first being a letter and the following four being digits. From the fault description, the possible cause and the fault handling method given in the standard, maintenance personnel can determine which automobile parts need to be checked or replaced.
However, non-maintenance personnel lack this professional knowledge. On seeing a fault code, they do not know where the fault occurred or which parts have been replaced, or the corresponding vehicle-manufacturer-controlled codes are unavailable. Even those able to interpret the fault code often cannot accurately locate the vehicle parts that need inspection, because the fault code alone does not point clearly to the faulty component.
Disclosure of Invention
Therefore, it is necessary to provide an automobile fault code analysis method, a storage medium and an electronic device, to solve the problem that current used-car dealers, lacking professional knowledge, cannot automatically determine the faulty part or the maintenance cost directly from the automobile fault code.
On one hand, the invention provides an automobile fault code analysis method, which comprises the following steps:
s10: acquiring data of popular records of fault codes, and cleaning the data to make a standardized label training set;
s20: importing a label training set for model training, wherein: adopting BERT as a training model, adopting a cross entropy function or a focus loss function as a loss function, adopting an Adam optimizer to calculate and update network parameters, and carrying out distributed training through a Pythrch frame;
s30: and inputting the automobile fault code into the trained model, and analyzing to obtain the automobile parts needing to be overhauled.
Further, step S10 includes:
s11: acquiring popular record data of fault codes;
s12: cleaning data, and removing abnormal values in popular records of fault codes;
s13: making a standard training set label containing characteristics, wherein the characteristics comprise brands, vehicle types, system numbers and fault descriptions;
s14: cleaning abnormal data in the labels of the training set;
s15: and (3) extracting a verification set by layered sampling, and ensuring that each label of the verification set has at least more than one piece of test data, and all data of the verification set is different from the data of the training set.
Further, the step S20 of building the model includes:
building a mapping dictionary between labels and model input indices;
stacking 12 layers of the BERT model, and normalizing with a softmax (normalized exponential) function to obtain the weights over Value, thereby achieving the attention effect;
raising or reducing the dimension to match the number of labels, using a fully connected layer whose neuron count matches the label count to perform the dimension conversion;
adding a Dropout (random inactivation) layer to prevent overfitting of the model.
Further, the focal loss function is adopted as the loss function.
Further, when the fault code is analyzed: fault codes are aggregated according to the system ID reported by the automobile; the parent catalog of each inferred automobile part is matched and pushed along with it; and the parts of the same system are displayed in order of frequency, with higher frequencies ranked first.
Further, the learning rate decay function adopts a warm-up learning rate scheme: training starts with a small initial learning rate that is gradually increased until a preset larger learning rate is reached.
Further, in step S20, the model is deployed with NVIDIA's Triton Inference Server, and the trained PyTorch model is converted to TensorRT for use.
In another aspect, the present invention further provides an electronic device, which includes a computing module for performing the steps of the automobile fault code analysis method described above.
In still another aspect, the present invention further provides a computer readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the steps of the automobile fault code analysis method are executed.
In still another aspect, the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores computer instructions, and when the computer instructions run, the processor performs the steps of the automobile fault code analysis method described above.
According to the technical scheme, the characteristic information points of automobile fault codes are learned by means of the BERT model's strong attention mechanism, decoding is carried out according to the processed labels, automobile parts or subsystems that may be faulty are recommended to the user, and the inspection scope is delimited. Even when a used-car dealer lacks professional knowledge, the plain meaning of the fault code can be obtained, the location of the device to be checked can be quickly identified, and the maintenance cost and vehicle valuation can be conveniently and quickly assessed.
Drawings
FIG. 1 is a schematic flowchart illustrating steps of an embodiment of a method for analyzing a vehicle fault code according to the present invention;
FIG. 2 is a schematic diagram of Warmup variation in an automobile fault code analysis method according to the present invention;
FIG. 3 is a ring-all-reduce gradient updating diagram in the method for analyzing the automobile fault code according to the present invention;
FIG. 4 is the GoogLeNet original computation graph in the method for analyzing the automobile fault code according to the invention;
FIG. 5 is a schematic diagram of vertical operator fusion in the method for analyzing the automobile fault code according to the present invention;
FIG. 6 is a schematic diagram of horizontal operator fusion in the method for analyzing the automobile fault code according to the present invention.
Detailed Description
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail below. It is understood that the specific details described below are merely exemplary of some embodiments of the invention, and that the invention may be practiced in many other embodiments than as described herein. All other embodiments, which can be obtained by a person skilled in the art based on the embodiments of the present invention without any creative effort, belong to the protection scope of the present invention.
It will be understood that when an element is referred to as being "secured to" another element, it can be directly on the other element or intervening elements may also be present. When an element is referred to as being "connected" to another element, it can be directly connected to the other element or intervening elements may also be present. The terms "vertical," "horizontal," "left," "right," and the like as used herein are for illustrative purposes only and do not represent the only embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In an embodiment, referring to fig. 1, the present invention provides a method for analyzing a fault code of an automobile, including the steps of:
s10: acquiring data of popular records of fault codes, and cleaning the data to make a standardized label training set; the method comprises the following specific steps:
acquiring popular record data of fault codes; cleaning data, and removing abnormal values in popular records of fault codes; making a standard training set label containing characteristics, wherein the characteristics comprise brands, vehicle types, system numbers and fault descriptions; cleaning abnormal data in the labels of the training set; and (3) extracting a verification set by layered sampling, and ensuring that each label of the verification set has at least more than one piece of test data, and all data of the verification set is different from the data of the training set.
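As an illustration only, the stratified extraction described above might look like the following sketch using pandas and scikit-learn; the file name and the column names "text" and "label" are assumptions of this sketch, not details from the original scheme:

```python
# A minimal sketch of the stratified validation split, assuming a CSV of
# cleaned fault code records with hypothetical columns "text" and "label".
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("fault_code_records.csv")  # hypothetical file name

# Keep only labels with at least 2 samples so every label can appear
# in both the training and validation splits.
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts >= 2].index)]

# stratify=df["label"] samples per label, so each label is represented in
# the validation set; the two splits are disjoint by construction.
train_df, val_df = train_test_split(
    df, test_size=0.1, stratify=df["label"], random_state=42
)
```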
S20: importing the label training set for model training, wherein: BERT is adopted as the training model, a cross-entropy function or a focal loss function is adopted as the loss function, an Adam optimizer is adopted to compute and update the network parameters, and distributed training is carried out through the PyTorch framework;
wherein building the model includes:
building a mapping dictionary between labels and model input indices;
stacking 12 layers of the BERT model, and normalizing with a softmax (normalized exponential) function to obtain the weights over Value, thereby achieving the attention effect;
raising or reducing the dimension to match the number of labels, using a fully connected layer whose neuron count matches the label count to perform the dimension conversion;
adding a Dropout (random inactivation) layer to prevent overfitting of the model.
In this embodiment, the learning rate decay function adopts a warm-up learning rate scheme: training starts with a small initial learning rate that is gradually increased until a preset larger learning rate is reached.
When the model is deployed, NVIDIA's Triton Inference Server is used, and the trained PyTorch model is converted to TensorRT for use.
It should be noted that BERT stands for Bidirectional Encoder Representations from Transformers and is a natural language processing model.
The PyTorch framework is an open-source Python machine learning library based on Torch, used for applications such as natural language processing. PyTorch was introduced by the Facebook AI Research lab (FAIR) on the basis of Torch. It is a Python-based scientific computing package that provides two high-level features: 1. tensor computation with strong GPU acceleration (like NumPy); 2. deep neural networks built on an automatic differentiation system.
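For illustration, a tiny sketch of these two features:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(3, 3, device=device, requires_grad=True)  # GPU-accelerated tensor
y = (x * x).sum()   # builds an autograd computation graph
y.backward()        # automatic differentiation
print(x.grad)       # dy/dx = 2x
```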
TensorRT is a high-performance deep learning inference SDK developed by NVIDIA. As a high-performance deep learning inference optimizer and runtime acceleration library, it provides low-latency, high-throughput deployment inference for deep learning applications.
It should be noted that the names BERT, PyTorch, TensorRT, etc. mentioned above are well known in the field of deep learning and are therefore not explained in detail.
S30: inputting the automobile fault code into the trained model, and analyzing it to obtain the automobile parts that need inspection or repair. When the fault code is analyzed: fault codes are aggregated according to the system ID reported by the automobile; the parent catalog of each inferred automobile part is matched and pushed along with it; and the parts of the same system are displayed in order of frequency, with higher frequencies ranked first.
According to the technical scheme, the characteristic information points of automobile fault codes are learned by means of the BERT model's strong attention mechanism, decoding is carried out according to the processed labels, automobile parts or subsystems that may be faulty are recommended to the user, and the inspection scope is delimited. Even when a used-car dealer lacks professional knowledge, the plain meaning of the fault code can be obtained, the location of the device to be checked can be quickly identified, and the maintenance cost and vehicle valuation can be conveniently and quickly assessed.
In another aspect, the present invention further provides an electronic device, which includes a computing module for performing the steps of the automobile fault code analysis method described above.
In still another aspect, the present invention further provides a computer readable storage medium, on which computer instructions are stored, and when the computer instructions are executed, the steps of the automobile fault code analysis method are executed.
In yet another aspect, the present invention further provides an electronic device, which includes a memory and a processor, where the memory stores computer instructions, and the processor executes the steps of the automobile fault code analysis method as described above when the computer instructions are executed.
In one embodiment of the present invention:
1. The model is built as follows:
a. selecting a pre-training model, and selecting BERT-BASE;
b. Building a mapping dictionary between labels and model input indices, so that the corresponding label can be conveniently retrieved after decoding;
c. Network construction. The BERT model is selected mainly for the following two reasons. First, by virtue of BERT's MLM (Masked Language Model) pre-training task, ambiguity caused by erroneous input, which would otherwise bias the prediction result, can be corrected to a certain extent, so the model can also perform well on the challenge set. Second, the Multi-Head Attention mechanism is another, more important feature: it focuses the model's attention on the features in the fault description that enable the model to classify correctly. The multi-head attention mechanism plays the following important roles:
a) It extends the model's ability to focus on different positions.
b) It provides multiple "representation subspaces" for the attention layer. A multi-head attention mechanism has not just one head but several sets of Query/Key/Value weight matrices, each set randomly initialized, which greatly increases the diversity of features. After training, each set serves to project the input embedding into a different representation subspace. The focus of attention learned by each head may differ slightly, giving the model a larger capacity.
BERT's ability to link context also gives it an advantage over traditional RNN or LSTM models;
attention is paid to the calculation formula (1), when the calculation is actually carried out in the code, the calculation is carried out in a matrix form, BERT has an Encoder part of a Transformer, the default is 12-layer stacking, and each position of the Encoder can notice all positions of the Encoder in the previous layer. The super parameter can be adjusted according to the needs of the user, the user adopts a default 12 layers, and after the normalization through a softmax (normalized exponential function), the obtained Value is the weight of Value, so that the attention effect is achieved.
Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V    (1)

where the scaling divides by √d_k, and d_k is the last dimension of Q, K and V.
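For illustration, a minimal PyTorch sketch of formula (1); it assumes Q, K and V are (batch, seq_len, d_k) tensors and is not code from the original scheme:

```python
import math
import torch

def scaled_dot_product_attention(Q, K, V):
    """Formula (1): Attention(Q, K, V) = softmax(Q·K^T / sqrt(d_k))·V."""
    d_k = Q.size(-1)                                    # last dimension of Q, K, V
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)   # (batch, seq, seq)
    weights = torch.softmax(scores, dim=-1)             # attention weights over Value
    return weights @ V                                  # (batch, seq, d_k)
```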
f. After the 12-layer encoder computation of BERT, the dimension must be raised or reduced to match the number of labels; a fully connected layer whose neuron count matches the label count performs the dimension conversion;
g. Finally, a layer of Dropout (random inactivation) is added to prevent overfitting of the model. If necessary, a softmax layer can be added for normalization, but since the final result is the index of the maximum value, softmax does not affect the result, so we do not add it.
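Putting steps a to g together, a minimal sketch of such a network using the Hugging Face transformers library; the checkpoint name "bert-base-chinese", the dropout rate of 0.1 and num_labels are assumptions of the sketch:

```python
import torch.nn as nn
from transformers import BertModel

class FaultCodeClassifier(nn.Module):
    """BERT-BASE (12 encoder layers by default) + Dropout + a fully
    connected layer whose neuron count matches the number of labels."""
    def __init__(self, num_labels, pretrained="bert-base-chinese"):  # assumed checkpoint
        super().__init__()
        self.bert = BertModel.from_pretrained(pretrained)
        self.dropout = nn.Dropout(0.1)   # random inactivation against overfitting
        self.fc = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask, token_type_ids):
        out = self.bert(input_ids=input_ids,
                        attention_mask=attention_mask,
                        token_type_ids=token_type_ids)
        logits = self.fc(self.dropout(out.pooler_output))
        return logits  # argmax over logits gives the label index; no softmax needed
```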
2. Selection of loss function
In this solution, either of the following two loss functions can be used:
a. Since this is a classification task, the classical Cross Entropy can be selected, or Focal Loss, which performs better in multi-class settings, can be adopted;
b. Focal Loss was designed to solve the severe imbalance between positive and negative samples in one-stage object detection; it reduces the weight that the large number of easy negative samples occupies in training. Comparing formula (2) and formula (3) below shows that Focal Loss adds to the original cross-entropy loss a modulating factor based on the prediction probability p_t and a hyperparameter γ.

CE(p_t) = -log(p_t)    (2)

FL(p_t) = -(1 - p_t)^γ · log(p_t)    (3)
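For illustration, a minimal PyTorch sketch of formula (3) for the multi-class case; the value γ = 2.0 is a common default and an assumption here:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """FL(p_t) = -(1 - p_t)^gamma · log(p_t), formula (3) above.
    logits: (batch, num_labels); targets: (batch,) true class indices."""
    log_pt = F.log_softmax(logits, dim=-1)
    log_pt = log_pt.gather(1, targets.unsqueeze(1)).squeeze(1)  # log p_t of true class
    pt = log_pt.exp()
    return (-(1.0 - pt) ** gamma * log_pt).mean()
```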
3. Optimizer selection
a. Adam (Adaptive Moment Estimation), an optimizer with excellent overall performance in most situations, is selected;
b. The core formulas are shown in (4) below. In addition to storing an exponentially decaying average of past squared gradients v_t, as the Adadelta and RMSprop optimizers do, Adam also maintains an exponentially decaying average of past gradients m_t, similar to momentum:

First-order moment: m_t = β₁·m_{t-1} + (1 - β₁)·g_t

Second-order moment: v_t = β₂·v_{t-1} + (1 - β₂)·g_t²

First-order bias correction: m̂_t = m_t / (1 - β₁^t)

Second-order bias correction: v̂_t = v_t / (1 - β₂^t)

Parameter update: θ_{t+1} = θ_t - η·m̂_t / (√v̂_t + ε)    (4)

Since m_t and v_t are initialized as 0 vectors, they are biased toward 0, hence the bias corrections. In this example, β₁ = 0.9, β₂ = 0.999, ε = 10⁻⁸.
c. The learning rate decay function is selected in a Warmup (learning-rate warm-up) mode: training begins with a small initial learning rate, which then increases a little at each step until the initially set larger learning rate is reached (at this point the warm-up is complete); training then proceeds from the initially set learning rate, which decays over the rest of training. This speeds up model convergence and yields better results; the effect is shown in fig. 2.
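For illustration, a sketch of this warm-up schedule combined with the Adam hyperparameters above, using the get_linear_schedule_with_warmup helper from the transformers library; the peak learning rate and the step counts are assumptions:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = FaultCodeClassifier(num_labels=1000)   # num_labels is an assumption
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5,   # peak lr assumed
                             betas=(0.9, 0.999), eps=1e-8)

# The lr rises linearly from 0 to the preset value during the warm-up
# steps, then decays over the remaining training steps (cf. fig. 2).
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=500, num_training_steps=10000)

# In the training loop:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```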
4. Distributed training
In this embodiment, the PyTorch framework is adopted, using PyTorch's distributed training interface DDP (DistributedDataParallel), which is faster than DP (DataParallel). DDP supports multiple machines, multiple GPUs and multiple processes, whereas DP supports only a single machine with multiple GPUs in a single process; the gradient update strategy is ring-all-reduce, as shown in fig. 3.
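For illustration, a minimal sketch of the DDP setup; it assumes a launch via torchrun (for example `torchrun --nproc_per_node=4 train.py`) and reuses the FaultCodeClassifier from the earlier sketch:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data.distributed import DistributedSampler

dist.init_process_group(backend="nccl")        # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

model = FaultCodeClassifier(num_labels=1000).cuda(local_rank)  # num_labels assumed
model = DDP(model, device_ids=[local_rank])    # gradients synced via ring-all-reduce

# Each process reads a distinct shard of the training data:
# sampler = DistributedSampler(train_dataset)
```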
5. Deployment
a. The Triton Inference Server developed by NVIDIA is used. The Triton Inference Server is NVIDIA's machine learning model server. Triton runs on both CPU and GPU, and it aims to exploit the capabilities of the GPU through advanced features such as dynamic batching, concurrent model execution, model prioritization, and model hot loading/unloading.
b. When the model is deployed to a Triton server, the best results are achieved by converting the trained TensorFlow or PyTorch model to TensorRT:
a) TensorRT can compress, optimize, and deploy the network at runtime without framework overhead. TensorRT optimizes by combining layers and selecting kernels, and performs normalization and conversion to optimal matrix operations according to the specified precision, thereby improving the network's latency, throughput, and efficiency.
b) Kernels: a function run on a GPU is called a kernel, and TensorRT performs its computation through kernel calls. Frequent kernel calls bring performance overhead, mainly reflected in: scheduling overhead of the dataflow graph, startup overhead of the GPU kernel functions, and data transfer overhead between kernel functions. In most networks, a convolution (conv) layer, a bias layer and an activation (relu) layer appear consecutively; these three layers require three separate calls to the corresponding cuDNN APIs, yet in fact the three operators can be fused (merged) into a single CBR structure. The officially provided GoogLeNet network optimization diagrams can aid understanding, as shown in FIGS. 4, 5 and 6.
c) A 5x5 convolution can be further optimized by replacing it with two 3x3 convolutions. Furthermore, the network weights of TensorRT can be converted to FP16 (half precision), and INT8 can be considered where the precision requirement permits, which greatly accelerates inference of the neural network model and reduces the model's file size. If no GPU is available, the model is converted to ONNX and deployed to Triton on the CPU; compared with directly deploying the original TensorFlow or PyTorch model, this is at least 10 times faster;
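For illustration, a sketch of this CPU fallback path, exporting the trained PyTorch model to ONNX for Triton; the sequence length, opset version and file name are assumptions:

```python
import torch

# `model` is the trained FaultCodeClassifier from the earlier sketches.
model.eval()
seq_len = 128  # max_seq_length from training (assumed)
dummy = tuple(torch.zeros(1, seq_len, dtype=torch.int64) for _ in range(3))

torch.onnx.export(
    model,
    dummy,                       # (input_ids, attention_mask, token_type_ids)
    "model.onnx",
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["logits"],
    opset_version=13,            # assumed opset
    dynamic_axes={n: {0: "batch"} for n in
                  ["input_ids", "attention_mask", "token_type_ids", "logits"]},
)
```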
c. Inference data processing. The input data is processed into the same inputs as in training; the BERT model input comprises three parts: input_ids, attention_mask and token_type_ids. The data_type depends on the type of the deployed model: if the deployed model is TensorRT, data_type = INT32; if it is ONNX, data_type = INT64. max_seq_length is the length used during training. The data is processed into a JSON-serialized format and a POST request is sent to the Triton server; custom decoding is carried out when the model returns a result;
d. A request is sent for inference, obtaining result_json data;
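For illustration, a sketch of steps c and d: tokenizing a fault description and POSTing it to Triton's KServe v2 HTTP endpoint. The server URL, the model name and the tokenizer checkpoint are assumptions; the INT64 datatype corresponds to an ONNX deployment (INT32 for TensorRT), as noted above:

```python
import requests
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")  # assumed checkpoint
enc = tokenizer("hypothetical fault description", max_length=128,
                padding="max_length", truncation=True)

def v2_tensor(name, data):
    # Triton KServe v2 JSON tensor; INT64 for ONNX, INT32 for TensorRT.
    return {"name": name, "shape": [1, 128], "datatype": "INT64", "data": data}

payload = {"inputs": [v2_tensor("input_ids", enc["input_ids"]),
                      v2_tensor("attention_mask", enc["attention_mask"]),
                      v2_tensor("token_type_ids", enc["token_type_ids"])]}

resp = requests.post(
    "http://localhost:8000/v2/models/fault_code_bert/infer",  # assumed URL/model name
    json=payload)
result_json = resp.json()  # logits are returned in result_json["outputs"]
```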
e. Decoding. Decoding is tailored to our business requirements; at the present stage the application layer displays results to the user according to the following three logics:
a) Fault codes are aggregated according to the system ID reported by the automobile;
b) The parent catalog of each inferred automobile part is matched and pushed along with it;
c) The parts of the same system are displayed in order of frequency, with higher frequencies ranked first.
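For illustration, a sketch of these three display logics; the record fields system_id, part and parent_catalog are hypothetical names for the decoded model outputs:

```python
from collections import Counter, defaultdict

def decode_results(records):
    """records: one dict per analyzed fault code, with hypothetical keys
    'system_id', 'part' and 'parent_catalog'."""
    by_system = defaultdict(list)
    for r in records:                       # a) aggregate by reported system ID
        by_system[r["system_id"]].append(r)

    display = {}
    for system_id, items in by_system.items():
        freq = Counter(i["part"] for i in items)
        catalog = {i["part"]: i["parent_catalog"] for i in items}
        # c) sort the parts of one system by frequency, highest first,
        # b) pushing each part together with its parent catalog.
        display[system_id] = [(p, catalog[p]) for p, _ in freq.most_common()]
    return display
```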
In the field of automobile fault codes, this scheme uses BERT model inference to analyze, from the automobile fault codes, which automobile parts need inspection and the corresponding solution. At present, the fault cause and repair method are learned by consulting standard codes (SAE/ISO) or vehicle-manufacturer-controlled codes, or professional repair personnel manually label the automobile parts to be checked. This scheme instead uses the data accumulated by those traditional methods for deep learning training in the NLP field, replacing manual labeling or checking parts against a manual, with high efficiency and considerable accuracy. On the current self-made challenge validation set (data completely unseen by the model), validation accuracy reaches 86%; on the simulation set (the experiment uses data from the 7 days after the training date), accuracy exceeds 95%; accuracy on data visible to the model (trained on) approaches 100%. Accuracy will improve further as data accumulates and the model's generalization ability grows.
This technical scheme recommends the automobile parts that need inspection according to the automobile fault code, so that non-professional repair personnel can also locate the faulty position of the car or the specific automobile parts that need inspection. This makes it convenient for customers to locate faulty parts, for dealers to appraise and acquire vehicles, and for non-dealer customers who want to judge the fault location or maintenance cost themselves; in all such scenarios it provides great convenience.
All possible combinations of the technical features in the above embodiments may not be described for the sake of brevity, but should be considered as being within the scope of the present disclosure as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present invention, and the description thereof is specific and detailed, but not to be understood as limiting the scope of the invention. It should be noted that, for those skilled in the art, various changes, substitutions and alterations can be made without departing from the spirit and scope of the invention. Therefore, the protection scope of the present patent shall be subject to the claims.

Claims (10)

1. A method for analyzing automobile fault codes is characterized by comprising the following steps:
s10: acquiring data of popular records of fault codes, and cleaning the data to make a standardized label training set;
s20: importing a label training set for model training, wherein: adopting BERT as a training model, adopting a cross entropy function or a focus loss function as a loss function, adopting an Adam optimizer to calculate and update network parameters, and carrying out distributed training through a Pythrch frame;
s30: and inputting the automobile fault code into the trained model, and analyzing to obtain the automobile parts needing to be overhauled.
2. The method according to claim 1, wherein step S10 comprises:
s11: acquiring popular record data of fault codes;
s12: cleaning data, and removing abnormal values in popular records of fault codes;
s13: making a standard training set label containing characteristics, wherein the characteristics comprise brands, vehicle types, system numbers and fault descriptions;
s14: cleaning abnormal data in the labels of the training set;
s15: and (3) extracting a verification set by layered sampling, and ensuring that each label of the verification set has at least more than one piece of test data, and all data of the verification set is different from the data of the training set.
3. The method according to claim 1, wherein the step S20 of building the model comprises:
building a mapping dictionary between labels and model input indices;
stacking 12 layers of the BERT model, and normalizing with a softmax (normalized exponential) function to obtain the weights over Value, thereby achieving the attention effect;
raising or reducing the dimension to match the number of labels, using a fully connected layer whose neuron count matches the label count to perform the dimension conversion;
adding a Dropout (random inactivation) layer to prevent overfitting of the model.
4. The method according to claim 1, wherein in step S20, the focal loss function is adopted as the loss function.
5. The method according to claim 1, wherein when the fault code is analyzed in step S30: fault codes are aggregated according to the system ID reported by the automobile; the parent catalog of each inferred automobile part is matched and pushed along with it; and the parts of the same system are displayed in order of frequency, with higher frequencies ranked first.
6. The method according to claim 1, wherein in step S20, the learning rate decay function adopts a warm-up learning rate scheme: training starts with a small initial learning rate that is gradually increased until a preset larger learning rate is reached.
7. The method of claim 1, wherein in step S20, NVIDIA's Triton inference service is used and the trained PyTorch model is converted to TensorRT for use.
8. An electronic device, characterized by comprising a computing module for performing the steps of the automobile fault code analysis method according to any one of claims 1 to 7.
9. A computer readable storage medium having computer instructions stored thereon, wherein the computer instructions are executed to perform the steps of the car fault code analysis method according to any one of claims 1 to 7.
10. An electronic device comprising a memory and a processor, wherein the memory stores computer instructions, and the processor executes the steps of the automobile fault code analysis method according to any one of claims 1 to 7 when the computer instructions are executed.
CN202211429024.XA 2022-11-15 2022-11-15 Automobile fault code analysis method, storage medium and electronic equipment Pending CN115687330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211429024.XA CN115687330A (en) 2022-11-15 2022-11-15 Automobile fault code analysis method, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211429024.XA CN115687330A (en) 2022-11-15 2022-11-15 Automobile fault code analysis method, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN115687330A 2023-02-03

Family

ID=85051297

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211429024.XA Pending CN115687330A (en) 2022-11-15 2022-11-15 Automobile fault code analysis method, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN115687330A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117522372A (en) * 2023-10-17 2024-02-06 深圳市明睿数据科技有限公司 Deep learning-based maintenance suggestion generation method and system for automobile fault model


Legal Events

Date Code Title Description
PB01 Publication