CN113239975A - Target detection method and device based on neural network - Google Patents

Target detection method and device based on neural network Download PDF

Info

Publication number
CN113239975A
Authority
CN
China
Prior art keywords
model
loss
target
layer
loss prediction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110429697.4A
Other languages
Chinese (zh)
Other versions
CN113239975B (en)
Inventor
付新意
方雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baiyin Power Supply Company State Grid Gansu Electric Power Co
Original Assignee
Luoyang Qingniao Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luoyang Qingniao Network Technology Co ltd
Priority to CN202110429697.4A
Publication of CN113239975A
Application granted
Publication of CN113239975B
Active legal status (current)
Anticipated expiration legal status

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Abstract

The invention relates to a target detection method and device based on a neural network. The method comprises: obtaining an initial training sample set; obtaining a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model; screening target samples from the initial samples according to the first loss prediction results, the target samples forming a target training sample set; labeling the target training sample set to obtain labeled data; inputting the training sample set and the labeled data into a preset neural network for training to obtain a target detection model; and carrying out target detection according to the target detection model. Compared with manual acquisition, the target detection method provided by the invention saves time, requires no manual effort, and reduces the number of poor-quality or irrelevant samples, so that the detection performance of the network model obtained by subsequent training is improved and the detection result is ultimately improved.

Description

Target detection method and device based on neural network
Technical Field
The invention relates to a target detection method and device based on a neural network.
Background
At present, data processing methods based on neural networks are increasingly widely applied. The mainstream neural networks include deep neural networks (DNN), convolutional neural networks (CNN), recurrent neural networks (RNN), and attention-based Transformer models. A mainstream neural-network-based data processing method is neural-network-based target detection; the target differs across application scenarios, so the method may be a defect detection method, a fault detection method, and the like. The first step of a neural-network-based target detection method is to acquire a training sample set, which comprises a plurality of sample data, such as sample images. However, the training sample set is currently acquired manually, which is time-consuming and labor-intensive, and sample data of poor quality or irrelevant sample data may be included, which affects the detection performance of the network model obtained by subsequent training and ultimately degrades the detection result.
Disclosure of Invention
The invention provides a target detection method and device based on a neural network, to solve the technical problem in existing neural-network-based target detection methods that the detection result is degraded because the detection performance of the trained network model is poor.
The invention adopts the following technical scheme:
a target detection method based on a neural network comprises the following steps:
acquiring an initial training sample set, wherein the initial training sample set comprises at least two initial samples;
obtaining a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model, and screening target samples from each initial sample according to the first loss prediction result corresponding to each initial sample, wherein each target sample forms a target training sample set;
labeling the target training sample set to obtain labeled data;
inputting the training sample set and the labeled data into a preset neural network for training to obtain a target detection model;
and carrying out target detection according to the target detection model.
Preferably, the obtaining of the first loss prediction result corresponding to each initial sample based on the preset classification model and the loss prediction model specifically includes:
obtaining a first layer feature vector of an initial sample according to the convolution layer in the classification model;
and obtaining a first loss prediction result of the first layer feature vector according to the loss prediction model.
Preferably, the classification model includes at least two convolutional layers, each convolutional layer outputting a first layer feature vector;
the loss prediction model comprises at least two loss prediction submodels and a classifier, wherein each loss prediction submodel corresponds to each convolution layer one by one, and the input of each loss prediction submodel is a first layer of feature vector output by the corresponding convolution layer;
correspondingly, the obtaining of the first loss prediction result of the first-layer feature vector according to the loss prediction model specifically includes:
for any first-layer feature vector, inputting the first-layer feature vector into a loss predictor model corresponding to the first-layer feature vector to obtain a first vector output by the loss predictor model aiming at the first-layer feature vector;
obtaining second vectors according to the first vectors;
and obtaining the first loss prediction result according to the second vector and the classifier.
Preferably, the first loss prediction result corresponding to each initial sample comprises a prediction loss value corresponding to each initial sample;
the method comprises the following steps of screening out a target sample from each initial sample according to a first loss prediction result corresponding to each initial sample, specifically:
and comparing the predicted loss value corresponding to each initial sample with a preset loss threshold value, and acquiring the initial sample corresponding to the predicted loss value which is greater than or equal to the preset loss threshold value to obtain the target sample.
Preferably, the obtaining process of the classification model and the loss prediction model comprises the following steps:
obtaining a model training sample set, wherein the model training sample set comprises at least two model training samples and labels corresponding to the model training samples;
inputting the model training sample into a classification model;
obtaining a classification prediction result of the model training sample through the classification model, and obtaining a second layer of feature vectors output by the convolution layer in the classification model aiming at the model training sample;
obtaining a second loss prediction result of the second layer feature vector through a loss prediction model;
calculating a loss value according to the classification prediction result and the second loss prediction result respectively corresponding to each model training sample based on a preset loss function;
and performing iterative training according to the loss value obtained each time until the training is finished.
More preferably, the preset neural network comprises an encoder and a decoder; the input of each encoder layer is set as a fusion of three parts: the output of the previous encoder layer, the output of the current encoder layer and the output of the next encoder layer; and skip connections are provided between the encoder and the decoder.
Preferably, the labeling the target training sample set to obtain labeled data includes:
and labeling the target training sample set through a Labelme tool to obtain the labeled data.
An object detection device based on a neural network comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the steps of the object detection method based on the neural network.
The invention has the beneficial effects that: an initial training sample set comprising at least two initial samples is obtained first. Instead of directly labeling the initial training sample set and training on it, a first loss prediction result corresponding to each initial sample is obtained based on a preset classification model and a loss prediction model, and target samples are then screened from the initial samples according to the first loss prediction results; the target samples form a target training sample set, i.e., the samples meeting the requirements are screened out of the initial samples. The target training sample set is labeled to obtain labeled data, the training sample set and the labeled data are input into a preset neural network for training to obtain a target detection model, and target detection is finally carried out according to the target detection model. Therefore, the training sample set in this neural-network-based target detection method is acquired automatically, which saves time compared with manual acquisition and requires no manual effort. In addition, because the target samples are screened from the initial samples by the preset classification model and the loss prediction model, the number of poor-quality or irrelevant samples is reduced, the detection performance of the network model obtained by subsequent training is improved, and the detection result is ultimately improved.
Drawings
Fig. 1 is a schematic overall flow chart of a target detection method based on a neural network provided by the present invention.
Detailed Description
The present embodiment provides a target detection method based on a neural network, as shown in fig. 1, including the following steps:
step 1: obtaining an initial training sample set, the initial training sample set comprising at least two initial samples:
An initial training sample set is obtained. The initial training sample set is a training sample set that has not yet been screened and comprises at least two initial samples; it should be understood that the initial training sample set contains a sufficient number of initial samples to make screening meaningful.
The initial samples in the initial training sample set are determined by the specific application scenario of the target detection method, such as: if the target detection method is used for defect detection, the initial samples in the initial training sample set may include sample images of defective portions.
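As a simple illustration of assembling such an initial (unscreened) sample set automatically, the sketch below collects candidate image paths from a directory; the directory name and file extensions are assumptions, not values from the patent.

```python
from pathlib import Path

def build_initial_sample_set(image_dir: str, extensions=(".jpg", ".png")):
    """Collect candidate sample image paths into an unscreened initial training sample set."""
    return [p for p in sorted(Path(image_dir).rglob("*")) if p.suffix.lower() in extensions]

initial_samples = build_initial_sample_set("data/candidate_images")  # hypothetical directory
```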
Step 2: obtaining a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model, and screening target samples from each initial sample according to the first loss prediction result corresponding to each initial sample, wherein each target sample forms a target training sample set:
the step 2 is divided into two substeps, which are respectively: acquiring a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model; and screening target samples from the initial samples according to the first loss prediction result corresponding to each initial sample, wherein each target sample forms a target training sample set. Wherein:
step 2-1: obtaining a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model:
the specific type of classification model is set by actual requirements, such as a linear support vector machine model. The loss prediction model is used for calculating a loss value according to the sample, the specific type of the loss prediction model is set according to actual needs, for example, the loss prediction model may include a pooling layer, a fully-connected layer and a nonlinear layer, and the number and specific structure of each layer are not limited.
The classification model may include only one convolutional layer; in this embodiment, however, the classification model includes at least two convolutional layers, each of which outputs a first-layer feature vector. The first-layer feature vectors output by the successive convolution layers can be regarded as features whose extraction depth gradually increases. The loss prediction model comprises at least two loss prediction submodels and a classifier; each loss prediction submodel corresponds one-to-one to a convolution layer, and the input of each loss prediction submodel is the first-layer feature vector output by the corresponding convolution layer.
For any initial sample, the first-layer feature vectors of the initial sample are obtained from the convolution layers in the classification model, and the first loss prediction results of the first-layer feature vectors are then obtained from the loss prediction model, giving the first loss prediction result corresponding to that initial sample. The first loss prediction result corresponding to each initial sample is thus obtained. In this embodiment, first-layer feature vectors carrying feature information of different depths are combined to obtain the first loss prediction result, which avoids the one-sidedness of relying on a single feature and improves the accuracy of the loss prediction.
Since the loss prediction model includes at least two loss prediction submodels, a first loss prediction result of the first-layer feature vector is obtained according to the loss prediction model, specifically:
For any initial sample, there are as many first-layer feature vectors as there are convolutional layers. Each first-layer feature vector is input into the loss prediction submodel corresponding to it, giving the first vector output by that submodel for the first-layer feature vector.
Then, a second vector is obtained from the first vectors: the first vectors can be concatenated to obtain the second vector, or the elements at the same position in each first vector can be averaged to obtain the second vector.
And finally, obtaining a first loss prediction result corresponding to the initial sample according to the obtained second vector and the classifier.
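The description above maps naturally onto a small neural-network module. The following is a minimal PyTorch sketch, not the patented implementation itself: it assumes each loss prediction submodel is a pooling layer followed by a fully-connected layer and a nonlinearity, uses the concatenation variant to form the second vector, and treats all channel counts and dimensions as illustrative.

```python
import torch
import torch.nn as nn

class LossPredictionSubmodel(nn.Module):
    """Pooling layer + fully-connected layer + nonlinear layer, as described above."""
    def __init__(self, in_channels: int, hidden_dim: int = 128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # pooling layer
        self.fc = nn.Linear(in_channels, hidden_dim)  # fully-connected layer
        self.act = nn.ReLU()                          # nonlinear layer

    def forward(self, feature_map: torch.Tensor) -> torch.Tensor:
        v = self.pool(feature_map).flatten(1)         # first-layer feature map -> vector
        return self.act(self.fc(v))                   # "first vector"

class LossPredictionModel(nn.Module):
    """One submodel per convolution layer, plus a classifier on the second vector."""
    def __init__(self, channels_per_layer=(64, 128, 256), hidden_dim: int = 128):
        super().__init__()
        self.submodels = nn.ModuleList(
            [LossPredictionSubmodel(c, hidden_dim) for c in channels_per_layer]
        )
        # maps the second vector to a predicted loss value for the sample
        self.classifier = nn.Linear(hidden_dim * len(channels_per_layer), 1)

    def forward(self, feature_maps) -> torch.Tensor:
        first_vectors = [m(f) for m, f in zip(self.submodels, feature_maps)]
        second_vector = torch.cat(first_vectors, dim=1)    # concatenation variant
        return self.classifier(second_vector).squeeze(1)   # first loss prediction result
```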
Step 2-2: screening target samples from the initial samples according to the first loss prediction result corresponding to each initial sample, wherein each target sample forms a target training sample set:
the first loss prediction result corresponding to each initial sample comprises a prediction loss value corresponding to each initial sample. Then, according to the first loss prediction result corresponding to each initial sample, screening out a target sample from each initial sample specifically includes:
and presetting a loss threshold value, comparing the predicted loss value corresponding to each initial sample with the preset loss threshold value, and acquiring initial samples corresponding to the predicted loss value which is greater than or equal to the preset loss threshold value, wherein the initial samples are target samples. Each target sample constitutes a set of target training samples.
The classification model and the loss prediction model may be constructed in advance and may be used directly, and this embodiment provides a specific acquisition process of the classification model and the loss prediction model:
and obtaining a model training sample set, wherein the model training sample set is used for training the classification model and the loss prediction model. The model training sample set comprises at least two model training samples (the specific number is set according to actual needs), and labels corresponding to the model training samples. Wherein the labels are used for representing the classes of the corresponding model training samples.
For any model training sample, the model training sample is input into the classification model; a classification prediction result of the model training sample is obtained through the classification model, and the second-layer feature vectors output by the convolution layers in the classification model for that sample are obtained.
A second loss prediction result of the second-layer feature vectors is then obtained through the loss prediction model. In this way, the classification prediction result and the second loss prediction result corresponding to each model training sample are obtained.
And calculating a loss value according to the classification prediction result and the second loss prediction result respectively corresponding to each model training sample based on a preset loss function. It will be appreciated that the loss value is determined by the particular type of pre-set loss function, which is typically a cross-entropy loss function.
A loss value is obtained in each round of iterative training, and iterative training proceeds according to the condition met by the loss value obtained each time, until training is finished. In this embodiment, when the loss value is greater than a preset loss value, indicating that the gap between the training output and the real data is still large, training continues so that the training output gradually approaches the real data. Training stops when the training output matches the real data or the number of iterations reaches a preset maximum.
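As an illustration of this joint training, the sketch below trains a classification network and a loss prediction model together. The patent only states that a loss value is computed from the classification prediction result and the second loss prediction result with a preset (typically cross-entropy) loss function; supervising the loss prediction branch with an MSE term against the per-sample classification loss, the optimizer, the weighting factor and the stopping criteria are all assumptions, and `classifier_cnn` is assumed to return both its logits and its per-layer feature maps.

```python
import torch
import torch.nn as nn

def train_models(classifier_cnn, loss_pred_model, loader,
                 epochs=20, max_loss=0.05, lam=1.0):
    params = list(classifier_cnn.parameters()) + list(loss_pred_model.parameters())
    optimizer = torch.optim.Adam(params, lr=1e-3)
    ce = nn.CrossEntropyLoss(reduction="none")            # per-sample classification loss

    for _ in range(epochs):                               # preset maximum number of iterations
        for images, labels in loader:
            logits, feature_maps = classifier_cnn(images)      # prediction + second-layer features
            per_sample_ce = ce(logits, labels)
            predicted_loss = loss_pred_model(feature_maps)     # second loss prediction result
            loss = per_sample_ce.mean() + lam * nn.functional.mse_loss(
                predicted_loss, per_sample_ce.detach()
            )
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        if loss.item() <= max_loss:                       # stop once the loss is small enough
            break
```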
Step 3: labeling the target training sample set to obtain labeled data:
In order to better realize target detection, the target training sample set is labeled to obtain labeled data. It should be understood that the object to be labeled is determined by the actual application scenario; for example, the defect portion in each sample image of the target training sample set is labeled, and the obtained labeled data is the corresponding labeling result. In this embodiment, the target training sample set is labeled with the Labelme tool to obtain the labeled data.
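As a concrete illustration of consuming such labeled data, the sketch below reads a Labelme JSON annotation and rasterizes its polygon shapes into a binary mask; the file path and the label name "defect" are illustrative, and only polygon shapes are handled.

```python
import json
import numpy as np
from PIL import Image, ImageDraw

def labelme_json_to_mask(json_path: str, target_label: str = "defect") -> np.ndarray:
    """Rasterize the polygons labeled `target_label` in a Labelme JSON file into a 0/1 mask."""
    with open(json_path, "r", encoding="utf-8") as f:
        ann = json.load(f)
    mask = Image.new("L", (ann["imageWidth"], ann["imageHeight"]), 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        if shape["label"] == target_label and shape["shape_type"] == "polygon":
            draw.polygon([tuple(p) for p in shape["points"]], fill=1)
    return np.array(mask, dtype=np.uint8)
```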
Step 4: inputting the training sample set and the labeled data into a preset neural network for training to obtain a target detection model:
and after the training sample set and the labeled data are obtained, inputting the training sample set and the labeled data into a preset neural network for training to obtain a target detection model. The type of the preset neural network is not limited, for example: deep Neural Networks (DNN), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and attention-based transform models. The specific training process for different neural networks is different.
In this embodiment, the preset neural network includes an encoder and a decoder, and the input of each encoder layer is set as a fusion of three parts: the output of the previous encoder layer, the output of the current encoder layer, and the output of the next encoder layer. This fusion improves the segmentation performance of the network. In addition, skip connections are used between the encoder and the decoder, i.e., skip connection structures of different levels are added between the segmentation encoder and the segmentation decoder, which strengthens image context information, improves the feature extraction capability, and enhances the network's ability to segment small targets.
One specific training process is given below: the training sample set is input into the encoder, which extracts features of the defect portions at different scales through several convolutional layers and pooling layers; the encoder passes these features to the decoder, which performs up-sampling through convolutional layers and up-sampling layers and outputs a semantic segmentation map of the same size as the input. The obtained semantic segmentation map and the labeled data are then compared through a cross-entropy loss function over repeated training iterations, and the model parameters are optimized so that the training result gradually approaches the ground truth (i.e., the labeled data). The network parameters are saved after training is finished.
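A hedged sketch of this segmentation training loop is given below; `seg_net` stands for the encoder-decoder described above, and the optimizer, learning rate, epoch count and checkpoint path are assumptions rather than values from the patent.

```python
import torch
import torch.nn as nn

def train_segmentation(seg_net, loader, epochs=50, lr=1e-3, ckpt="target_detector.pt"):
    optimizer = torch.optim.Adam(seg_net.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()             # cross-entropy between segmentation map and labels

    for _ in range(epochs):
        for images, masks in loader:              # masks: (N, H, W) class indices from the labeling step
            logits = seg_net(images)              # (N, num_classes, H, W) semantic segmentation map
            loss = criterion(logits, masks)       # compare against the labeled data
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    torch.save(seg_net.state_dict(), ckpt)        # save the network parameters after training
```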
Step 5: carrying out target detection according to the target detection model:
and after the target detection model is obtained, carrying out target detection according to the target detection model. Such as: and inputting the image to be detected into the trained target detection model, and segmenting the image to be detected through the target detection model to realize target detection in the image to be detected. Such as: and inputting the image to be detected into the trained model, and segmenting the image to be detected through the model to realize accurate detection of the defects in the image to be detected.
The embodiment also provides an object detection device based on a neural network, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor implements the steps of the object detection method based on the neural network when executing the computer program. Since the target detection method based on the neural network has been described in detail above, the details are not repeated here.

Claims (8)

1. A target detection method based on a neural network is characterized by comprising the following steps:
acquiring an initial training sample set, wherein the initial training sample set comprises at least two initial samples;
obtaining a first loss prediction result corresponding to each initial sample based on a preset classification model and a loss prediction model, and screening target samples from each initial sample according to the first loss prediction result corresponding to each initial sample, wherein each target sample forms a target training sample set;
labeling the target training sample set to obtain labeled data;
inputting the training sample set and the labeled data into a preset neural network for training to obtain a target detection model;
and carrying out target detection according to the target detection model.
2. The target detection method based on the neural network according to claim 1, wherein the obtaining of the first loss prediction result corresponding to each initial sample based on the preset classification model and the loss prediction model specifically includes:
obtaining a first layer feature vector of an initial sample according to the convolution layer in the classification model;
and obtaining a first loss prediction result of the first layer feature vector according to the loss prediction model.
3. The neural network-based object detection method of claim 2, wherein the classification model includes at least two convolutional layers, each convolutional layer outputting a first layer feature vector;
the loss prediction model comprises at least two loss prediction submodels and a classifier, wherein each loss prediction submodel corresponds to each convolution layer one by one, and the input of each loss prediction submodel is a first layer of feature vector output by the corresponding convolution layer;
correspondingly, the obtaining of the first loss prediction result of the first-layer feature vector according to the loss prediction model specifically includes:
for any first-layer feature vector, inputting the first-layer feature vector into a loss predictor model corresponding to the first-layer feature vector to obtain a first vector output by the loss predictor model aiming at the first-layer feature vector;
obtaining second vectors according to the first vectors;
and obtaining the first loss prediction result according to the second vector and the classifier.
4. The method of claim 2, wherein the first loss prediction result corresponding to each initial sample comprises a predicted loss value corresponding to each initial sample;
the method comprises the following steps of screening out a target sample from each initial sample according to a first loss prediction result corresponding to each initial sample, specifically:
and comparing the predicted loss value corresponding to each initial sample with a preset loss threshold value, and acquiring the initial sample corresponding to the predicted loss value which is greater than or equal to the preset loss threshold value to obtain the target sample.
5. The method for detecting the target based on the neural network as claimed in claim 2, wherein the obtaining process of the classification model and the loss prediction model comprises:
obtaining a model training sample set, wherein the model training sample set comprises at least two model training samples and labels corresponding to the model training samples;
inputting the model training sample into a classification model;
obtaining a classification prediction result of the model training sample through the classification model, and obtaining a second layer of feature vectors output by the convolution layer in the classification model aiming at the model training sample;
obtaining a second loss prediction result of the second layer feature vector through a loss prediction model;
calculating a loss value according to the classification prediction result and the second loss prediction result respectively corresponding to each model training sample based on a preset loss function;
and performing iterative training according to the loss value obtained each time until the training is finished.
6. The neural network-based object detection method of claim 1, wherein the preset neural network comprises an encoder and a decoder; the input of each encoder layer is set as a fusion of three parts: the output of the previous encoder layer, the output of the current encoder layer and the output of the next encoder layer; and skip connections are provided between the encoder and the decoder.
7. The method for detecting the target based on the neural network according to claim 1, wherein the labeling the target training sample set to obtain labeled data comprises:
and labeling the target training sample set through a Labelme tool to obtain the labeled data.
8. An apparatus for neural network based object detection, comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the neural network based object detection method as claimed in any one of claims 1-7 when executing the computer program.
CN202110429697.4A 2021-04-21 2021-04-21 Target detection method and device based on neural network Active CN113239975B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110429697.4A CN113239975B (en) 2021-04-21 2021-04-21 Target detection method and device based on neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110429697.4A CN113239975B (en) 2021-04-21 2021-04-21 Target detection method and device based on neural network

Publications (2)

Publication Number Publication Date
CN113239975A (en) 2021-08-10
CN113239975B (en) 2022-12-20

Family

ID=77128756

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110429697.4A Active CN113239975B (en) 2021-04-21 2021-04-21 Target detection method and device based on neural network

Country Status (1)

Country Link
CN (1) CN113239975B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113542612A (en) * 2021-09-17 2021-10-22 深圳思谋信息科技有限公司 Lens anti-shake method and device, computer equipment and storage medium
CN113792798A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Model training method and device based on multi-source data and computer equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108121986A (en) * 2017-12-29 2018-06-05 深圳云天励飞技术有限公司 Object detection method and device, computer installation and computer readable storage medium
CN109902798A (en) * 2018-05-31 2019-06-18 华为技术有限公司 The training method and device of deep neural network
US20190251333A1 (en) * 2017-06-02 2019-08-15 Tencent Technology (Shenzhen) Company Limited Face detection training method and apparatus, and electronic device
CN110689048A (en) * 2019-09-02 2020-01-14 阿里巴巴集团控股有限公司 Training method and device of neural network model for sample classification
CN111814850A (en) * 2020-06-22 2020-10-23 浙江大华技术股份有限公司 Defect detection model training method, defect detection method and related device
CN112232426A (en) * 2020-10-21 2021-01-15 平安国际智慧城市科技股份有限公司 Training method, device and equipment of target detection model and readable storage medium
CN112561080A (en) * 2020-12-18 2021-03-26 Oppo(重庆)智能科技有限公司 Sample screening method, sample screening device and terminal equipment

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190251333A1 (en) * 2017-06-02 2019-08-15 Tencent Technology (Shenzhen) Company Limited Face detection training method and apparatus, and electronic device
CN108121986A (en) * 2017-12-29 2018-06-05 深圳云天励飞技术有限公司 Object detection method and device, computer installation and computer readable storage medium
CN109902798A (en) * 2018-05-31 2019-06-18 华为技术有限公司 The training method and device of deep neural network
CN110689048A (en) * 2019-09-02 2020-01-14 阿里巴巴集团控股有限公司 Training method and device of neural network model for sample classification
CN111814850A (en) * 2020-06-22 2020-10-23 浙江大华技术股份有限公司 Defect detection model training method, defect detection method and related device
CN112232426A (en) * 2020-10-21 2021-01-15 平安国际智慧城市科技股份有限公司 Training method, device and equipment of target detection model and readable storage medium
CN112561080A (en) * 2020-12-18 2021-03-26 Oppo(重庆)智能科技有限公司 Sample screening method, sample screening device and terminal equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
XIAOMENG XIN et al.: "Deep Self-Paced Learning for Semi-Supervised Person Re-Identification Using Multi-View Self-Paced Clustering", 2019 IEEE International Conference on Image Processing (ICIP) *
ZHAO Zhenbing et al.: "Insulator Defect Detection Method Based on Dynamic Focal Loss Function and Sample Balancing", Electric Power Automation Equipment *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113792798A (en) * 2021-09-16 2021-12-14 平安科技(深圳)有限公司 Model training method and device based on multi-source data and computer equipment
CN113542612A (en) * 2021-09-17 2021-10-22 深圳思谋信息科技有限公司 Lens anti-shake method and device, computer equipment and storage medium
CN113542612B (en) * 2021-09-17 2021-11-23 深圳思谋信息科技有限公司 Lens anti-shake method and device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN113239975B (en) 2022-12-20

Similar Documents

Publication Publication Date Title
US20180181867A1 (en) Artificial neural network class-based pruning
CN111160469B (en) Active learning method of target detection system
CN111950453A (en) Optional-shape text recognition method based on selective attention mechanism
CN113239975B (en) Target detection method and device based on neural network
CN110135505B (en) Image classification method and device, computer equipment and computer readable storage medium
CN110892409B (en) Method and device for analyzing images
CN112766110A (en) Training method of object defect recognition model, object defect recognition method and device
CN110610123A (en) Multi-target vehicle detection method and device, electronic equipment and storage medium
CN110969600A (en) Product defect detection method and device, electronic equipment and storage medium
CN111178438A (en) ResNet 101-based weather type identification method
CN113628211A (en) Parameter prediction recommendation method, device and computer readable storage medium
CN114926441A (en) Defect detection method and system for machining and molding injection molding part
CN114782410A (en) Insulator defect detection method and system based on lightweight model
CN111901594A (en) Visual analysis task-oriented image coding method, electronic device and medium
CN112396594B (en) Method and device for acquiring change detection model, change detection method, computer equipment and readable storage medium
CN113536896B (en) Insulator defect detection method and device based on improved Faster RCNN and storage medium
CN114494168A (en) Model determination, image recognition and industrial quality inspection method, equipment and storage medium
CN111985549B (en) Deep learning method for automatic positioning and identification of components for given rigid body target
CN110751061B (en) SAR image recognition method, device, equipment and storage medium based on SAR network
CN115861305A (en) Flexible circuit board detection method and device, computer equipment and storage medium
CN112348011B (en) Vehicle damage assessment method and device and storage medium
CN116977239A (en) Defect detection method, device, computer equipment and storage medium
CN113920311A (en) Remote sensing image segmentation method and system based on edge auxiliary information
CN112488173B (en) Model training method and system based on image augmentation and storage medium
CN116958954B (en) License plate recognition method, device and storage medium based on key points and bypass correction

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20221201

Address after: 730900 6 Renmin Road, Baiyin District, Baiyin City, Gansu Province

Applicant after: BAIYIN POWER SUPPLY COMPANY, STATE GRID GANSU ELECTRIC POWER Co.

Address before: 471000 1-1404, building 5, Washington, Junlin Plaza, No. 429, Zhongzhou Middle Road, Xigong District, Luoyang City, Henan Province

Applicant before: Luoyang Qingniao Network Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant