CN112907607A - Deep learning, target detection and semantic segmentation method based on differential attention - Google Patents


Info

Publication number
CN112907607A
CN112907607A
Authority
CN
China
Prior art keywords
differential
attention
tensor
model
convolution
Prior art date
Legal status
Granted
Application number
CN202110277583.2A
Other languages
Chinese (zh)
Other versions
CN112907607B (en
Inventor
李学生
李晨
牟春
Current Assignee
Delu Power Technology Chengdu Co Ltd
Original Assignee
Delu Power Technology Chengdu Co Ltd
Priority date
Filing date
Publication date
Application filed by Delu Power Technology Chengdu Co Ltd filed Critical Delu Power Technology Chengdu Co Ltd
Priority to CN202110277583.2A priority Critical patent/CN112907607B/en
Publication of CN112907607A publication Critical patent/CN112907607A/en
Application granted granted Critical
Publication of CN112907607B publication Critical patent/CN112907607B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a deep learning, target detection and semantic segmentation method based on differential attention, comprising the following steps. Step 1, data set processing: convert the data into the data structure required by the deep learning neural network. Step 2: 2.1, build a traditional convolutional neural network; 2.2, model improvement: add a differential attention module after convolution, the module attending separately to changes along the width direction and the height direction of the picture. Step 3, train the model to obtain a convolutional neural network based on the differential attention mechanism. The invention overcomes the defect that existing attention algorithms use only the current features: by applying a difference algorithm to the attention mechanism, the neural network becomes more sensitive to the change features, texture features and edge features of the image or feature map, which benefits the expressive capability of the neural network model in fields such as target detection, image segmentation and matting. Meanwhile, compared with other attention algorithms, the method adds only a small amount of computation and is easy to implement.

Description

Deep learning, target detection and semantic segmentation method based on differential attention
Technical Field
The invention relates to the technical field of computer vision, in particular to a deep learning, target detection and semantic segmentation method based on differential attention.
Background
The visual attention mechanism is a brain signal processing mechanism unique to human vision. Human vision obtains the target area that deserves close attention, known as the focus of attention, by rapidly scanning the global image, and then devotes more attention to this area.
The essence of the attention mechanism is to focus on the information of interest and suppress useless information; the result is usually presented as a probability map or a probability feature vector. In principle, attention mechanisms fall into three types: spatial attention models, channel attention models, and mixed spatial-channel attention models.
The essence of spatial attention is to generate a multiplicative weight for each element through a series of operations or transformations on the feature map, thereby locating the position of the target in space. Well-known spatial attention algorithms include STN.
In the channel attention mechanism, a bypass branch splits off after a normal convolution. A Squeeze operation first compresses the features along the spatial dimensions (for example, by global average pooling of the feature maps), so that each two-dimensional feature map becomes a single real number, which is equivalent to a pooling operation with a global receptive field and does not change the number of feature channels. After the weight of each feature channel is obtained, the weight is applied to the corresponding original feature channel, so the importance of different channels can be learned for a specific task.
The attention mechanism of spatial and channel mixing, such as CBAM, mainly adopts different ways to fuse the weights generated by the spatial attention algorithm and the weights generated by the channel attention algorithm.
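The channel attention described above can be sketched in a few lines of numpy. This is a minimal Squeeze-and-Excitation-style illustration, not the patent's own module: the reduction ratio of 2, the random weights and the two-layer excitation transform are illustrative assumptions.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention on a (c, h, w) feature map.

    Squeeze: global average pooling turns each 2-D feature map into one
    real number without changing the channel count. Excitation: a small
    two-layer transform produces a [0, 1] weight per channel, which then
    rescales the corresponding original channel.
    """
    squeezed = x.mean(axis=(1, 2))                   # (c,) one value per channel
    hidden = np.maximum(w1 @ squeezed, 0.0)          # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))   # sigmoid -> (0, 1)
    return x * weights[:, None, None]                # reweight each channel

rng = np.random.default_rng(0)
c, h, w = 4, 8, 8
x = rng.standard_normal((c, h, w))
w1 = rng.standard_normal((c // 2, c))   # reduction ratio 2 (illustrative)
w2 = rng.standard_normal((c, c // 2))
y = channel_attention(x, w1, w2)
print(y.shape)  # (4, 8, 8): shape preserved, channels rescaled
```

Note that each output channel is the corresponding input channel multiplied by a single learned scalar, which is exactly the "global receptive field" pooling plus per-channel weighting described above.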
Currently, the self-attention algorithms most widely applied in computer vision are SEnet, SKnet, CBAM, STN and DCAnet. These attention algorithms learn either the associations between channels or the associations between tensor elements, thereby increasing the long-range or overall correlation of the model at the channel or element level.
Although self-attention mechanisms have shown good performance in many vision tasks, they consider only the current features and capture differential or other change information poorly. Networks applying these attention algorithms may therefore also perform poorly on edge features or other varying features.
Disclosure of Invention
The invention provides a deep learning method based on differential attention in order to solve the technical problems.
The invention is realized by the following technical scheme:
the deep learning method based on the differential attention comprises the following steps:
step 1, data set processing: changing data into a data structure required by a deep learning neural network;
step 2, constructing a deep learning neural network based on a differential attention mechanism, comprising the following steps:
2.1, building a traditional convolution neural network;
2.2 model improvement: adding a differential attention module after convolution, wherein the differential attention module is used for respectively paying attention to the change of the width direction and the height direction of the picture;
and 3, training the model to obtain the convolutional neural network based on the differential attention mechanism.
Wherein the differential attention module acts on the high-dimensional tensor generated by the convolution.
Further, the differential attention module is divided into a first branch and a second branch, the first branch focuses on the change in the width direction, and the second branch focuses on the change in the height direction.
Further, the differential attention module works by the following mechanism:
s1, the tensors are processed simultaneously by two branches:
the first branch comprises the steps of:
A1, performing a differential convolution operation in the w direction on the high-dimensional tensor of dimension c × w × h;
A2, performing a row correlation operation on the tensor generated by A1 to generate an h × h correlation matrix;
A3, performing a flatten operation on the correlation matrix generated by A2 to obtain a one-dimensional vector, and generating a vector of length h × w through a linear transformation;
the second branch comprises the steps of:
B1, performing a differential convolution operation in the h direction on the high-dimensional tensor of dimension c × w × h;
B2, performing a column correlation operation on the tensor generated by B1 to generate a w × w correlation matrix;
B3, performing a flatten operation on the correlation matrix generated by B2 to obtain a one-dimensional vector, and generating a vector of length h × w through a linear transformation;
S2, summing element-wise the vectors finally generated on the two branches to obtain a c × (w × h) tensor; then performing a sigmoid operation on each element so that the value of each element becomes a weight in [0, 1];
S3, performing a reshape operation on the c × (w × h) tensor to obtain a c × h × w tensor, and then performing a bitwise multiplication with the original tensor to complete the attention operation.
Further, A1 is specifically: first, initialize a differential convolution kernel of dimension c × 1 × 2 in the diagonal direction, with the weights of the differential convolution kernel set to [1, -1]; then convolve the tensor to generate a c × w × h tensor.
Further, B1 is specifically: first, initialize a differential convolution kernel of dimension c × 2 × 1 in the diagonal direction, with the weights of the differential convolution kernel set to [1, -1]; then convolve the tensor to generate a c × w × h tensor.
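The [1, -1] kernel of A1 and B1 is a discrete first difference: it responds only where adjacent values change, which is why it highlights edges. A minimal numpy sketch of the width-direction case follows; the zero-padding that keeps the output at c × h × w is an assumption, since the patent does not specify a padding scheme.

```python
import numpy as np

def width_difference(x):
    """Apply a [1, -1] differential kernel along the width of a (c, h, w) tensor.

    Equivalent to convolving each channel with a 1x2 kernel of weights
    [1, -1]; zero-padding on the right keeps the output shape at c x h x w
    (the padding scheme is an assumption, not stated in the patent).
    """
    out = np.zeros_like(x)
    out[:, :, :-1] = x[:, :, :-1] - x[:, :, 1:]
    return out

# A step edge: left half 0, right half 1.
img = np.zeros((1, 4, 6))
img[:, :, 3:] = 1.0
d = width_difference(img)
# The response is nonzero only at the column where the value changes.
print(d[0, 0])  # [ 0.  0. -1.  0.  0.  0.]
```

The B1 kernel of dimension c × 2 × 1 is the same operation applied along the height instead of the width.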
The target detection method based on the differential attention mechanism comprises the following steps:
step 1, making a data set: collecting data containing a target to be detected in different environments, and then labeling the target to be detected in a data set to form a labeling frame;
step 2, data set processing: changing data into a data structure required by a deep learning neural network;
step 3, constructing a target detection model based on a differential attention mechanism, comprising the following steps:
3.1, building a traditional ssd detection model;
3.2 model improvement: adding the differential attention module described above after convolution;
and step 4, training the model to obtain the target detection model based on the differential attention mechanism.
Preferably, the SSD detection model in step 3.1 is the SSD detection network with VGG16 as the backbone.
The semantic segmentation method based on the differential attention mechanism comprises the following steps of:
step 1, making a data set: collecting data containing a target to be segmented under different environments, and then labeling the target to be segmented in a data set to form mask labeling information;
step 2, data set processing: changing data into a data structure required by a deep learning neural network;
step 3, constructing a semantic segmentation model based on a differential attention mechanism, comprising the following steps:
3.1, building a traditional semantic segmentation model;
3.2 model improvement: adding the differential attention module described above after convolution;
and step 4, training the model to obtain a semantic segmentation model based on the differential attention mechanism.
Preferably, the semantic segmentation model in step 3.1 is Unet.
Compared with the prior art, the invention has the following beneficial effects:
the invention overcomes the defect that the existing attention algorithm only utilizes the current characteristics, and applies the differential algorithm to the attention algorithm, so that the neural network is more sensitive to the change characteristics, texture characteristics and edge characteristics of the image or the characteristic diagram, and the expression capability of the neural network model in the fields of target detection, image segmentation, matting and the like is facilitated; meanwhile, compared with other attention algorithms, the method only adds a small amount of operation, and is very beneficial to implementation.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this application, illustrate embodiment(s) of the invention and together with the description serve to explain the principles of the invention.
FIG. 1 is a schematic diagram of a convolution module with the addition of a differential attention module;
FIG. 2 is a schematic diagram of the operation of the differential attention module;
FIG. 3 is a schematic diagram of the SSD detection network with VGG16 as the backbone in embodiment 1;
FIG. 4 is a schematic diagram of the semantic segmentation network based on the Unet network in embodiment 2.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to examples and accompanying drawings, and the exemplary embodiments and descriptions thereof are only used for explaining the present invention and are not meant to limit the present invention.
The differential attention algorithm adopted by the invention is applied in the same way as other attention mechanism algorithms: an attention module is added after convolution to increase the model's attention to edge information and change information, as shown in fig. 1.
The differential attention module mainly acts on the high-dimensional tensor generated by convolution. Adding the module makes the algorithm model more sensitive to variation information in the tensor, so the edge information, texture information and change information of the image are captured more easily, improving the expressive ability of the model in directions such as target detection and image segmentation.
As shown in FIG. 2, the differential attention algorithm of the present invention is divided into two branches: one focuses on changes in the x direction and the other on changes in the y direction, where w is the width of the picture and the x direction refers to the width direction, while h is the height and the y direction refers to the height direction. The specific working mechanism is as follows:
1. First, for the high-dimensional tensor of dimension c × w × h generated by each convolution, initialize (for one branch) a differential convolution kernel of dimension c × 1 × 2 in the diagonal direction, set the weight of the convolution kernel of each channel to [1, -1], and then convolve the tensor to generate a c × w × h tensor;
2. Because the convolution kernel only takes differences in the x direction and lacks correlation in the y direction, a row correlation operation is applied to the tensor generated by the difference operation: each row is inner-producted with every other row, that is, r_i · r_j with i, j ∈ [0, h), generating an h × h correlation matrix.
3. Perform a flatten operation on the h × h row correlation matrix to form a one-dimensional vector, then learn the contribution of each element of the correlation matrix to the global features through a linear function, which increases the receptive field of the model; the linear transformation also changes the length of the vector to h × w.
4. The other branch follows the same steps 1, 2 and 3, except that the differential convolution kernel has dimension c × 2 × 1 with weights initialized to [1, -1], the operation of step 2 computes the correlation of each column and generates a w × w correlation matrix, and a vector of length h × w is then generated as in step 3;
5. Sum the vectors finally generated on the two branches element-wise to obtain a c × (w × h) tensor, then perform a sigmoid operation on each element so that the value of each element becomes a weight in [0, 1].
6. Perform a reshape operation on the c × (w × h) tensor to form a c × h × w tensor, then perform a bitwise multiplication with the original tensor to complete the attention operation.
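The six steps above can be sketched in numpy as follows. This is an illustrative reading, not the patent's exact implementation: sharing the linear weights across channels, computing one correlation matrix per channel, and zero-padding the differences are all assumptions that the text leaves unspecified.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def differential_attention(x, lin_w, lin_h):
    """Sketch of the two-branch differential attention (steps 1-6).

    x: (c, h, w) feature tensor. lin_w: an (h*w, h*h) matrix standing in
    for the learned linear transform of the flattened row-correlation
    matrix; lin_h: (h*w, w*w) for the column branch.
    """
    c, h, w = x.shape
    # Step 1: [1, -1] differential convolution along the width (x direction).
    dx = np.zeros_like(x)
    dx[:, :, :-1] = x[:, :, :-1] - x[:, :, 1:]
    # Step 2: row correlation r_i . r_j per channel -> (c, h, h).
    row_corr = np.einsum('ciw,cjw->cij', dx, dx)
    # Step 3: flatten each h x h matrix and map it to a length h*w vector.
    v_w = row_corr.reshape(c, h * h) @ lin_w.T          # (c, h*w)
    # Step 4: the other branch: differences along the height, then a
    # w x w column-correlation matrix per channel, flatten, linear map.
    dy = np.zeros_like(x)
    dy[:, :-1, :] = x[:, :-1, :] - x[:, 1:, :]
    col_corr = np.einsum('chi,chj->cij', dy, dy)        # (c, w, w)
    v_h = col_corr.reshape(c, w * w) @ lin_h.T          # (c, h*w)
    # Step 5: element-wise sum -> c x (w*h) tensor, sigmoid -> [0, 1] weights.
    weights = sigmoid(v_w + v_h)
    # Step 6: reshape to c x h x w and multiply bitwise with the original.
    return x * weights.reshape(c, h, w)

rng = np.random.default_rng(1)
c, h, w = 3, 5, 7
x = rng.standard_normal((c, h, w))
lin_w = rng.standard_normal((h * w, h * h)) * 0.05
lin_h = rng.standard_normal((h * w, w * w)) * 0.05
y = differential_attention(x, lin_w, lin_h)
print(y.shape)  # (3, 5, 7): the attention keeps the tensor shape
```

Because every attention weight lies in [0, 1], the output is the input attenuated element-wise, exactly as the bitwise multiplication of step 6 requires.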
Based on the above differential attention algorithm, the present invention discloses two embodiments.
Example 1
The embodiment discloses a target detection method based on a differential attention mechanism, which specifically comprises the following steps:
and S1, configuring a software environment required by deep learning.
S2, creating a data set:
collecting data containing a target to be detected in different environments, wherein the data can be from a network or a camera; and then, labeling the target to be detected in the data set to form a labeling frame.
S3, data processing: this step is mainly intended to form a data structure fed into the neural network, comprising the following steps:
S3.1 scaling: scale the image to image data of 512 × 512 size;
S3.2 normalization: subtract the mean from each pixel value and divide by the standard deviation, as in equation (1):
x′ = (x − μ) / σ   (1)
In equation (1), x is the value of each pixel, μ is the mean and σ is the standard deviation of the pixel values.
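The per-pixel normalization of equation (1) can be checked on a tiny example (the pixel values below are chosen purely for illustration):

```python
import numpy as np

# Equation (1): x' = (x - mean) / std, applied to every pixel.
img = np.array([[10.0, 20.0], [30.0, 40.0]])
mean, std = img.mean(), img.std()
normalized = (img - mean) / std
print(normalized.mean())  # ~0.0: zero mean after normalization
print(normalized.std())   # ~1.0: unit standard deviation
```

After this step the data has zero mean and unit standard deviation, which is the usual precondition for stable neural network training.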
S3.3 transforms the image data into a Tensor structure (Tensor) required by the network.
S4, designing a target detection model based on the differential attention mechanism, comprising the following steps:
S4.1 network architecture: this embodiment improves the SSD detection network with VGG16 as the backbone; the network structure is shown in fig. 3, where conv-1 to conv-8 are stacked convolution modules.
S4.2 model improvement: this embodiment improves the convolution modules with the differential attention module described above, changing each ordinary convolution module into the convolution with the differential attention module added, as shown in fig. 1.
S5, model training: and (5) sending the data into the neural network in batches to train the neural network.
S6, model test: and testing the performance of the model after the training is finished.
Example 2
The embodiment discloses a semantic segmentation method based on a differential attention mechanism, which specifically comprises the following steps:
and S1, configuring a software environment required by deep learning.
S2, creating a data set:
collecting data containing the target to be segmented in different environments, wherein the data can be from a network or a camera, and then labeling the target to be segmented in a data set to form mask labeling information.
S3, data processing: this step is mainly intended to form a data structure fed into the neural network, comprising the following steps:
S3.1 scaling: scale the image to image data of 512 × 512 size;
S3.2 normalization: subtract the mean from each pixel value and divide by the standard deviation, which can be realized using equation (1);
s3.3 transforms the image data into a Tensor structure (Tensor) required by the network.
S4, designing a semantic segmentation model based on the differential attention mechanism, comprising the following steps:
S4.1 network architecture: this embodiment uses Unet as the base network for semantic segmentation and improves it; the network structure is shown in fig. 4.
S4.2 model improvement: replace the convolution (i.e. conv3x3 in fig. 4) with the convolution with the differential attention module added (as shown in fig. 1), so as to enhance the network's ability to express edge information.
S5, model training: and (5) sending the data into the neural network in batches to train the neural network.
S6, model test: and testing the performance of the model after the training is finished.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. The deep learning method based on the differential attention is characterized in that: the method comprises the following steps:
step 1, building a traditional convolution neural network;
step 2, model improvement: after convolution, a differential attention module is added, and the differential attention module is used for respectively focusing on the change of the width direction and the height direction of the picture.
2. The differential attention-based deep learning method according to claim 1, characterized in that: the differential attention module acts on the convolution-generated higher-dimensional tensor.
3. The differential attention-based deep learning method according to claim 2, characterized in that: the differential attention module is divided into a first branch that focuses on the change in the width direction and a second branch that focuses on the change in the height direction.
4. The differential attention-based deep learning method according to claim 3, characterized in that: the working mechanism of the differential attention module is as follows:
S1, the tensor is processed simultaneously by two branches:
the first branch comprises the steps of:
A1, performing a differential convolution operation in the w direction on the high-dimensional tensor of dimension c × w × h;
A2, performing a row correlation operation on the tensor generated by A1 to generate an h × h correlation matrix;
A3, performing a flatten operation on the correlation matrix generated by A2 to obtain a one-dimensional vector, and generating a vector of length h × w through a linear transformation;
the second branch comprises the steps of:
B1, performing a differential convolution operation in the h direction on the high-dimensional tensor of dimension c × w × h;
B2, performing a column correlation operation on the tensor generated by B1 to generate a w × w correlation matrix;
B3, performing a flatten operation on the correlation matrix generated by B2 to obtain a one-dimensional vector, and generating a vector of length h × w through a linear transformation;
S2, summing element-wise the vectors finally generated on the two branches to obtain a c × (w × h) tensor; then performing a sigmoid operation on each element so that the value of each element becomes a weight in [0, 1];
S3, performing a reshape operation on the c × (w × h) tensor to obtain a c × h × w tensor, and then performing a bitwise multiplication with the original tensor to complete the attention operation.
5. The differential attention-based deep learning method according to claim 4, characterized in that: A1 is specifically: first initialize a differential convolution kernel of dimension c × 1 × 2 in the diagonal direction, with the weights of the differential convolution kernel set to [1, -1]; then convolve the tensor to generate a c × w × h tensor.
6. The differential attention-based deep learning method according to claim 4 or 5, characterized in that: B1 is specifically: first initialize a differential convolution kernel of dimension c × 2 × 1 in the diagonal direction, with the weights of the differential convolution kernel set to [1, -1]; then convolve the tensor to generate a c × w × h tensor.
7. The target detection method based on the differential attention mechanism is characterized in that: the method comprises the following steps:
step 1, making a data set: collecting data containing a target to be detected in different environments, and then labeling the target to be detected in a data set to form a labeling frame;
step 2, data set processing: changing data into a data structure required by a deep learning neural network;
step 3, constructing a target detection model based on a differential attention mechanism, comprising the following steps:
3.1, building a traditional ssd detection model;
3.2 model improvement: adding a differential attention module as claimed in any one of claims 1-6 after convolution;
and step 4, training the model to obtain the target detection model based on the differential attention mechanism.
8. The differential attention mechanism-based target detection method of claim 7, wherein: the ssd detection model in step 3.1 is an ssd detection network with vgg16 as a backbone.
9. The semantic segmentation method based on the differential attention mechanism is characterized by comprising the following steps of: the method comprises the following steps:
step 1, making a data set: collecting data containing a target to be segmented under different environments, and then labeling the target to be segmented in a data set to form mask labeling information;
step 2, data set processing: changing data into a data structure required by a deep learning neural network;
step 3, constructing a semantic segmentation model based on a differential attention mechanism, comprising the following steps:
3.1, building a traditional semantic segmentation model;
3.2 model improvement: adding a differential attention module as claimed in any one of claims 1-6 after convolution;
and step 4, training the model to obtain a semantic segmentation model based on the differential attention mechanism.
10. The differential attention mechanism-based semantic segmentation method according to claim 9, characterized in that: the semantic segmentation model in step 3.1 is Unet.
CN202110277583.2A 2021-03-15 2021-03-15 Deep learning, target detection and semantic segmentation method based on differential attention Active CN112907607B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110277583.2A CN112907607B (en) 2021-03-15 2021-03-15 Deep learning, target detection and semantic segmentation method based on differential attention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110277583.2A CN112907607B (en) 2021-03-15 2021-03-15 Deep learning, target detection and semantic segmentation method based on differential attention

Publications (2)

Publication Number Publication Date
CN112907607A true CN112907607A (en) 2021-06-04
CN112907607B CN112907607B (en) 2024-06-18

Family

ID=76105698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110277583.2A Active CN112907607B (en) 2021-03-15 2021-03-15 Deep learning, target detection and semantic segmentation method based on differential attention

Country Status (1)

Country Link
CN (1) CN112907607B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4764971A (en) * 1985-11-25 1988-08-16 Eastman Kodak Company Image processing method including image segmentation
CN108830157A (en) * 2018-05-15 2018-11-16 华北电力大学(保定) Human bodys' response method based on attention mechanism and 3D convolutional neural networks
CN109685831A (en) * 2018-12-20 2019-04-26 山东大学 Method for tracking target and system based on residual error layering attention and correlation filter
CN110442723A (en) * 2019-08-14 2019-11-12 山东大学 A method of multi-tag text classification is used for based on the Co-Attention model that multistep differentiates
CN111858989A (en) * 2020-06-09 2020-10-30 西安工程大学 Image classification method of pulse convolution neural network based on attention mechanism
CN112016400A (en) * 2020-08-04 2020-12-01 香港理工大学深圳研究院 Single-class target detection method and device based on deep learning and storage medium
CN112069868A (en) * 2020-06-28 2020-12-11 南京信息工程大学 Unmanned aerial vehicle real-time vehicle detection method based on convolutional neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Chao, Huang Jianzhong, Xiang Panpan: "Application of differential analysis in stream cipher attacks", Journal of Applied Sciences, no. 02, 20 June 2004 (2004-06-20), pages 1-5 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117197162A (en) * 2023-09-27 2023-12-08 东北林业大学 Intracranial hemorrhage CT image segmentation method based on differential convolution
CN117197162B (en) * 2023-09-27 2024-04-09 东北林业大学 Intracranial hemorrhage CT image segmentation method based on differential convolution

Also Published As

Publication number Publication date
CN112907607B (en) 2024-06-18

Similar Documents

Publication Publication Date Title
Li et al. Efficient and interpretable deep blind image deblurring via algorithm unrolling
CN109584248B (en) Infrared target instance segmentation method based on feature fusion and dense connection network
Guo et al. Multiview high dynamic range image synthesis using fuzzy broad learning system
CN112308200B (en) Searching method and device for neural network
CN112418074A (en) Coupled posture face recognition method based on self-attention
CN113673590B (en) Rain removing method, system and medium based on multi-scale hourglass dense connection network
Rehman et al. Face recognition: A novel un-supervised convolutional neural network method
CN107154023A (en) Face super-resolution reconstruction method based on generation confrontation network and sub-pix convolution
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
CN112131943A (en) Video behavior identification method and system based on dual attention model
CN112862792B (en) Wheat powdery mildew spore segmentation method for small sample image dataset
CN113052185A (en) Small sample target detection method based on fast R-CNN
CN111583263A (en) Point cloud segmentation method based on joint dynamic graph convolution
CN111160217B (en) Method and system for generating countermeasure sample of pedestrian re-recognition system
CN108520203B (en) Multi-target feature extraction method based on fusion of self-adaptive multi-peripheral frame and cross pooling feature
Fan et al. Multi-scale depth information fusion network for image dehazing
CN110929685A (en) Pedestrian detection network structure based on mixed feature pyramid and mixed expansion convolution
CN113449612B (en) Three-dimensional target point cloud identification method based on sub-flow sparse convolution
CN116229178B (en) Image classification method for small quantity of training samples based on Transformer
Greco et al. Benchmarking deep networks for facial emotion recognition in the wild
CN112084952B (en) Video point location tracking method based on self-supervision training
CN111860124A (en) Remote sensing image classification method based on space spectrum capsule generation countermeasure network
Yeswanth et al. Residual skip network-based super-resolution for leaf disease detection of grape plant
CN116091946A (en) Yolov 5-based unmanned aerial vehicle aerial image target detection method
CN112907607B (en) Deep learning, target detection and semantic segmentation method based on differential attention

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant