WO2021218037A1 - Target detection method, apparatus, computer device and storage medium - Google Patents

Target detection method, apparatus, computer device and storage medium Download PDF

Info

Publication number
WO2021218037A1
Authority
WO
WIPO (PCT)
Prior art keywords
features
group
feature
network
feature fusion
Prior art date
Application number
PCT/CN2020/119710
Other languages
English (en)
French (fr)
Inventor
李楚
陈泽
陈岩
王志成
Original Assignee
北京迈格威科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京迈格威科技有限公司 filed Critical 北京迈格威科技有限公司
Publication of WO2021218037A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Definitions

  • the present disclosure relates to the technical field of image data processing, and in particular to a target detection method, device, computer equipment and storage medium.
  • target detection, as one of the most fundamental tasks in computer vision, is widely used in industry and daily life, for example in autonomous driving, security monitoring, and gaming and entertainment.
  • in conventional techniques, the target detection method first obtains feature maps of multiple scales by applying convolutions to the image; convolution and interpolation are then applied to the feature map of each scale, which is superimposed on the feature map of the previous scale to obtain a fused feature map of the previous scale; finally, the fused feature map of each scale is input into a detection network to obtain the target detection result.
  • a target detection method includes:
  • performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
  • inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network including n feature fusion layers;
  • in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features;
  • in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
  • inputting the n groups of second features into a detection network to obtain category information and position information of the target in the image to be detected.
  • obtaining the weight parameters corresponding to the i-th group of second features includes: performing global average pooling on the i-th group of second features to obtain pooled features;
  • inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
  • multiplying the i-th group of second features by the weight parameters includes: performing a convolution operation on the i-th group of second features to obtain convolved features;
  • multiplying the convolved features by the weight parameters to obtain the multiplied features.
  • in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features includes:
  • in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, and multiplying the i-th group of second features by the weight parameters to obtain multiplied features;
  • upsampling the multiplied features to obtain upsampled features; obtaining weight parameters corresponding to the upsampled features, multiplying the upsampled features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
  • obtaining the weight parameters corresponding to the upsampled features includes: performing global average pooling on the upsampled features to obtain pooled features;
  • inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the upsampled features.
  • in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features includes: performing global average pooling on the n-th group of first features to obtain pooled features;
  • adding the pooled features to the n-th group of first features to obtain the n-th group of second features.
  • inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected includes:
  • inputting the n groups of second features into a second feature fusion network, the second feature fusion network including n feature fusion layers, where in the first feature fusion layer the first group of second features is taken as the first group of third features;
  • in the i-th feature fusion layer, obtaining the (i-1)-th group of third features, and fusing the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained; inputting the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
  • inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected includes:
  • inputting the n groups of second features into a region proposal network to obtain an initial candidate box;
  • inputting the initial candidate box into a cascaded detection network, the detection network including m cascaded detection sub-networks;
  • performing a region-of-interest pooling operation on the original features for the initial candidate box, and inputting the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence;
  • for the (j-1)-th level detection box, performing the region-of-interest pooling operation on the original features, and inputting the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result;
  • performing non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
  • a target detection apparatus, the apparatus including:
  • a feature extraction module, used to perform feature extraction on the image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
  • a feature fusion module, used to input the n groups of first features of different scales into a first feature fusion network;
  • the first feature fusion network includes n feature fusion layers, and in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
  • the feature fusion module is also used to obtain, in the (i-1)-th feature fusion layer, the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
  • a detection module, used to input the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected.
  • a computer device includes a memory and a processor, the memory storing a computer program, and the processor implementing the following steps when executing the computer program:
  • performing feature extraction on the image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
  • inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network including n feature fusion layers;
  • in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features;
  • in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
  • inputting the n groups of second features into the detection network to obtain category information and position information of the target in the image to be detected.
  • a computer-readable storage medium has a computer program stored thereon, and when the computer program is executed by a processor, the following steps are implemented:
  • performing feature extraction on the image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
  • inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network including n feature fusion layers;
  • in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features;
  • in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
  • inputting the n groups of second features into the detection network to obtain category information and position information of the target in the image to be detected.
  • with the above method, when fusing features, the weight parameters corresponding to the second features are obtained and a series of computations are performed on the second features and their corresponding weight parameters.
  • this selection among the second features achieves the effect of selectively fusing each second feature with the next first feature, which combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
  • FIG. 1 is a schematic flowchart of a target detection method in an embodiment
  • FIG. 2 is a schematic flowchart of a supplementary solution for obtaining weight parameters corresponding to the second feature of the i-th group in an embodiment
  • Fig. 3 is a schematic flowchart of a supplementary solution for multiplying the i-th group of second features by weight parameters in an embodiment
  • FIG. 4 is a schematic flowchart of a supplementary solution for determining the (i-1)-th group of second features in an embodiment
  • FIG. 5 is a schematic flowchart of a supplementary solution for obtaining weight parameters corresponding to up-sampled features in an embodiment
  • FIG. 6 is a schematic flowchart of a supplementary solution for inputting n sets of second features into the detection network to obtain the category information and location information of the target in the image to be detected in an embodiment
  • Figure 7 is a structural block diagram of a target detection device in an embodiment
  • Fig. 8 is an internal structure diagram of a computer device in an embodiment.
  • for illustration, the target detection method of the present disclosure is described as applied to a target detection device.
  • the target detection device may be a terminal, a server, or a system including a terminal and a server, and is implemented through interaction between the terminal and the server.
  • the terminal can be, but is not limited to, various personal computers, notebook computers, smart phones, tablet computers, and portable wearable devices.
  • the server can be implemented by an independent server or a server cluster composed of multiple servers.
  • a target detection method is provided, which can be specifically implemented through the following steps:
  • Step S202: Perform feature extraction on the image to be detected to obtain n groups of first features of different scales.
  • n is an integer greater than 1.
  • the image to be detected is input into a target detection device, and the target detection device performs multiple feature extractions of different scales on the image to be detected to obtain n sets of first features of different scales.
  • the first feature can be composed of a three-dimensional tensor.
  • First features of different scales contain different feature information. For example, some first feature maps have rich semantic information, and some first feature maps have rich spatial information.
  • optionally, the target detection device may use the backbone network of a neural network to perform multi-scale feature extraction on the image to be detected.
  • the neural network may be a convolutional neural network.
  • a network such as VGG16 or ResNet is used to extract features in the image to be detected to obtain multiple sets of first features with different scales.
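As an illustration of this step, the following is a minimal PyTorch-style sketch of multi-scale feature extraction with a ResNet backbone; the choice of ResNet-50, the four-stage layout, and all names are assumptions for the example rather than specifics of the disclosure.

```python
# Minimal sketch: extract n = 4 groups of first features of different scales
# from a ResNet-50 backbone. Module and variable names are illustrative only.
import torch
import torchvision

class MultiScaleBackbone(torch.nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet50(weights=None)
        self.stem = torch.nn.Sequential(resnet.conv1, resnet.bn1,
                                        resnet.relu, resnet.maxpool)
        self.stages = torch.nn.ModuleList(
            [resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4])

    def forward(self, image):
        x = self.stem(image)
        first_features = []                # n groups of first features
        for stage in self.stages:          # each stage halves the spatial scale
            x = stage(x)
            first_features.append(x)
        return first_features

features = MultiScaleBackbone()(torch.randn(1, 3, 224, 224))
print([tuple(f.shape) for f in features])
# [(1, 256, 56, 56), (1, 512, 28, 28), (1, 1024, 14, 14), (1, 2048, 7, 7)]
```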
  • Step S204: Input the n groups of first features of different scales into a first feature fusion network; the first feature fusion network includes n feature fusion layers, and in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features.
  • the target detection device inputs the obtained n sets of first features of different scales into a preset first feature fusion network, and performs feature fusion through n feature fusion layers included in the first feature fusion network.
  • to achieve the fusion, the target detection device first needs to take the n-th group of first features as the n-th group of second features.
  • the n-th group of second features may simply be the n-th group of first features, that is, the same feature referred to by different terms according to its role.
  • the n-th group of second features may also be obtained by further processing the n-th group of first features; in this case, the n-th group of second features and the n-th group of first features are not the same feature.
  • the n-th group of first features is usually the first features with the smallest scale, so in implementation the target detection device can determine the smallest-scale first features as the n-th group of first features according to the scales of the first features.
  • Step S206: In the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained.
  • in this step, adjacent features are fused in order from n down to 1.
  • for the (i-1)-th feature fusion layer, the features to be fused come on the one hand from multiplying the i-th group of second features by their corresponding weight parameters, and on the other hand from the (i-1)-th group of first features; once both are available, the fusion of the multiplied features with the (i-1)-th group of first features is completed in the (i-1)-th feature fusion layer, yielding the (i-1)-th group of second features. This proceeds likewise until the first group of second features is obtained.
  • optionally, the target detection device adds or concatenates the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
  • the weight parameters may be preset, or may be obtained by further processing each second feature. Note that the weight parameters mainly serve to select among the second features, reducing computation and improving the effectiveness of feature fusion. For example, when a weight parameter is zero or negative, multiplying it with the second feature allows the feature to be kept or discarded by checking whether the product is positive.
  • Step S208: Input the n groups of second features into the detection network to obtain category information and position information of the target in the image to be detected.
  • specifically, feature fusion yields n groups of second features, which the target detection device inputs into the detection network to obtain the category information and position information of the target in the image to be detected.
  • optionally, the target detection device inputs the n groups of second features into a Faster R-CNN network or a Cascade R-CNN cascaded network, and finally outputs the category information and position information of the target in the image to be detected.
  • in the above target detection method, when fusing features, obtaining the weight parameters corresponding to the second features and performing a series of computations on the second features and their corresponding weight parameters enables selection among the second features, achieving the effect of selectively fusing each second feature with the next first feature; this combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
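The n-to-1 fusion order described above can be sketched as a short loop; here `fusion_layers[i]` is a hypothetical callable standing in for one feature fusion layer (its internals are sketched in the later examples), and treating the n-th group of second features as a plain copy of the n-th group of first features is the simplest of the variants the text allows.

```python
# Illustrative skeleton of steps S204-S206: fuse adjacent features from n down to 1.
# Indices are 0-based here, while the text counts groups from 1.
def fuse_top_down(first_features, fusion_layers):
    n = len(first_features)
    second_features = [None] * n
    second_features[n - 1] = first_features[n - 1]   # n-th group of second features
    for i in range(n - 1, 0, -1):
        # the (i-1)-th fusion layer combines the i-th group of second features
        # with the (i-1)-th group of first features
        second_features[i - 1] = fusion_layers[i - 1](
            second_features[i], first_features[i - 1])
    return second_features
```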
  • specifically, to strengthen the association between the weight parameters and the second features and to improve the accuracy and effectiveness of feature selection, in one example the target detection device reduces the dimensionality of the i-th group of second features to obtain dimensionality-reduced features.
  • next, the target detection device inputs the dimensionality-reduced features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
  • optionally, the target detection device performs a pooling operation on the i-th group of second features to obtain the pooled features, i.e., the dimensionality-reduced features.
  • further optionally, the target detection device performs global average pooling on the i-th group of second features to obtain the pooled features.
  • in another embodiment, the target detection device performs global max pooling on the i-th group of second features to obtain the pooled features. There are thus multiple ways to obtain the weight parameters corresponding to the i-th group of second features, and this embodiment is not limited to those listed above.
  • in the embodiments of the present disclosure, performing global average pooling on the second features and obtaining their corresponding weight parameters through a fully connected network strengthens the association between the weight parameters and the second features, so the weight parameters can select features more accurately.
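A minimal sketch of this weight-parameter branch follows; the single linear layer and the sigmoid squashing are assumptions, since the text only specifies global average pooling followed by a fully connected network.

```python
# Sketch of the weight-parameter branch: global average pooling, then a
# fully connected network producing one weight per channel.
import torch

class WeightBranch(torch.nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.pool = torch.nn.AdaptiveAvgPool2d(1)       # N*C*H*W -> N*C*1*1
        self.fc = torch.nn.Linear(channels, channels)   # fully connected network

    def forward(self, second_i):
        pooled = self.pool(second_i).flatten(1)         # N*C
        weights = torch.sigmoid(self.fc(pooled))        # squashing is an assumption
        return weights.unsqueeze(-1).unsqueeze(-1)      # N*C*1*1, broadcastable
```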
  • in an exemplary embodiment, multiplying the i-th group of second features by the weight parameters can be specifically implemented through the following steps:
  • specifically, the target detection device performs a convolution operation on the i-th group of second features to obtain the convolved features.
  • next, the target detection device multiplies the convolved features by the weight parameters to obtain the multiplied features.
  • in the embodiments of the present disclosure, the second features are selected by means of multiplication, which helps improve the accuracy of feature selection.
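The convolve-and-multiply step can then be sketched as below, reusing the hypothetical `WeightBranch` from the previous example; the 3*3 kernel and the 256-channel width are illustrative choices, not values fixed by the disclosure.

```python
# Sketch of the selection step: convolve the i-th group of second features,
# then scale the convolved features channel-wise by the weight parameters.
import torch

conv = torch.nn.Conv2d(256, 256, kernel_size=3, padding=1)
weight_branch = WeightBranch(256)             # from the sketch above
second_i = torch.randn(1, 256, 16, 16)        # i-th group of second features
multiplied = conv(second_i) * weight_branch(second_i)   # the multiplied features
```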
  • on the basis of the above embodiments, step S206 can be specifically implemented through the following steps:
  • specifically, considering that the scales of the groups of second features differ, to make fusion convenient and accurate, after obtaining the multiplied features the target detection device upsamples them to obtain the upsampled features; the purpose of upsampling is to enlarge the smaller-scale multiplied features to the scale of the (i-1)-th group of first features, so that spatially corresponding features can be fused.
  • each feature fusion can be seen as a gate structure ("door") that controls which features may be fused, improving the effectiveness of the fusion.
  • in the embodiments of the present disclosure, a gate structure is adopted to fuse features selectively, so that target detection is performed on the fused features, which helps improve the accuracy of target detection.
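One possible reading of a complete gate ("door") fusion layer is sketched below, chaining the conv-and-multiply, upsampling, and second weighting steps; element-wise addition is used for the final fusion, which is one of the two options (addition or concatenation) the text names, and nearest-neighbor interpolation is an assumed upsampling method.

```python
# Sketch of one gate ("door") fusion layer: weight and multiply the i-th group
# of second features, upsample to the (i-1)-th scale, weight again, then add.
import torch
import torch.nn.functional as F

class GateFusionLayer(torch.nn.Module):
    def __init__(self, channels=256):
        super().__init__()
        self.conv = torch.nn.Conv2d(channels, channels, 3, padding=1)
        self.gate_second = WeightBranch(channels)     # from the earlier sketch
        self.gate_upsampled = WeightBranch(channels)

    def forward(self, second_i, first_prev):
        x = self.conv(second_i) * self.gate_second(second_i)  # multiplied features
        x = F.interpolate(x, size=first_prev.shape[-2:])      # upsampled features
        x = x * self.gate_upsampled(x)                        # second selection
        return x + first_prev                # fuse -> (i-1)-th group of second features
```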
  • obtaining the weight parameters corresponding to the up-sampled features can be specifically implemented by the following steps:
  • S206a Perform global average pooling on the up-sampled features to obtain pooled features
  • S206b Input the pooled features into the fully connected network to obtain the weight parameters corresponding to the up-sampled features.
  • the target detection device reduces the dimensionality of the up-sampled features to obtain the reduced-dimensionality feature.
  • the target detection device inputs the reduced-dimensional features into the fully connected network to obtain the weight parameters corresponding to the up-sampled features.
  • the target detection device performs a pooling operation on the up-sampled features to obtain the pooled features, that is, the dimensionality-reduced feature.
  • the target detection device performs global average pooling on the up-sampled features to obtain the pooled features.
  • the target detection device performs global maximum pooling on the up-sampled features to obtain the pooled features. It can be seen that there are multiple implementation manners for obtaining the weight parameters corresponding to the up-sampled features, and this embodiment is not limited to the implementation manners listed above.
  • global average pooling is performed on the up-sampled features and the corresponding weight parameters are obtained through the fully connected network processing, which can enhance the correlation between the weight parameters and the up-sampled features, so that the weight parameters can be Choose features more accurately.
  • in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features can be specifically implemented through the following steps:
  • Step S232: Perform global average pooling on the n-th group of first features to obtain pooled features;
  • Step S234: Add the pooled features to the n-th group of first features to obtain the n-th group of second features.
  • specifically, taking the n-th group of first features to be the smallest-scale first features as an example: after global average pooling, the pooled features have dimension N*C*1*1, where N is the batch size and C is the number of channels; the pooled features are then passed through a 1*1 convolutional network that changes the number of channels to 256. The target detection device then uses a broadcast mechanism to expand them to N*256*H*W, i.e., the value at every position of the same H*W map is identical, and adds them to the smallest-scale first features to obtain the second features (the n-th group of second features).
  • the addition can be implemented as follows: assuming the smallest-scale first features have dimension N*C*H*W, they are input into a 1*1 convolutional network that changes the number of channels to 256, i.e., the dimension becomes N*256*H*W; the first features, now of the same dimension, are added to the pooled features to obtain the n-th group of second features.
  • in the embodiments of the present disclosure, performing global average pooling on the n-th group of first features structurally regularizes the whole network to prevent overfitting, which helps improve the accuracy of target detection.
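A sketch of this n-th fusion layer follows; whether the pooled branch and the feature branch share one 1*1 convolution is not stated in the text, so two separate layers are used here as an assumption.

```python
# Sketch of the n-th fusion layer: global average pooling to N*C*1*1, 1*1
# convolutions to 256 channels, then a broadcast addition over the H*W map.
import torch

class SmallestScaleLayer(torch.nn.Module):
    def __init__(self, in_channels, out_channels=256):
        super().__init__()
        self.proj = torch.nn.Conv2d(in_channels, out_channels, 1)
        self.pool_proj = torch.nn.Conv2d(in_channels, out_channels, 1)

    def forward(self, first_n):                           # first_n: N*C*H*W
        pooled = first_n.mean(dim=(2, 3), keepdim=True)   # global average pool, N*C*1*1
        pooled = self.pool_proj(pooled)                   # N*256*1*1
        projected = self.proj(first_n)                    # N*256*H*W
        return projected + pooled       # broadcast add -> n-th group of second features
```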
  • on the basis of the above embodiments, step S208 can be specifically implemented through the following steps:
  • S2082: Input the n groups of second features into a second feature fusion network, the second feature fusion network including n feature fusion layers, where in the first feature fusion layer the first group of second features is taken as the first group of third features;
  • S2084: In the i-th feature fusion layer, obtain the (i-1)-th group of third features, and fuse the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained;
  • S2086: Input the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
  • specifically, the target detection device inputs the n groups of second features into the second feature fusion network, which includes n feature fusion layers; in the first feature fusion layer, the first group of second features is taken as the first group of third features.
  • next, in the i-th feature fusion layer, the target detection device obtains the (i-1)-th group of third features and fuses them with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained.
  • next, the target detection device inputs the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
  • in the embodiments of the present disclosure, further fusing the features enhances their semantic information and improves the detection accuracy for small-size targets.
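The second, bottom-up fusion pass can be sketched as follows; the text does not say how the larger (i-1)-th group of third features is brought to the i-th scale before fusion, so adaptive max pooling and element-wise addition are both assumptions here.

```python
# Sketch of the second feature fusion network: third features built from
# group 1 up to group n by fusing the previous group of third features.
import torch.nn.functional as F

def fuse_bottom_up(second_features):
    third_features = [second_features[0]]      # 1st group of third features
    for i in range(1, len(second_features)):
        prev = F.adaptive_max_pool2d(           # assumed downsampling step
            third_features[i - 1], second_features[i].shape[-2:])
        third_features.append(prev + second_features[i])  # assumed fusion: addition
    return third_features
```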
  • on the basis of the above embodiments, step S208 may alternatively be implemented through the following steps:
  • S208a: Input the n groups of second features into a region proposal network to obtain an initial candidate box;
  • S208b: Input the initial candidate box into a cascaded detection network that includes m cascaded detection sub-networks; perform a region-of-interest pooling operation on the original features for the initial candidate box, and input the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence;
  • S208c: For the (j-1)-th level detection box, perform the region-of-interest pooling operation on the original features, and input the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result;
  • S208d: Perform non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
  • specifically, the target detection device inputs the n groups of second features into the region proposal network to obtain the initial candidate box B0.
  • next, the target detection device uses m cascaded detection sub-networks: it performs the region-of-interest pooling operation on the original features for the initial candidate box and inputs the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence.
  • next, for the (j-1)-th level detection box, the target detection device performs the region-of-interest pooling operation on the original features and inputs the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result.
  • next, the target detection device performs non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
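A heavily simplified sketch of this cascaded stage is given below; the per-stage detection heads are stubbed out as callables returning refined boxes and confidences, `torchvision.ops.roi_align` stands in for the region-of-interest pooling, and the output size and NMS threshold are illustrative values.

```python
# Sketch of the cascade: each of the m sub-networks pools ROI features from
# the original features and refines the boxes; NMS is applied to the final result.
import torch
from torchvision.ops import roi_align, nms

def cascade_detect(original_features, initial_boxes, stages):
    # original_features: 1*C*H*W; initial_boxes: K*4 candidate boxes (B0)
    boxes = initial_boxes
    scores = None
    for stage in stages:                               # m cascaded sub-networks
        batch_idx = torch.zeros(len(boxes), 1)         # single-image batch assumed
        rois = torch.cat([batch_idx, boxes], dim=1)    # K*5 boxes for roi_align
        pooled = roi_align(original_features, rois, output_size=(7, 7))
        boxes, scores = stage(pooled, boxes)           # stub: refined boxes + confidence
    keep = nms(boxes, scores, iou_threshold=0.5)       # non-maximum suppression
    return boxes[keep], scores[keep]
```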
  • a target detection device including: a feature extraction module 302, a feature fusion module 304, and a detection module 306, wherein:
  • the feature extraction module 302 is configured to perform feature extraction on the image to be detected to obtain n sets of first features of different scales, where n is an integer greater than 1;
  • the feature fusion module 304 is used to input the n groups of first features of different scales into the first feature fusion network;
  • the first feature fusion network includes n feature fusion layers, and in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
  • the feature fusion module 304 is also used to obtain the weight parameters corresponding to the i-th group of second features and the i-th group of second features in the i-1th feature fusion layer, and multiply the i-th group of second features by the weight parameters , And fuse the multiplied features with the first feature of the i-1th group to obtain the second feature of the i-1th group, until the second feature of the first group is obtained;
  • the detection module 306 is used to input n sets of second features into the detection network to obtain category information and location information of the target in the image to be detected.
  • in the above target detection apparatus, when fusing features, obtaining the weight parameters corresponding to the second features and performing a series of computations on the second features and their corresponding weight parameters enables selection among the second features, achieving the effect of selectively fusing each second feature with the next first feature; this combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
  • the feature fusion module 304 is specifically configured to perform global average pooling on the i-th group of second features to obtain the pooled features, and to input the pooled features into the fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
  • the feature fusion module 304 is specifically configured to perform a convolution operation on the i-th group of second features to obtain the convolved features, and to multiply the convolved features by the weight parameters to obtain the multiplied features.
  • the feature fusion module 304 is specifically configured to: obtain, in the (i-1)-th feature fusion layer, the i-th group of second features and the weight parameters corresponding to the i-th group of second features; multiply the i-th group of second features by the weight parameters to obtain the multiplied features; upsample the multiplied features to obtain the upsampled features; obtain the weight parameters corresponding to the upsampled features; multiply the upsampled features by the weight parameters; and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
  • the feature fusion module 304 is specifically configured to perform global average pooling on the upsampled features to obtain the pooled features, and to input the pooled features into the fully connected network to obtain the weight parameters corresponding to the upsampled features.
  • the feature fusion module 304 is specifically configured to perform global average pooling on the n-th group of first features to obtain the pooled features, and to add the pooled features to the n-th group of first features to obtain the n-th group of second features.
  • the detection module 306 is specifically configured to input the n groups of second features into the second feature fusion network, the second feature fusion network including n feature fusion layers, where in the first feature fusion layer the first group of second features is taken as the first group of third features; in the i-th feature fusion layer, obtain the (i-1)-th group of third features and fuse the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained; and input the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
  • the detection module 306 is specifically configured to input the n groups of second features into the region proposal network to obtain the initial candidate box; input the initial candidate box into the cascaded detection network, the detection network including m cascaded detection sub-networks; perform the region-of-interest pooling operation on the original features for the initial candidate box and input the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence; for the (j-1)-th level detection box, perform the region-of-interest pooling operation on the original features and input the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result; and perform non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
  • Each module in the above-mentioned target detection device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device is provided.
  • the computer device may be a server, and its internal structure diagram may be as shown in FIG. 8.
  • the computer device 800 includes a processor 81, a memory, and a network interface 88 connected through a system bus 82.
  • the processor 81 of the computer device is used to provide calculation and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium 87 and an internal memory 86.
  • the non-volatile storage medium 87 stores an operating system 83, a computer program 84, and a database 85.
  • the internal memory 86 provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium 87.
  • the network interface 88 of the computer device 800 is used to communicate with an external terminal through a network connection.
  • the computer program 84 is executed by the processor 81 to realize a target detection method.
  • those skilled in the art can understand that FIG. 8 is only a block diagram of part of the structure related to the solution of the present disclosure and does not constitute a limitation on the computer device to which the solution is applied.
  • a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
  • a computer device including a memory and a processor, where a computer program is stored in the memory, and the processor implements the steps in the foregoing method embodiments when the processor executes the computer program.
  • a computer-readable storage medium is provided, and a computer program is stored thereon, and when the computer program is executed by a processor, the steps in the foregoing method embodiments are implemented.
  • Non-volatile memory may include read-only memory (Read-Only Memory, ROM), magnetic tape, floppy disk, flash memory, or optical storage.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM may be in various forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The present disclosure relates to a target detection method and apparatus, a computer device, and a storage medium. The method includes: performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1; inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network including n feature fusion layers, where in the n-th feature fusion layer the n-th group of first features is taken as the n-th group of second features; in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained; and inputting the n groups of second features into a detection network to obtain category information and position information of a target in the image to be detected. The method helps improve the accuracy of target detection.

Description

Target detection method, apparatus, computer device and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
The present disclosure claims priority to Chinese Patent Application No. 202010356470.7, entitled "Target detection method, apparatus, computer device and storage medium" and filed on April 29, 2020, the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present disclosure relates to the technical field of image data processing, and in particular to a target detection method, apparatus, computer device and storage medium.
BACKGROUND
With the development of artificial intelligence, target detection, as one of the most fundamental tasks in computer vision, is widely used in industry and daily life, for example in autonomous driving, security monitoring, and gaming and entertainment.
In conventional techniques, a target detection method first obtains feature maps of multiple scales by applying convolutions to the image; convolution and interpolation are then applied to the feature map of each scale, which is superimposed on the feature map of the previous scale to obtain a fused feature map of the previous scale; finally, the fused feature map of each scale is input into a detection network to obtain the target detection result.
However, in many complex scenes, such as scenes with large scale variation, the scale of targets in the image varies greatly, and detection accuracy is relatively low when conventional target detection methods are used in such situations.
SUMMARY
In view of this, it is necessary to address the above technical problem by providing a target detection method, apparatus, computer device and storage medium capable of improving the accuracy of target detection.
A target detection method, the method comprising:
performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
inputting the n groups of second features into a detection network to obtain category information and position information of a target in the image to be detected.
In one embodiment, obtaining the weight parameters corresponding to the i-th group of second features comprises:
performing global average pooling on the i-th group of second features to obtain pooled features;
inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
In one embodiment, multiplying the i-th group of second features by the weight parameters comprises:
performing a convolution operation on the i-th group of second features to obtain convolved features;
multiplying the convolved features by the weight parameters to obtain multiplied features.
In one embodiment, in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features comprises:
in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, and multiplying the i-th group of second features by the weight parameters to obtain multiplied features;
upsampling the multiplied features to obtain upsampled features;
obtaining weight parameters corresponding to the upsampled features, multiplying the upsampled features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
In one embodiment, obtaining the weight parameters corresponding to the upsampled features comprises:
performing global average pooling on the upsampled features to obtain pooled features;
inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the upsampled features.
In one embodiment, in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features comprises:
performing global average pooling on the n-th group of first features to obtain pooled features;
adding the pooled features to the n-th group of first features to obtain the n-th group of second features.
In one embodiment, inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected comprises:
inputting the n groups of second features into a second feature fusion network, the second feature fusion network comprising n feature fusion layers, wherein in the first feature fusion layer, the first group of second features is taken as the first group of third features;
in the i-th feature fusion layer, obtaining the (i-1)-th group of third features, and fusing the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained;
inputting the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
In one embodiment, inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected comprises:
inputting the n groups of second features into a region proposal network to obtain an initial candidate box;
inputting the initial candidate box into a cascaded detection network, the detection network comprising m cascaded detection sub-networks; performing a region-of-interest pooling operation on the original features for the initial candidate box, and inputting the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence;
for the (j-1)-th level detection box, performing the region-of-interest pooling operation on the original features, and inputting the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result;
performing non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
A target detection apparatus, the apparatus comprising:
a feature extraction module configured to perform feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
a feature fusion module configured to input the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
the feature fusion module being further configured to, in the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
a detection module configured to input the n groups of second features into a detection network to obtain category information and position information of a target in the image to be detected.
A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the following steps when executing the computer program:
performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
inputting the n groups of second features into a detection network to obtain category information and position information of the target in the image to be detected.
A computer-readable storage medium, on which a computer program is stored, wherein the following steps are implemented when the computer program is executed by a processor:
performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
inputting the n groups of second features into a detection network to obtain category information and position information of the target in the image to be detected.
With the above target detection method, apparatus, computer device and storage medium, when fusing features, obtaining the weight parameters corresponding to the second features and performing a series of computations on the second features and their corresponding weight parameters enables selection among the second features, achieving the effect of selectively fusing each second feature with the next first feature; this combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic flowchart of a target detection method in one embodiment;
FIG. 2 is a schematic flowchart of a supplementary scheme for obtaining the weight parameters corresponding to the i-th group of second features in one embodiment;
FIG. 3 is a schematic flowchart of a supplementary scheme for multiplying the i-th group of second features by the weight parameters in one embodiment;
FIG. 4 is a schematic flowchart of a supplementary scheme for determining the (i-1)-th group of second features in one embodiment;
FIG. 5 is a schematic flowchart of a supplementary scheme for obtaining the weight parameters corresponding to the upsampled features in one embodiment;
FIG. 6 is a schematic flowchart of a supplementary scheme for inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected in one embodiment;
FIG. 7 is a structural block diagram of a target detection apparatus in one embodiment;
FIG. 8 is an internal structure diagram of a computer device in one embodiment.
DETAILED DESCRIPTION
To make the objectives, technical solutions and advantages of the present disclosure clearer, the present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain the present disclosure, not to limit it.
In an exemplary embodiment, the target detection method of the present disclosure is described as applied to a target detection device. The target detection device may be a terminal, a server, or a system comprising a terminal and a server, implemented through interaction between the terminal and the server. The terminal may be, but is not limited to, a personal computer, a notebook computer, a smartphone, a tablet computer or a portable wearable device; the server may be implemented as an independent server or as a server cluster composed of multiple servers.
In an exemplary embodiment, as shown in FIG. 1, a target detection method is provided, which can be implemented through the following steps:
Step S202: Perform feature extraction on the image to be detected to obtain n groups of first features of different scales.
Here, n is an integer greater than 1.
Specifically, the image to be detected is input into the target detection device, which performs feature extraction at multiple different scales on the image to obtain n groups of first features of different scales. A first feature may consist of a three-dimensional tensor. First features of different scales carry different feature information; for example, some first feature maps have rich semantic information while others have rich spatial information.
Optionally, the target detection device may use the backbone network of a neural network to perform multi-scale feature extraction on the image to be detected. Optionally, the neural network may be a convolutional neural network; for example, a network such as VGG16 or ResNet is used to extract features from the image to be detected, yielding multiple groups of first features of different scales.
Step S204: Input the n groups of first features of different scales into a first feature fusion network; the first feature fusion network includes n feature fusion layers, and in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features.
Specifically, the target detection device inputs the obtained n groups of first features of different scales into a preset first feature fusion network and performs feature fusion through the n feature fusion layers of that network. To achieve the fusion, the target detection device first needs to take the n-th group of first features as the n-th group of second features. Optionally, the n-th group of second features may simply be the n-th group of first features, i.e., the same feature referred to by different terms according to its role. Alternatively, the n-th group of second features may be obtained by further processing the n-th group of first features; in that case the two are not the same feature.
Optionally, the n-th group of first features is usually the first features with the smallest scale, so in implementation the target detection device can determine the smallest-scale first features as the n-th group of first features according to the scales of the first features.
Step S206: In the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained.
Specifically, in this step adjacent features are fused in order from n down to 1. For the (i-1)-th feature fusion layer, the features to be fused come on the one hand from multiplying the i-th group of second features by their corresponding weight parameters, and on the other hand from the (i-1)-th group of first features; once both are available, the fusion of the multiplied features with the (i-1)-th group of first features is completed in the (i-1)-th feature fusion layer, yielding the (i-1)-th group of second features. This proceeds likewise until the features obtained by multiplying the second group of second features by their corresponding weight parameters are fused with the first group of first features to obtain the first group of second features, at which point the feature fusion of this step is complete.
Optionally, the target detection device adds or concatenates the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
Optionally, the weight parameters may be preset, or may be obtained by further processing each second feature. Note that the weight parameters mainly serve to select among the second features, reducing computation and improving the effectiveness of feature fusion. For example, when a weight parameter is zero or negative, multiplying it with the second feature allows the feature to be kept or discarded by checking whether the product is positive.
Step S208: Input the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected.
Specifically, it follows from the above steps that feature fusion yields n groups of second features. The target detection device then inputs the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected. Optionally, the target detection device inputs the n groups of second features into a Faster R-CNN network or a Cascade R-CNN cascaded network, and finally outputs the category information and position information of the target in the image to be detected.
In the above target detection method, when fusing features, obtaining the weight parameters corresponding to the second features and performing a series of computations on the second features and their corresponding weight parameters enables selection among the second features, achieving the effect of selectively fusing each second feature with the next first feature; this combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
In an exemplary embodiment, referring to FIG. 2, obtaining the weight parameters corresponding to the i-th group of second features can be implemented through the following steps:
S212: Perform global average pooling on the i-th group of second features to obtain pooled features;
S214: Input the pooled features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
Specifically, to strengthen the association between the weight parameters and the second features and to improve the accuracy and effectiveness of feature selection, in one example the target detection device reduces the dimensionality of the i-th group of second features to obtain dimensionality-reduced features, and then inputs the dimensionality-reduced features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features. Optionally, the target detection device performs a pooling operation on the i-th group of second features to obtain the pooled, i.e., dimensionality-reduced, features. Further optionally, the target detection device performs global average pooling on the i-th group of second features to obtain the pooled features. In another embodiment, the target detection device performs global max pooling on the i-th group of second features to obtain the pooled features. There are thus multiple ways to obtain the weight parameters corresponding to the i-th group of second features, and this embodiment is not limited to those listed above.
In the embodiments of the present disclosure, performing global average pooling on the second features and obtaining their corresponding weight parameters through a fully connected network strengthens the association between the weight parameters and the second features, so the weight parameters can select features more accurately.
In an exemplary embodiment, referring to FIG. 3, multiplying the i-th group of second features by the weight parameters can be implemented through the following steps:
S222: Perform a convolution operation on the i-th group of second features to obtain convolved features;
S224: Multiply the convolved features by the weight parameters to obtain multiplied features.
Specifically, the target detection device performs a convolution operation on the i-th group of second features to obtain the convolved features, and then multiplies the convolved features by the weight parameters to obtain the multiplied features.
In the embodiments of the present disclosure, the second features are selected by means of multiplication, which helps improve the accuracy of feature selection.
In an exemplary embodiment, a possible implementation of step S206 is involved: in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features. On the basis of the above embodiments, referring to FIG. 4, step S206 can be implemented through the following steps:
S2062: In the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, and multiply the i-th group of second features by the weight parameters to obtain multiplied features;
S2064: Upsample the multiplied features to obtain upsampled features;
S2066: Obtain the weight parameters corresponding to the upsampled features, multiply the upsampled features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
Specifically, considering that the scales of the groups of second features differ, to make fusion convenient and accurate, after obtaining the multiplied features the target detection device upsamples them to obtain the upsampled features; the purpose of upsampling is to enlarge the smaller-scale multiplied features to the scale of the (i-1)-th group of first features, so that spatially corresponding features can be fused. Each feature fusion can be seen as a gate structure ("door") that controls which features may be fused, improving the effectiveness of the fusion.
In the embodiments of the present disclosure, a gate structure is adopted to fuse features selectively, so that target detection is performed on the fused features, which helps improve the accuracy of target detection.
In an exemplary embodiment, referring to FIG. 5, obtaining the weight parameters corresponding to the upsampled features can be implemented through the following steps:
S206a: Perform global average pooling on the upsampled features to obtain pooled features;
S206b: Input the pooled features into a fully connected network to obtain the weight parameters corresponding to the upsampled features.
Specifically, to strengthen the association between the weight parameters and the upsampled features and to improve the accuracy and effectiveness of feature selection, in one example the target detection device reduces the dimensionality of the upsampled features to obtain dimensionality-reduced features, and then inputs the dimensionality-reduced features into a fully connected network to obtain the weight parameters corresponding to the upsampled features. Optionally, the target detection device performs a pooling operation on the upsampled features to obtain the pooled, i.e., dimensionality-reduced, features. Further optionally, the target detection device performs global average pooling on the upsampled features to obtain the pooled features. In another embodiment, the target detection device performs global max pooling on the upsampled features to obtain the pooled features. There are thus multiple ways to obtain the weight parameters corresponding to the upsampled features, and this embodiment is not limited to those listed above.
In the embodiments of the present disclosure, performing global average pooling on the upsampled features and obtaining the corresponding weight parameters through a fully connected network strengthens the association between the weight parameters and the upsampled features, so the weight parameters can select features more accurately.
In an exemplary embodiment, in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features can be implemented through the following steps:
Step S232: Perform global average pooling on the n-th group of first features to obtain pooled features;
Step S234: Add the pooled features to the n-th group of first features to obtain the n-th group of second features.
Specifically, taking the n-th group of first features to be the smallest-scale first features as an example: after the target detection device applies global average pooling to the smallest-scale first features, the pooled features have dimension N*C*1*1, where N is the batch size and C is the number of channels; the pooled features are then passed through a 1*1 convolutional network that changes the number of channels to 256. The target detection device then uses a broadcast mechanism to expand them to N*256*H*W, i.e., the value at every position of the same H*W map is identical, and adds them to the smallest-scale first features to obtain the second features (the n-th group of second features). The addition can be implemented as follows: assuming the smallest-scale first features have dimension N*C*H*W, they are input into a 1*1 convolutional network that changes the number of channels to 256, i.e., the dimension becomes N*256*H*W; the first features, now of the same dimension, are added to the pooled features to obtain the n-th group of second features.
In the embodiments of the present disclosure, performing global average pooling on the n-th group of first features structurally regularizes the whole network to prevent overfitting, which helps improve the accuracy of target detection.
In an exemplary embodiment, a possible implementation of inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected is involved. On the basis of the above embodiments, referring to FIG. 6, step S208 can be implemented through the following steps:
S2082: Input the n groups of second features into a second feature fusion network, the second feature fusion network including n feature fusion layers, where in the first feature fusion layer the first group of second features is taken as the first group of third features;
S2084: In the i-th feature fusion layer, obtain the (i-1)-th group of third features, and fuse the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained;
S2086: Input the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
Specifically, the target detection device inputs the n groups of second features into the second feature fusion network, which includes n feature fusion layers; in the first feature fusion layer, the first group of second features is taken as the first group of third features. Next, in the i-th feature fusion layer, the target detection device obtains the (i-1)-th group of third features and fuses them with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained. The target detection device then inputs the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
In the embodiments of the present disclosure, further fusing the features enhances their semantic information and improves the detection accuracy for small-size targets.
In an exemplary embodiment, another possible implementation of inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected is involved. On the basis of the above embodiments, step S208 can be implemented through the following steps:
S208a: Input the n groups of second features into a region proposal network to obtain an initial candidate box;
S208b: Input the initial candidate box into a cascaded detection network that includes m cascaded detection sub-networks; perform a region-of-interest pooling operation on the original features for the initial candidate box, and input the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence;
S208c: For the (j-1)-th level detection box, perform the region-of-interest pooling operation on the original features, and input the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result;
S208d: Perform non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
Specifically, the target detection device inputs the n groups of second features into the region proposal network to obtain the initial candidate box B0. The target detection device then uses m cascaded detection sub-networks: it performs the region-of-interest pooling operation on the original features for the initial candidate box and inputs the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence. Next, for the (j-1)-th level detection box, the target detection device performs the region-of-interest pooling operation on the original features and inputs the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result. Finally, the target detection device performs non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
It should be understood that, although the steps in the flowcharts of FIGS. 1-6 are shown in the order indicated by the arrows, these steps are not necessarily executed in that order. Unless explicitly stated herein, the execution of these steps is not strictly ordered, and they may be executed in other orders. Moreover, at least some of the steps in FIGS. 1-6 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be executed at different moments; their execution order is not necessarily sequential, and they may be executed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In an exemplary embodiment, as shown in FIG. 7, a target detection apparatus is provided, including a feature extraction module 302, a feature fusion module 304 and a detection module 306, wherein:
the feature extraction module 302 is configured to perform feature extraction on the image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
the feature fusion module 304 is configured to input the n groups of first features of different scales into the first feature fusion network, the first feature fusion network including n feature fusion layers, where in the n-th feature fusion layer the n-th group of first features is taken as the n-th group of second features;
the feature fusion module 304 is further configured to, in the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
the detection module 306 is configured to input the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected.
In the above target detection apparatus, when fusing features, obtaining the weight parameters corresponding to the second features and performing a series of computations on the second features and their corresponding weight parameters enables selection among the second features, achieving the effect of selectively fusing each second feature with the next first feature; this combines the feature information of features of different scales more effectively and helps improve the accuracy of target detection.
In an exemplary embodiment, the feature fusion module 304 is specifically configured to perform global average pooling on the i-th group of second features to obtain pooled features, and to input the pooled features into the fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
In an exemplary embodiment, the feature fusion module 304 is specifically configured to perform a convolution operation on the i-th group of second features to obtain convolved features, and to multiply the convolved features by the weight parameters to obtain multiplied features.
In an exemplary embodiment, the feature fusion module 304 is specifically configured to: obtain, in the (i-1)-th feature fusion layer, the i-th group of second features and the weight parameters corresponding to the i-th group of second features; multiply the i-th group of second features by the weight parameters to obtain multiplied features; upsample the multiplied features to obtain upsampled features; obtain the weight parameters corresponding to the upsampled features; multiply the upsampled features by the weight parameters; and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
In an exemplary embodiment, the feature fusion module 304 is specifically configured to perform global average pooling on the upsampled features to obtain pooled features, and to input the pooled features into the fully connected network to obtain the weight parameters corresponding to the upsampled features.
In an exemplary embodiment, the feature fusion module 304 is specifically configured to perform global average pooling on the n-th group of first features to obtain pooled features, and to add the pooled features to the n-th group of first features to obtain the n-th group of second features.
In an exemplary embodiment, the detection module 306 is specifically configured to input the n groups of second features into the second feature fusion network, the second feature fusion network including n feature fusion layers, where in the first feature fusion layer the first group of second features is taken as the first group of third features; in the i-th feature fusion layer, obtain the (i-1)-th group of third features and fuse the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained; and input the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
In an exemplary embodiment, the detection module 306 is specifically configured to input the n groups of second features into the region proposal network to obtain the initial candidate box; input the initial candidate box into the cascaded detection network, the detection network including m cascaded detection sub-networks; perform the region-of-interest pooling operation on the original features for the initial candidate box and input the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence; for the (j-1)-th level detection box, perform the region-of-interest pooling operation on the original features and input the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result; and perform non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
For specific limitations of the target detection apparatus, reference may be made to the limitations of the target detection method above, which are not repeated here. Each module in the above target detection apparatus may be implemented in whole or in part by software, hardware or a combination thereof. The above modules may be embedded in or independent of a processor in a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can call and execute the operations corresponding to each module.
In an exemplary embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in FIG. 8. The computer device 800 includes a processor 81, a memory and a network interface 88 connected through a system bus 82. The processor 81 of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium 87 and an internal memory 86. The non-volatile storage medium 87 stores an operating system 83, a computer program 84 and a database 85. The internal memory 86 provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium 87. The network interface 88 of the computer device 800 is used to communicate with external terminals through a network connection. The computer program 84, when executed by the processor 81, implements a target detection method.
Those skilled in the art can understand that the structure shown in FIG. 8 is only a block diagram of part of the structure related to the solution of the present disclosure and does not constitute a limitation on the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In an exemplary embodiment, a computer device is further provided, including a memory and a processor, the memory storing a computer program, and the processor implementing the steps in the above method embodiments when executing the computer program.
In an exemplary embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps in the above method embodiments are implemented.
Those of ordinary skill in the art can understand that all or part of the processes in the methods of the above embodiments can be completed by instructing the relevant hardware through a computer program; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, it may include the processes of the embodiments of the above methods. Any reference to memory, storage, database or other media used in the embodiments provided by the present disclosure may include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory or optical storage. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as the combinations of these technical features are not contradictory, they should be regarded as falling within the scope of this specification.
The above embodiments merely express several implementations of the present disclosure, and their descriptions are specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be pointed out that those of ordinary skill in the art can make several variations and improvements without departing from the concept of the present disclosure, all of which fall within the protection scope of the present disclosure. Therefore, the protection scope of the present disclosure shall be subject to the appended claims.

Claims (11)

  1. A target detection method, wherein the method comprises:
    performing feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
    inputting the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
    in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
    inputting the n groups of second features into a detection network to obtain category information and position information of a target in the image to be detected.
  2. The method according to claim 1, wherein obtaining the weight parameters corresponding to the i-th group of second features comprises:
    performing global average pooling on the i-th group of second features to obtain pooled features;
    inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the i-th group of second features.
  3. The method according to claim 1 or 2, wherein multiplying the i-th group of second features by the weight parameters comprises:
    performing a convolution operation on the i-th group of second features to obtain convolved features;
    multiplying the convolved features by the weight parameters to obtain multiplied features.
  4. The method according to claim 1, wherein, in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiplying the i-th group of second features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features comprises:
    in the (i-1)-th feature fusion layer, obtaining the i-th group of second features and the weight parameters corresponding to the i-th group of second features, and multiplying the i-th group of second features by the weight parameters to obtain multiplied features;
    upsampling the multiplied features to obtain upsampled features;
    obtaining weight parameters corresponding to the upsampled features, multiplying the upsampled features by the weight parameters, and fusing the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features.
  5. The method according to claim 4, wherein obtaining the weight parameters corresponding to the upsampled features comprises:
    performing global average pooling on the upsampled features to obtain pooled features;
    inputting the pooled features into a fully connected network to obtain the weight parameters corresponding to the upsampled features.
  6. The method according to any one of claims 1 to 5, wherein, in the n-th feature fusion layer, taking the n-th group of first features as the n-th group of second features comprises:
    performing global average pooling on the n-th group of first features to obtain pooled features;
    adding the pooled features to the n-th group of first features to obtain the n-th group of second features.
  7. The method according to any one of claims 1 to 6, wherein inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected comprises:
    inputting the n groups of second features into a second feature fusion network, the second feature fusion network comprising n feature fusion layers, wherein in the first feature fusion layer, the first group of second features is taken as the first group of third features;
    in the i-th feature fusion layer, obtaining the (i-1)-th group of third features, and fusing the (i-1)-th group of third features with the i-th group of second features to obtain the i-th group of third features, until the n-th group of third features is obtained;
    inputting the n groups of third features into the detection network to obtain the category information and position information of the target in the image to be detected.
  8. The method according to any one of claims 1 to 6, wherein inputting the n groups of second features into the detection network to obtain the category information and position information of the target in the image to be detected comprises:
    inputting the n groups of second features into a region proposal network to obtain an initial candidate box;
    inputting the initial candidate box into a cascaded detection network, the detection network comprising m cascaded detection sub-networks; performing a region-of-interest pooling operation on the original features for the initial candidate box, and inputting the pooled features into the first-level detection sub-network to obtain the first-level detection box and confidence;
    for the (j-1)-th level detection box, performing the region-of-interest pooling operation on the original features, and inputting the pooled features into the j-th level detection sub-network to obtain the j-th level detection box and confidence, until the m-th level detection box and confidence are obtained as the final result;
    performing non-maximum suppression on the final result to obtain the category information and position information of the target in the image to be detected.
  9. A target detection apparatus, wherein the apparatus comprises:
    a feature extraction module configured to perform feature extraction on an image to be detected to obtain n groups of first features of different scales, where n is an integer greater than 1;
    a feature fusion module configured to input the n groups of first features of different scales into a first feature fusion network, the first feature fusion network comprising n feature fusion layers, wherein in the n-th feature fusion layer, the n-th group of first features is taken as the n-th group of second features;
    the feature fusion module being further configured to, in the (i-1)-th feature fusion layer, obtain the i-th group of second features and the weight parameters corresponding to the i-th group of second features, multiply the i-th group of second features by the weight parameters, and fuse the multiplied features with the (i-1)-th group of first features to obtain the (i-1)-th group of second features, until the first group of second features is obtained;
    a detection module configured to input the n groups of second features into a detection network to obtain category information and position information of a target in the image to be detected.
  10. A computer device, comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 8 when executing the computer program.
  11. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 8.
PCT/CN2020/119710 2020-04-29 2020-09-30 Target detection method, apparatus, computer device and storage medium WO2021218037A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010356470.7 2020-04-29
CN202010356470.7A CN111709415B (zh) 2020-04-29 2020-04-29 Target detection method, apparatus, computer device and storage medium

Publications (1)

Publication Number Publication Date
WO2021218037A1 (zh)

Family

ID=72536888

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/119710 WO2021218037A1 (zh) 2020-04-29 2020-09-30 Target detection method, apparatus, computer device and storage medium

Country Status (2)

Country Link
CN (1) CN111709415B (zh)
WO (1) WO2021218037A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709415B (zh) 2020-04-29 2023-10-27 北京迈格威科技有限公司 Target detection method, apparatus, computer device and storage medium
CN112528782B (zh) 2020-11-30 2024-02-23 北京农业信息技术研究中心 Underwater fish target detection method and apparatus

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3022899A4 (en) * 2013-07-15 2016-08-03 Microsoft Technology Licensing Llc COMPRESSION OF IMAGE SET BASED ON CHARACTERISTICS
CN108875902A (zh) 2017-12-04 2018-11-23 北京旷视科技有限公司 Neural network training method and apparatus, vehicle detection and estimation method and apparatus, and storage medium
CN109934216B (zh) 2017-12-19 2021-05-11 华为技术有限公司 Image processing method and apparatus, and computer-readable storage medium
CN108509978B (zh) 2018-02-28 2022-06-07 中南大学 Multi-class target detection method and model based on CNN multi-level feature fusion
CN110348453B (zh) 2018-04-04 2022-10-04 中国科学院上海高等研究院 Cascade-based object detection method and system, storage medium and terminal
CN109241902B (zh) 2018-08-30 2022-05-10 北京航空航天大学 Landslide detection method based on multi-scale feature fusion
CN109671070B (zh) 2018-12-16 2021-02-09 华中科技大学 Target detection method based on feature weighting and feature correlation fusion
CN109816671B (zh) 2019-01-31 2021-09-24 深兰科技(上海)有限公司 Target detection method, apparatus and storage medium
CN110335270B (zh) 2019-07-09 2022-09-13 华北电力大学(保定) Transmission line defect detection method based on hierarchical regional feature fusion learning
CN110517224A (zh) 2019-07-12 2019-11-29 上海大学 Photovoltaic panel defect detection method based on a deep neural network
CN110752028A (zh) 2019-10-21 2020-02-04 腾讯科技(深圳)有限公司 Image processing method, apparatus, device and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108510012A (zh) * 2018-05-04 2018-09-07 四川大学 Fast target detection method based on multi-scale feature maps
US20190377930A1 (en) * 2018-06-11 2019-12-12 Zkteco Usa, Llc Method and System for Face Recognition Via Deep Learning
CN109034210A (zh) * 2018-07-04 2018-12-18 国家新闻出版广电总局广播科学研究院 Target detection method based on hyper-feature fusion and multi-scale pyramid network
CN109255352A (zh) * 2018-09-07 2019-01-22 北京旷视科技有限公司 Target detection method, apparatus and system
CN109978863A (zh) * 2019-03-27 2019-07-05 北京青燕祥云科技有限公司 Target detection method based on X-ray images and computer device
CN110647834A (zh) * 2019-09-18 2020-01-03 北京市商汤科技开发有限公司 Face and hand association detection method and apparatus, electronic device and storage medium
CN111080567A (zh) * 2019-12-12 2020-04-28 长沙理工大学 Remote sensing image fusion method and system based on multi-scale dynamic convolutional neural network
CN111709415A (zh) * 2020-04-29 2020-09-25 北京迈格威科技有限公司 Target detection method, apparatus, computer device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115496976A (zh) * 2022-08-29 2022-12-20 锋睿领创(珠海)科技有限公司 Visual processing method, apparatus, device and medium for multi-source heterogeneous data fusion
CN115496976B (zh) * 2022-08-29 2023-08-11 锋睿领创(珠海)科技有限公司 Visual processing method, apparatus, device and medium for multi-source heterogeneous data fusion

Also Published As

Publication number Publication date
CN111709415A (zh) 2020-09-25
CN111709415B (zh) 2023-10-27

Similar Documents

Publication Publication Date Title
US10586350B2 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
US10733431B2 (en) Systems and methods for optimizing pose estimation
US10796452B2 (en) Optimizations for structure mapping and up-sampling
WO2021218037A1 (zh) Target detection method, apparatus, computer device and storage medium
CN111670457B (zh) Optimization of dynamic object instance detection, segmentation and structure mapping
EP3493106B1 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
US20200257902A1 (en) Extraction of spatial-temporal feature representation
CN114549913B (zh) Semantic segmentation method and apparatus, computer device and storage medium
EP3493104A1 (en) Optimizations for dynamic object instance detection, segmentation, and structure mapping
US20220004849A1 (en) Image processing neural networks with dynamic filter activation
US20230051237A1 (en) Determining material properties based on machine learning models
WO2021253938A1 (zh) Neural network training method, video recognition method and apparatus
WO2023197857A1 (zh) Model partitioning method and related device
CN114638823B (zh) Whole-slide image classification method and apparatus based on an attention-mechanism sequence model
CN116977265A (zh) Training method and apparatus for a defect detection model, computer device and storage medium
CN113298248B (zh) Processing method and apparatus for a neural network model, and electronic device
CN117593619B (zh) Image processing method and apparatus, electronic device and storage medium
WO2023236900A1 (zh) Item recommendation method and related device
CN116894802B (zh) Image enhancement method and apparatus, computer device and storage medium
WO2024061123A1 (zh) Image processing method and related device
WO2023231796A1 (zh) Visual task processing method and related device
Li et al. IPE transformer for depth completion with input-aware positional embeddings
CN118053161A (zh) Card face information recognition method, apparatus, device, storage medium and program product
JP2024071354A (ja) Electronic device for processing an image and operating method thereof
CN117541868A (zh) Training method for an image classification model, image classification method, model, computer device and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20933047

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20933047

Country of ref document: EP

Kind code of ref document: A1