WO2021129691A1 - A target detection method and corresponding device - Google Patents

A target detection method and corresponding device (Download PDF)

Info

Publication number
WO2021129691A1
WO2021129691A1 (PCT/CN2020/138740)
Authority
WO
WIPO (PCT)
Prior art keywords
type
image features
image
convolution
detection
Prior art date
Application number
PCT/CN2020/138740
Other languages
English (en)
French (fr)
Inventor
谢伟
黄倩倩
连春燕
胡荣东
Original Assignee
长沙智能驾驶研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 长沙智能驾驶研究院有限公司
Publication of WO2021129691A1 publication Critical patent/WO2021129691A1/zh


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Definitions

  • This application relates to the field of image processing technology, and in particular to a target detection method and corresponding device.
  • When capturing images, for example in a driving scene, targets exhibit the characteristic of appearing large when near and small when far away, so small, distant targets place higher demands on the detection algorithm. Driving scenes usually also impose strict real-time requirements, so high-resolution images are typically downsampled before being fed to the neural network, and distant small targets become even smaller in the compressed image. At the same time, detection of distant targets plays an important role in later prediction. One-stage algorithms such as YOLO and SSD remain of limited effectiveness on small targets, while two-stage networks take too long when applied to driving scenarios.
  • RefineDet extracts coarse target boxes through its ARM module and then refines the candidate boxes through its ODM module, which improves small-target detection to a certain extent. The RefineDet algorithm is divided into four stages, and the anchor sizes differ per stage: lower-level anchors are smaller and higher-level anchors are larger. Low-level features are rich in location information but weak in semantic information, while high-level features are weak in location information but rich in semantics. RefineDet's design predicts small targets from the low-level features, with the higher levels mainly responsible for other targets; however, the low-level features have a small receptive field, which limits detection accuracy. In addition, RefineDet selects positive and negative samples with a threshold of 0.5 during training and uses a cross-entropy cost function, which can cause the classification information and the location information produced at inference time to be mismatched.
  • To address the technical problems in the prior art, this application proposes a target detection method that includes: acquiring first-type image features of an original image at multiple levels; performing at least hole (dilated) convolution processing on the first-type image features of the different levels, thereby generating corresponding second-type image features at different levels, where the expansion (dilation) rate of the hole convolution processing differs between levels; and performing detection based on the second-type image features, or based on the first-type image features and the second-type image features, and determining the target frame through a regression operation.
  • the resolution of the multiple levels decreases as the level increases; the expansion rate of the hole convolution processing decreases as the level increases.
  • the multiple levels include a convolutional layer and a global pooling layer, and the global pooling layer has the highest level.
  • performing at least hole convolution processing on the first-type image features of the different levels to generate the corresponding second-type image features includes: performing dimensionality-reduction convolution on the first-type image features to obtain a dimensionality-reduction result; performing hole convolution on the dimensionality-reduction result to obtain a hole convolution result; performing a first decomposed convolution and a second decomposed convolution on the hole convolution result to obtain a first decomposed convolution result and a second decomposed convolution result; concatenating the first and second decomposed convolution results to obtain a connection (concatenation) result; and determining the second-type image features of the different levels at least based on the connection result.
  • generating the second-type image features of the different levels further includes: performing residual processing at least on the dimensionality-reduction result and the connection result to obtain the second-type image features of the different levels.
  • the method also includes fusing the second-type image features of the different levels; detection is then performed based on the fused second-type image features, or based on the first-type image features and the fused second-type image features, and the final target frame is determined through a regression operation.
  • the feature fusion of the second-type image features of different levels includes: for each level, up-sampling the second-type image features of all levels above the current level and fusing them with the second-type image features of the current level.
  • the up-sampling processing includes deconvolution processing or bilinear interpolation processing.
  • when training the network model, detection is performed based on the second-type image features, or based on the first-type and second-type image features, and the convolution kernels and labeled boxes used by the network model are determined through a regression operation; this operation includes: obtaining the value of the intersection-over-union (IOU) cost function at least from the IOU score between the candidate box and the actual box, and computing the value of the loss function between the candidate box and the actual box at least from the value of the IOU cost function.
  • the operation further includes: computing the value of the loss function between the candidate box and the actual box additionally from the value of the confidence classification cost function obtained from the confidence score of the candidate box, and from the value of the coordinate-regression cost function between the candidate box and the actual box.
  • performing detection based on the first-type and second-type image features and determining the convolution kernels and labeled boxes for the network model through a regression operation includes: in a first stage, computing the IOU cost function value and the loss function value between the candidate box and the actual box at least from the original image and the preset initial values of the candidate box and the convolution kernel, and producing an output after a regression operation; in a second stage, computing the IOU cost function value and the loss function value between the candidate box and the actual box at least from the second-type image features and the candidate box and convolution kernel output by the first stage, and producing an output after a regression operation; and fitting at least the outputs of the first and second stages to obtain a total loss function value, with the candidate box and convolution kernel corresponding to the minimum total loss function value taken as the final output.
  • the present application further includes a target detection device, including: an image feature acquisition module configured to acquire first-type image features of the original image at multiple levels; a hole convolution module, coupled to the image feature acquisition module, configured to perform hole convolution processing on the first-type image features of the different levels and correspondingly generate second-type image features of different levels, where the expansion rate of the hole convolution processing differs between levels; and a detection information determination module, coupled to the level fusion module, configured to perform detection based on the second-type image features, or based on the first-type image features and the second-type image features, and to determine the target frame through a regression operation.
  • the device further includes a level fusion module, coupled to the hole convolution module and configured to fuse the second-type image features of different levels; the detection information determination module is then further configured to perform detection based on the fused second-type image features, or based on the first-type image features and the fused second-type image features, and to determine the final target frame through a regression operation.
  • an initial target detection module, coupled to the image feature acquisition module and the detection information determination module, is configured to receive the first-type image features output by the image feature acquisition module, perform detection based on them, and send the detection result to the detection information determination module to optimize its detection process.
  • the present application further includes a computer device comprising a memory and a processor, the memory storing a computer program, where the processor implements the steps of the foregoing method when it executes the computer program.
  • the present application further includes a computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the aforementioned method are implemented.
  • the present application further includes an intelligent driving device, including: a processor, and a memory coupled with the processor; and a sensing unit configured to obtain an original image; wherein the processor is configured to execute the aforementioned method.
  • Fig. 1 is a schematic flowchart of a target detection method according to an embodiment of the present application
  • FIG. 2 is a schematic diagram of a specific flow of step 100 according to an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a specific flow of step 200 according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a specific process of determining a convolution kernel and a label frame in the training process according to an embodiment of the present application
  • Fig. 5 is a schematic structural diagram of a network model for target detection according to an embodiment of the present application.
  • Fig. 6 is a schematic diagram of a hollow convolutional layer of a network model according to an embodiment of the present application.
  • Fig. 7 is a schematic diagram of a feature fusion layer of a network model according to an embodiment of the present application.
  • Fig. 8 is a schematic diagram of a target detection unit of a network model according to an embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a target detection device according to an embodiment of the present application.
  • Fig. 10 is an internal structure diagram of a computer device according to an embodiment of the present application.
  • Fig. 1 is a schematic flowchart of a target detection method according to an embodiment of the present application. As shown in Fig. 1, the method includes steps 100 to 400:
  • Step 100 Acquire image features of the first type at multiple levels of the original image.
  • the original image refers to the image that needs to be detected.
  • the original image may be an image captured by a camera, for example, an image captured by a vehicle-mounted camera while the vehicle is in motion.
  • the first-type image features at multiple levels are image features at several distinct levels, obtained by feature extraction and convolution on the input original image.
  • the higher the level, the lower the resolution; the lowest level has the highest resolution, which is more conducive to recognizing small targets.
  • the highest layer may be the global pooling layer.
  • Step 200 Perform at least hole convolution processing on the image features of the first type at different levels, respectively, and correspondingly generate the image features of the second type at different levels.
  • Of course, besides the hole convolution processing in this step, other processing can also be performed to improve the result, as introduced in subsequent embodiments.
  • The "hole" refers to sampling the input at a frequency set by the expansion (dilation) rate: when the rate is 1, no information is skipped and the operation is standard convolution; when the rate is greater than 1, for example 2, a sample is taken every rate − 1 = 1 pixel, i.e. the input is sampled with gaps.
  • the expansion rate of the hole convolution processing is different for different levels.
  • the resolution of the first-type image features of the multiple levels decreases as the level increases, and the expansion rate of the hole convolution processing also decreases as the level increases.
  • the larger the expansion rate of the low-level hole convolution, the more the receptive field for low-level targets is enlarged, which enhances the target's context information and improves the detection effect.
  • the receptive field of the higher levels already grows as the level increases, so a relatively small expansion rate can be set for them.
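  • As an illustrative sketch only, the level-dependent expansion rates described above could be realized in PyTorch roughly as follows; the concrete rate values (7, 5, 3, 2, 1 for five levels, taken from the later embodiment), the channel count of 512 and the per-level feature sizes are assumptions used for illustration.

```python
import torch
import torch.nn as nn

# One dilated ("hole") 3x3 convolution per feature level. Lower, higher-resolution
# levels get larger dilation rates, enlarging the receptive field for small targets.
LEVEL_DILATION_RATES = [7, 5, 3, 2, 1]  # level 1 (highest resolution) ... level 5

def make_hole_convs(channels: int = 512) -> nn.ModuleList:
    return nn.ModuleList([
        nn.Conv2d(channels, channels, kernel_size=3,
                  padding=rate, dilation=rate, bias=False)
        for rate in LEVEL_DILATION_RATES
    ])

if __name__ == "__main__":
    convs = make_hole_convs()
    sizes = [(96, 56), (48, 28), (24, 14), (12, 7), (1, 1)]   # assumed per-level sizes
    feats = [torch.randn(1, 512, h, w) for h, w in sizes]
    second_type = [conv(f) for conv, f in zip(convs, feats)]
    print([tuple(t.shape) for t in second_type])              # spatial sizes preserved
```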
  • Optionally, in step 300, the second-type image features of different levels are fused.
  • the fusion can proceed layer by layer from the low-resolution levels downward, that is, starting from the highest level and fusing downward level by level.
  • feature fusion may be performed according to preset rules first to obtain the image features of the second type after fusion.
  • the preset rule may be to fuse, in ascending order of resolution (from smallest to largest), the second-type image features of the current level with the up-sampled second-type image features of all levels above it.
  • the highest level such as the global pooling layer, since there are no other levels above it, the second-type image features of the highest level can be directly used for detection.
  • the fusion level corresponding to the pre-fusion level is generated after the fusion, and the number of levels remains unchanged.
  • the obtained fusion feature can be considered as the second type of image feature fused with multi-level features, that is, the fusion feature is richer, which can improve the accuracy of target detection.
  • step 300 is not necessary, and it can jump directly to step 400 for detection.
  • step 400 performs detection based on the second type of image features, or based on the first type of image features and the second type of image features, and determines the target frame through a regression operation.
  • Step 400 Perform detection based on the fused second-type image features, or based on the first-type image features and the fused second-type image features, and determine the final target frame through a regression operation.
  • In the actual detection process the actual frame (ground-truth box) is unknown; using the trained network model, the target frame is determined among multiple candidate frames as the detection result.
  • This detection can be a single-stage detection of the image features of the second type, or it can be a preliminary detection of the image features of the first type first, and then the final detection of the image features of the second type based on the results of the preliminary detection.
  • the preliminary detection result can include the IOU scores of the candidate frames, and this result can be used in the final detection as the criterion for selecting positive and negative samples among the candidate frames.
  • In the final detection stage, the IOU scores of the candidate frames can also be used for ranking, so as to eliminate redundant candidate frames (see the sketch below).
  • the pre-trained convolution kernel and the labeled box in the network model are used for detection, and the labeled box is regressed according to the actual situation and the target box is determined as the detection result.
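  • The ranking idea in the preceding paragraphs can be sketched as follows; combining the confidence score with the predicted IOU score by multiplication, and the use of torchvision's standard NMS, are assumptions made for illustration rather than the patent's exact procedure.

```python
import torch
from torchvision.ops import nms

def select_boxes(boxes: torch.Tensor, conf: torch.Tensor, iou_score: torch.Tensor,
                 iou_threshold: float = 0.5) -> torch.Tensor:
    """Rank candidate boxes by confidence * predicted IOU score (assumed combination)
    and suppress redundant, heavily overlapping boxes with standard NMS.

    boxes: (N, 4) as (x1, y1, x2, y2); returns the indices of the kept boxes."""
    ranking = conf * iou_score
    return nms(boxes, ranking, iou_threshold)
```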
  • step 100 obtains the image features of the first type at multiple levels of the original image, including steps 110 to 120:
  • Step 110 Obtain original image features according to the original image.
  • Step 120 Perform convolution processing and/or global pooling processing on the original image features to obtain the first type of image features at different levels of the original image.
  • the original image features are the image features output when the original image is fed into a feature-extraction base network.
  • Xception39 may be used as the basic network to perform feature extraction on the original image to obtain the original image feature.
  • the Xception39 grouping convolution structure is used to accelerate the image feature extraction to ensure the real-time feature extraction of the original image.
  • step 110 may also be implemented through a network structure capable of feature extraction, such as VGG (Visual Geometry Group Network), ResNet, or SEnet.
  • multiple convolution processing and/or global pooling processing may be used.
  • it can be based on Xception39, adding, from the lowest level to the highest level (taking 5 levels as an example), 4 additional convolutional layers (ExtrConv1, ExtrConv2, ExtrConv3, ExtrConv4) and 1 global pooling layer (Glob Pooling).
  • the 4 additional convolutional layers use a convolution operation with a step size of 2, and the feature size is gradually reduced; the global semantic information of the image can be effectively obtained through the global pooling layer, thereby enhancing the context information of target detection.
  • the global pooling layer may not be added.
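  • A minimal sketch of how the multi-level first-type features could be produced, assuming a generic backbone in place of Xception39 and the channel widths of the later embodiment (1024 backbone channels, 512 per extra level); the exact layer configuration is an assumption.

```python
import torch
import torch.nn as nn

class MultiLevelFeatures(nn.Module):
    """Four extra stride-2 convolutions on top of a backbone feature map,
    plus a global-pooling level, giving five levels of first-type features."""
    def __init__(self, backbone: nn.Module, in_channels: int = 1024, out_channels: int = 512):
        super().__init__()
        self.backbone = backbone                      # e.g. an Xception-like feature extractor
        self.extra_convs = nn.ModuleList([
            nn.Conv2d(in_channels if i == 0 else out_channels,
                      out_channels, kernel_size=3, stride=2, padding=1)
            for i in range(4)                         # ExtrConv1 .. ExtrConv4
        ])
        self.global_pool = nn.AdaptiveAvgPool2d(1)    # Glob Pooling level

    def forward(self, image: torch.Tensor):
        x = self.backbone(image)                      # original image features
        levels = []
        for conv in self.extra_convs:
            x = torch.relu(conv(x))                   # resolution halves at each level
            levels.append(x)
        levels.append(self.global_pool(levels[-1]))   # highest level: 1x1 global context
        return levels
```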
  • step 200 performs hole convolution processing on the first-type image features of different levels respectively, and correspondingly generates the second-type image features of different levels, wherein the expansion rate of the hole convolution processing is different for different levels.
  • step 210 dimensionality reduction convolution processing is performed on the image features of the first type to obtain a dimensionality reduction feature processing result.
  • the dimensionality-reduction convolution with a preset stride uses a small convolution kernel (for example, 1*1) to reduce the dimensionality of the first-type image features and the amount of computation, yielding the dimensionality-reduction result.
  • the value of the preset stride can be 2 or another value.
  • Step 220 Perform hole convolution processing on the dimensionality reduction feature processing result to obtain a hole convolution processing result.
  • the obtained results are respectively subjected to hole convolution processing to increase the receptive field.
  • when the hole convolution processing is applied to the dimensionality-reduction results, the expansion rate differs for the image features of different levels; according to one embodiment, the lower the level, i.e. the higher the resolution, the larger the expansion rate.
  • Step 230 Perform the first decomposition convolution processing and the second decomposition convolution processing on the hole convolution processing results, respectively, to obtain the first decomposition convolution processing result and the second decomposition convolution processing result.
  • the process of convolving with a kernel of size a*b can be decomposed into two groups of kernels, a*1 with 1*a and 1*b with b*1.
  • Specifically, kernels of size a*1 and 1*a perform the first decomposed convolution on the hole convolution result to obtain the first decomposed convolution result, and kernels of size 1*b and b*1 perform the second decomposed convolution on the hole convolution result to obtain the second decomposed convolution result.
  • both a and b are integers greater than 1; compared with direct a*a or b*b convolution, this arrangement reduces the amount of computation severalfold.
  • running the first and second decomposed convolutions as two parallel paths improves the feature-fusion capability.
  • Step 240 Connect the first decomposition and convolution processing result and the second decomposition and convolution processing result to obtain the connection processing result.
  • concat processing is performed on the first decomposition and convolution processing result and the second decomposition and convolution processing result.
  • Step 250 Determine the second-type image features of the different levels at least based on the connection processing result.
  • residual processing can be performed on the dimensionality-reduction result and the connection result to determine the second-type image features, which helps prevent the network from failing to converge.
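  • Steps 210 to 250 can be sketched as a single module as follows; this is an assumed PyTorch rendering in which the dilated kernel size of 3 and the decomposed kernel size of 5 follow the later embodiment, while the channel arithmetic and activation placement are illustrative choices.

```python
import torch
import torch.nn as nn

class MDCBlock(nn.Module):
    """1x1 reduction -> dilated 3x3 -> two parallel decomposed (kx1 / 1xk) branches
    -> concatenation -> residual connection with the reduction result."""
    def __init__(self, in_ch: int, mid_ch: int, dilation: int, k: int = 5):
        super().__init__()
        half = mid_ch // 2
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1)                 # step 210
        self.dilated = nn.Conv2d(mid_ch, mid_ch, kernel_size=3,
                                 padding=dilation, dilation=dilation)          # step 220
        self.branch_a = nn.Sequential(                                         # step 230: k x 1 then 1 x k
            nn.Conv2d(mid_ch, half, (k, 1), padding=(k // 2, 0)),
            nn.Conv2d(half, half, (1, k), padding=(0, k // 2)))
        self.branch_b = nn.Sequential(                                         # step 230: 1 x k then k x 1
            nn.Conv2d(mid_ch, half, (1, k), padding=(0, k // 2)),
            nn.Conv2d(half, half, (k, 1), padding=(k // 2, 0)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.relu(self.reduce(x))
        d = self.relu(self.dilated(r))
        cat = torch.cat([self.branch_a(d), self.branch_b(d)], dim=1)           # step 240: concat
        return self.relu(cat + r)                                              # step 250: residual
```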
  • fusing the second-type image features at different levels may include:
  • the second-type image features of the upper level adjacent to the current level can be received, or the fused second-type image features of the upper level can be received.
  • the received image features of the second type are up-sampled, so that the matrix dimension of the high-level features after up-sampling is the same as the matrix dimension of the second-type image features of the current level.
  • the up-sampling feature and the second-type image feature of the current level are feature-fused, so that the information of the fused image feature obtained can be richer, and the accuracy of the target detection result can be improved.
  • the up-sampling processing may include: deconvolution processing (for the fusion between convolutional layers) or bilinear interpolation processing (for the fusion of the global pooling layer and the convolutional layer).
  • the deconvolution process can be considered as the inverse process of the convolution process, so as to realize the image upsampling process.
  • for the globally pooled features, bilinear interpolation can realize an arbitrary change of size, so it can be used for their up-sampling.
  • by adopting different up-sampling strategies for different types of features, this embodiment preserves the characteristics of the up-sampled features as much as possible, which facilitates feature fusion.
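  • A sketch of the fusion rule (up-sample every higher level to the current level's size, then fuse), where element-wise addition as the fusion operation and nearest-neighbour resizing standing in for the learned deconvolution are assumptions; only the bilinear handling of the 1x1 global-pooling level follows the text directly.

```python
from typing import List

import torch
import torch.nn.functional as F

def fuse_levels(second_type_feats: List[torch.Tensor]) -> List[torch.Tensor]:
    """second_type_feats[0] is the highest-resolution level,
    second_type_feats[-1] the 1x1 global-pooling level."""
    fused = []
    for idx, cur in enumerate(second_type_feats):
        out = cur
        for hi in second_type_feats[idx + 1:]:               # every level above the current one
            if hi.shape[-2:] == (1, 1):
                # global-pooling level: bilinear interpolation to the current size
                up = F.interpolate(hi, size=cur.shape[-2:], mode="bilinear", align_corners=False)
            else:
                # between convolutional levels the model would use a learned deconvolution;
                # plain resizing stands in for it in this sketch
                up = F.interpolate(hi, size=cur.shape[-2:], mode="nearest")
            out = out + up                                    # fuse (assumed: element-wise sum)
        fused.append(out)
    return fused
```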
  • Fig. 4 is a schematic diagram of a specific process of determining a convolution kernel and a label frame in the training process according to an embodiment of the present application.
  • During training the actual frame is known, and the loss function between multiple candidate frames and the actual frame can be used to determine, through regression operations, the convolution kernels and labeled boxes for the network model.
  • Step 410: Calculate the value of the intersection-over-union cost function based at least on the difference between the Intersection-over-Union (IOU) score of the candidate box and that of the actual box.
  • the IOU-score cost function L_iou is as follows:

    L_iou(t) = (1/N_pos) * Σ smooth_L1( ÎOU(t) − IOU* ),  summed over every level k, detection target j and candidate frame i

  where t denotes the detection input, ÎOU denotes the predicted IOU score, and IOU* denotes the IOU label, i.e. the IOU score with respect to the actual frame, defined as

    IOU* = |O(x, anchors∈{x,y,w,h})| / |U(x, anchors∈{x,y,w,h})|

  Here smooth_L1 denotes the L1 Smooth Loss cost function; O(x, anchors∈{x,y,w,h}) denotes the intersection of the detection input and the anchors and U(x, anchors∈{x,y,w,h}) their union; N_pos is the number of positive candidate-frame samples; cx, cy are the coordinates of the candidate-frame center and w, h its width and height; j indexes the j-th detection target (j ≥ 1); i indexes the i-th of the candidate frames for the j-th target (i ≥ 1); and k is the index of the image-feature level (k ≥ 1).
  • In existing applications, IOU is used as the criterion for separating positive and negative samples in the candidate-frame classification operation described later: an IOU value can be set as a threshold, candidate frames above the threshold are taken as positive samples and the rest as negative samples.
  • Another existing use of IOU is as the filtering criterion in non-maximum suppression (NMS) for removing redundant candidate frames: when several candidate frames have a high IOU with one another they overlap heavily, so only one of them needs to be kept. Before this application, however, the IOU score had not been used as a basis for ranking candidate frames, nor had the value of an IOU cost function been used in computing the loss between candidate frames and the actual frame; using it reduces the probability that ranking solely by the classification confidence score keeps a sub-optimal box while discarding the optimal one.
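  • To make the IOU label and the smooth-L1 form of the IOU cost concrete, a small sketch follows; the publication text does not reproduce the exact summation and normalization of the formula, so averaging over the positive samples is an assumed reading.

```python
import torch
import torch.nn.functional as F

def box_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """IOU label between paired boxes a and b given as (x1, y1, x2, y2), shape (N, 4)."""
    lt = torch.max(a[:, :2], b[:, :2])                 # intersection top-left
    rb = torch.min(a[:, 2:], b[:, 2:])                 # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]                        # |O(x, anchors)|
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    union = area_a + area_b - inter                    # |U(x, anchors)|
    return inter / union.clamp(min=1e-6)

def iou_cost(pred_iou: torch.Tensor, candidate: torch.Tensor,
             actual: torch.Tensor, n_pos: int) -> torch.Tensor:
    """Smooth-L1 cost between the predicted IOU score and the IOU label,
    normalized by the number of positive samples (assumed normalization)."""
    iou_label = box_iou(candidate, actual)
    return F.smooth_l1_loss(pred_iou, iou_label, reduction="sum") / max(n_pos, 1)
```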
  • Optionally, in Step 420, the value of the coordinate cost function is calculated at least from the coordinate difference between the candidate frame and the actual frame.
  • In one embodiment, the initial values of the candidate frame and the convolution kernel can be set manually, and a rectangular box that exactly matches the target is generated as the actual frame (ground-truth box). Let the candidate box have center point O1(x1, y1), length H1 and width W1, and the actual box have center point O2(x2, y2), length H2 and width W2. The regression parameters of the candidate box can then be calculated as

    Δx = x1 − x2,  Δy = y1 − y2,  ΔH = H1 − H2,  ΔW = W1 − W2        (3)

  that is, the regression parameters of formula (3) are the differences Δx and Δy of the x and y coordinates of the center points O1 and O2, the difference ΔH between the lengths of the candidate frame and the actual frame, and the difference ΔW between their widths.
  • the coordinate cost function L_loc is defined as the smooth_L1 (L1 Smooth Loss) cost between the candidate-frame coordinates and the actual-frame coordinates, accumulated over the positive candidate-frame samples.
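  • The regression parameters of formula (3) reduce to plain differences between the candidate box and the actual box; a direct reading is sketched below, where the sign convention (candidate minus actual) is an assumption. The coordinate cost L_loc would then be the smooth-L1 loss of these differences over the positive samples.

```python
import torch

def regression_targets(candidate: torch.Tensor, actual: torch.Tensor) -> torch.Tensor:
    """Boxes given as (cx, cy, H, W), shape (N, 4).
    Returns (dx, dy, dH, dW) per formula (3): plain coordinate and size differences."""
    dx = candidate[:, 0] - actual[:, 0]
    dy = candidate[:, 1] - actual[:, 1]
    dH = candidate[:, 2] - actual[:, 2]
    dW = candidate[:, 3] - actual[:, 3]
    return torch.stack([dx, dy, dH, dW], dim=1)
```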
  • Optionally, in Step 430, the value of the confidence classification cost function is calculated based at least on the confidence scores of the positive and negative samples of the candidate frames; the confidence cost function L_conf is computed from these confidence scores and the corresponding classification labels.
  • Step 440: Calculate the loss function value between the candidate frame and the actual frame based at least on the intersection-over-union cost function, and optionally also on the coordinate cost function and the confidence cost function. The loss function LOSS can be expressed as

    LOSS = (L_conf + α·L_loc + β·L_iou) / N_pos

  where N_pos is the number of positive candidate-frame samples, and α and β are the weight coefficients of the coordinate cost function and the intersection-over-union cost function, determining the influence of the three cost terms; according to one embodiment α = β = 1, although other values can also be set.
  • According to different embodiments, the above method can be applied in the two stages of network-model training.
  • In the first stage, the original image can be used as the input t, the initial values of the candidate frame and of the convolution kernel can be set manually, and the IOU threshold dividing positive and negative samples can also be set manually; then, through multiple regression operations, the candidate frame and convolution kernel that minimize the loss function LOSS_1 are obtained as the output of the first stage.
  • In the second stage, the fused second-type image features can be used as the input t, and the candidate frame and convolution kernel output by the first stage can be used as the initial values of the candidate frame and convolution kernel for the second stage. After multiple regressions, the output of the second stage can be taken as the final candidate frame and convolution kernel.
  • Alternatively, the outputs of the first and second stages can be fitted jointly as

    LOSS = LOSS_1 + λ·LOSS_2        (7)

  and the result that minimizes this total loss is taken as the final candidate frame and convolution kernel, where λ is a coefficient that balances the detection results of the first and second stages; according to one embodiment λ = 1, although other values can also be used.
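  • Putting the three cost terms and the two training stages together, one possible reading of the per-stage loss and of formula (7) is sketched below; the weights α = β = 1 and λ = 1 follow the embodiment, while dividing the per-stage sum by N_pos is an assumption about where the normalization sits.

```python
import torch

def stage_loss(l_conf: torch.Tensor, l_loc: torch.Tensor, l_iou: torch.Tensor,
               n_pos: int, alpha: float = 1.0, beta: float = 1.0) -> torch.Tensor:
    """Per-stage loss: confidence cost plus weighted coordinate and IOU costs,
    normalized by the number of positive samples."""
    return (l_conf + alpha * l_loc + beta * l_iou) / max(n_pos, 1)

def total_loss(loss_stage1: torch.Tensor, loss_stage2: torch.Tensor,
               lam: float = 1.0) -> torch.Tensor:
    """Formula (7): LOSS = LOSS_1 + lambda * LOSS_2."""
    return loss_stage1 + lam * loss_stage2
```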
  • the first stage of detection may not be performed, and the second type of image feature may be directly detected.
  • In the method disclosed in this application, the input original image is divided into multiple levels according to resolution, and hole convolution with a different expansion rate is performed for each level.
  • the higher the resolution of a (lower) level, the larger the expansion rate of its hole convolution, which is more conducive to capturing the image features of small targets.
  • In addition, during the network-model training stage the method disclosed in this application uses the IOU score as a determining factor in the calculation of the loss function between the candidate box and the actual box, which avoids the possibility that traditional ranking based only on the confidence score omits the optimal solution.
  • This embodiment mainly performs target detection on the input image through the trained network model.
  • Fig. 5 is a schematic diagram of the structure of the network model used in this embodiment.
  • the network model mainly includes a first target detection structure (first detection), a feature extraction structure, and a second target detection structure (second detection).
  • the original input image size is 768x448x3, where 768x448 is the resolution size, and 3 is the number of channels (the graphic size of other images has the same meaning).
  • the size of the original image features extracted through the basic network Xception39 is 192x112x1024.
  • This embodiment applies 4 convolution operations and 1 global pooling operation to the original image features; the image features obtained through additional convolution layers 1, 2, 3 and 4 have sizes 96x56x512, 48x28x512, 24x14x512 and 12x7x512 respectively, and the image feature obtained by the global pooling layer has size 1x1x512.
  • an image generally includes a single channel and 3 channels, and the channels of the extracted features at this time far exceed 3, for example, the number of channels of the original image feature is 1024, and the number of channels of the image features of different resolutions is 512. Strictly speaking, the feature at this time can no longer be called an image, so it is called an image feature.
  • the fused feature fusion layer is a layer that merges the hole convolution layer of this level with all the hole convolution layers above this level that have been upsampled.
  • the feature fusion layer 1 is a layer that merges the features of the hole convolution layer 1 with the up-sampled hole convolution layers 2, 3, 4, and 5 together.
  • the hole convolution layer 5 since there is no other hole convolution layer above it, it can be directly provided to the second target detection unit.
  • To enhance the receptive field, this embodiment proposes the Multi Dilate Convolution (MDC) module: a hole convolution layer composed of several convolution layers plus a hole convolution whose expansion rate differs per level.
  • specifically, it can be composed of a 1x1 convolutional layer, a 3x3 convolutional layer and hole convolutional layers with different expansion rates.
  • FIG. 6 it is a schematic diagram of the hole convolution layer.
  • the hole convolution layer first uses a 1x1 convolution layer to reduce the dimensionality of the image features, and connects the processing results to the hole convolution layer with different expansion rates.
  • a hollow convolutional layer with a larger expansion rate can be used, and as the level increases, the expansion rate gradually decreases.
  • for example, for additional convolutional layer 1 the expansion rate of the corresponding hole convolutional layer 1 can be set to 7; for additional convolutional layer 2, additional convolutional layer 3, additional convolutional layer 4 and the global pooling layer, the corresponding expansion rates can be set to 5, 3, 2 and 1 in sequence.
  • To further enlarge the receptive field, a one-dimensional decomposed convolutional layer, for example composed of 1x5 and 5x1 kernels, can be included after the hole convolutional layer.
  • the decomposed convolution performs its operations along two dimensions (for example horizontal and vertical, i.e. the two paths in the figure), which greatly reduces the amount of computation.
  • the decomposed convolution branches are joined through the connection processing layer, i.e. the concat layer.
  • the 1x5, 5x1 combination here is just an example, and convolutional layers of other dimensions can also be used.
  • FIG. 7 is a schematic diagram of the feature layer fusion operation according to an embodiment of the present application.
  • the fused feature-fusion layer merges the hole convolution layer of the current level with all up-sampled hole convolution layers above it.
  • the up-sampling makes the matrix dimensions of the higher-level image features the same as those of the lower-level image features.
  • the up-sampling method corresponding to the hole convolution layer 5 is bilinear interpolation processing
  • the up-sampling method corresponding to the hole convolution layer 4, 3, 2, and 1 is deconvolution processing.
  • Fig. 8 is a schematic diagram of the target detection unit according to an embodiment of the present application.
  • when the unit is a first target detection unit used for preliminary detection, the corresponding input is a first-type image feature; when it is a second target detection unit used for final detection, the corresponding input is a second-type image feature.
  • the output of the target detection unit includes the confidence score branch, the position parameter branch, and the intersection ratio score branch, and the target frame as the detection result can be determined based on at least the output of these three branches.
  • the first (preliminary) detection result that each first target detection unit in the first detection structure obtains for its image feature can be provided to the image feature of the corresponding level for use in the second (final) target detection.
  • for example, the IOU score in the preliminary detection result can be used in the final detection as the criterion for distinguishing the positive and negative samples among the candidate frames.
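  • The three-branch output of a target detection unit (confidence scores, location parameters, IOU scores) can be sketched as below; the number of anchors per location, the class count and the 3x3 head convolutions are assumptions for illustration. In the two-stage configuration, the IOU branch of a first (preliminary) detection unit would supply the positive/negative criterion for the corresponding level, and the IOU scores can supplement confidence-only ranking when removing redundant candidate boxes.

```python
import torch
import torch.nn as nn

class DetectionHead(nn.Module):
    """One target detection unit with confidence, location and IOU-score branches."""
    def __init__(self, in_ch: int, num_anchors: int = 3, num_classes: int = 2):
        super().__init__()
        self.conf = nn.Conv2d(in_ch, num_anchors * num_classes, 3, padding=1)  # confidence scores
        self.loc = nn.Conv2d(in_ch, num_anchors * 4, 3, padding=1)             # (dx, dy, dH, dW)
        self.iou = nn.Conv2d(in_ch, num_anchors, 3, padding=1)                 # IOU score per anchor

    def forward(self, feat: torch.Tensor):
        return self.conf(feat), self.loc(feat), torch.sigmoid(self.iou(feat))
```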
  • the training process of the aforementioned network model is explained.
  • the training of the network model is mainly the training of the parameters of the first detection structure and the second detection structure in the network model and the MDC module.
  • the network model can be trained using a sample image with a label.
  • the label includes the ground truth and the corresponding target classification information and the intersection score.
  • first, the image features of the sample image at the different levels are obtained through the feature-extraction structure of the network model.
  • target detection is then performed on the image features of the different levels of the sample image, and the parameters of the target detection units are optimized according to the output of each detection unit and the corresponding label data, yielding the trained network model. It can be understood that the image processing in the training process is the same as the process described in the previous embodiments of the present application and is not repeated here.
  • Fig. 9 shows a device for training a target detection network model according to an embodiment of the present application.
  • the target detection device includes the following modules:
  • the image feature acquiring module 10 is configured to acquire the image features of the first type at multiple levels of the original image
  • the hole convolution module 20, coupled to the image feature acquisition module 10, is configured to perform hole convolution processing on the first-type image features of different levels and correspondingly generate the second-type image features of different levels, where the expansion rate of the hole convolution processing differs between levels;
  • the level fusion module 30 is coupled to the hole convolution module 20, and is configured to fuse the second-type image features of different levels;
  • the detection information determination module 40, coupled to the level fusion module 30, is configured to perform detection based on the second-type image features, or based on the first-type image features and the second-type image features, and to determine the final convolution kernel and candidate box through a regression operation.
  • the device further includes:
  • the initial target detection module 50, coupled to the image feature acquisition module 10 and the detection information determination module 40, is configured to receive the first-type image features output by the image feature acquisition module, perform detection on them, and send the detection result to the detection information determination module to optimize its detection process.
  • Each module in the above-mentioned target detection device can be implemented in whole or in part by software, hardware, and a combination thereof.
  • the above-mentioned modules may be embedded in the form of hardware or independent of the processor in the computer equipment, or may be stored in the memory of the computer equipment in the form of software, so that the processor can call and execute the operations corresponding to the above-mentioned modules.
  • a computer device including a memory and a processor, and a computer program is stored in the memory, and the processor implements the processing steps of the target detection method described in the foregoing embodiments when the processor executes the computer program.
  • Fig. 10 shows an internal structure diagram of a computer device in an embodiment.
  • the computer device may specifically be a terminal (or server).
  • the computer equipment includes a processor, a memory, a network interface, an input device, and a display screen connected through a system bus.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium of the computer device stores an operating system and may also store a computer program.
  • the processor can realize the target detection method.
  • a computer program may also be stored in the internal memory, and when the computer program is executed by the processor, the processor can execute the target detection method.
  • the display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen.
  • the input device of the computer equipment can be a touch layer covering the display screen, a button, trackball or touchpad set on the housing of the computer equipment, or an external keyboard, touchpad or mouse.
  • Those skilled in the art will understand that FIG. 10 is only a block diagram of part of the structure related to the solution of the present application and does not limit the computer device to which the solution is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or arrange the components differently.
  • the present application further includes an intelligent driving device, which includes: a processor, and a memory coupled with the processor; and a sensing unit configured to obtain an original image.
  • the processor is configured to execute the aforementioned method for training the target detection network model.
  • In one embodiment, a computer-readable storage medium is provided on which a computer program is stored; when the computer program is executed by a processor, the processing steps of the method for training the target detection network model described in the above embodiments are realized.
  • Non-volatile memory may include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory may include random access memory (RAM) or external cache memory.
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A target detection method and device, a computer device, an intelligent driving device, and a computer-readable storage medium. The method includes: acquiring first-type image features of an original image at multiple levels (100); performing at least hole (dilated) convolution processing on the first-type image features of the different levels, thereby generating corresponding second-type image features at different levels, where the expansion rate of the hole convolution processing differs between levels (200); and performing detection based on the second-type image features, or based on the first-type image features and the second-type image features, and determining the target frame through a regression operation (400).

Description

一种对目标检测方法以及相应装置 技术领域
本申请涉及图像处理技术领域,特别地涉及一种对目标检测方法以及相应装置。
背景技术
在获取图像时,例如捕捉驾驶场景中的图像,目标会呈现“近大远小”的特点。远处较小的目标对检测算法要求较高。通常情况下,驾驶场景对算法的及时性要求比较高,高分辨率的图像通常会压缩分辨率送入到神经网络中,远处的小目标在压缩的图像中会变得更小。同时,远处的目标检测对后期的预测有重要作用。YOLO,SSD等一阶段算法对小目标的效果仍然有限。二阶段网络应用于驾驶场景的耗时较长。RefineDet通过ARM模块提取初略的目标框,然后通过ODM模块精细化候选框,在一定程度上改善小目标的检测。由于RefineDet算法分为四个阶段,每个阶段的anchors大小设置不同。低层级的anchors尺寸较小,高层级的anchors尺寸较大。低层级的位置信息丰富,语义信息较弱,高层级的位置信息较弱,语义信息丰富。RefineDet的设计思路在于通过低层级的特征预测小目标,高层级主要负责其他目标。低层级的特征的感受野较小,会影响检测的精度。同时,RefineDet训练的时候选择正负样本的阈值是0.5,算法的代价函数用的是交叉熵,这样会导致模型推理的分类信息和位置信息不匹配。
发明内容
针对现有技术中存在的技术问题,本申请提出了一种对目标检测方法,其特征在于,包括:获取原始图像多个层级的第一类图像特征;对不同层级的第 一类图像特征至少分别进行空洞卷积处理,相应产生不同层级的第二类图像特征,其中针对不同层级进行所述空洞卷积处理的膨胀率不同;以及基于所述第二类图像特征,或者基于所述第一类图像特征以及第二类图像特征进行检测并通过回归操作确定目标框。
特别的,其中所述多个层级的分辨率随着所述层级的升高而降低;所述空洞卷积处理的膨胀率随着所述层级的升高而降低。
特别的,其中所述多个层级包括卷积层和全局池化层,且全局池化层层级最高。
特别的,其中对不同层级的第一类图像特征至少分别进行空洞卷积处理,相应产生不同层级的第二类图像特征,包括:对所述第一类图像特征进行降维卷积处理,得到降维特征处理结果;对降维特征处理结果进行空洞卷积处理,得到空洞卷积处理结果;分别对所述空洞卷积处理结果进行第一分解卷积处理以及第二分解卷积处理,得到第一分解卷积处理结果以及第二分解卷积处理结果;对所述第一分解卷积处理结果以及所述第二分解卷积处理结果进行连接,得到连接处理结果;至少基于所述连接处理结果确定所述不同层级的第二类图像特征。
特别的,其中对不同层级的第一类图像特征至少分别进行空洞卷积处理,相应产生不同层级的第二类图像特征,还包括:至少根据所述降维特征处理结果以及所述连接处理结果进行残差处理,得到所述不同层级的第二类图像特征。
特别的,还包括,对不同层级的所述第二类图像特征进行融合;并且,基于经融合的所述第二类图像特征,或者基于所述第一类图像特征以及经融合的所述第二类图像特征进行检测并通过回归操作确定最终的目标框。
特别的，其中，对不同层级的第二类图像特征进行特征融合包括：针对每一层级，将本级以上所有层级的第二类图像特征经过上采样处理以后与本级的第二类图像特征融合。
特别的,其中所述上采样处理包括反卷积处理或双线性插值处理。
特别的,还包括对网络模型进行训练时,基于所述第二类图像特征,或者基于所述第一类图像特征以及所述第二类图像特征进行检测并通过回归操作确定用于网络模型的卷积核以及标记框,该操作包括:至少基于候选框与实际框之间交并比得分获得交并比代价函数的值,并且至少基于所述交并比代价函数的值计算候选框与实际框之间的损失函数的值。
特别的,其中对网络模型进行训练时,基于所述第二类图像特征,或者基于所述第一类图像特征以及所述第二类图像特征进行检测并通过回归操作确定用于网络模型的卷积核以及标记框,该操作还包括:计算候选框与实际框之间损失函数的值还基于候选框置信度得分而获得的置信度分类代价函数的值,以及候选框与实际框获得坐标回归的代价函数的值。
特别的,其中基于所述第一类图像特征以及所述第二类图像特征进行检测并通过回归操作确定用于网络模型的卷积核以及标记框,包括:在第一阶段,至少基于所述原始图像以及预设的候选框和卷积核初始值计算所述交并比代价函数值以及所述候选框与实际框间的损失函数值,并经过回归操作后进行输出;以及在第二阶段,至少基于所述第二类图像特征以及所述第一阶段输出的候选框和卷积核核计算所述交并比代价函数值以及所述候选框与实际框间的损失函数值,并经过回归操作进行输出;至少基于所述第一阶段和第二阶段的输出进行拟合获得总损失函数值,并将所述总损失函数值最小时对应的候选框和卷积核作为最终输出。
本申请进一步包括一种对目标检测方法装置,包括:图像特征获取模块,配置为获取原始图像多个层级的第一类图像特征;空洞卷积模块,耦合至图像特征获取模块,配置为分别对不同层级的第一类图像特征进行空洞卷积处理,相应产生不同层级的第二类图像特征,其中针对不同层级进行所述空洞卷积处理的膨胀率不同;检测信息确定模块,耦合至层级融合模块,配置为基于融合 后的第二类图像特征,或者基于所述第一类图像特征以及融合后的第二类图像特征进行检测并通过回归操作确定目标框。
特别的,进一步包括:层级融合模块,耦合至所述空洞卷积模块,配置为对不同层级的第二类图像特征进行融合;其中,所述检测信息确定模块进一步配置为基于经融合的所述第二类图像特征,或者基于所述第一类图像特征以及经融合的所述第二类图像特征进行检测并通过回归操作确定最终的目标框。
特别的,进一步包括:初始目标检测模块,耦合至图像特征获取模块和检测信息确定模块,配置为接收所述图像特征获取模块输出的第一类图像特征,并基于对第一类图像特征进行检测,将检测结果发至所述检测信息确定模块以优化所述检测信息确定模块的检测过程。
本申请进一步包括一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现前述方法的步骤。
本申请进一步包括一种计算机设备,包括存储器和处理器,所述存储器存储有计算机程序,其特征在于,所述处理器执行所述计算机程序时实现前述方法的步骤。
本申请进一步包括一种计算机可读存储介质,其上存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现前述的方法的步骤。
本申请进一步包括一种智能驾驶设备,包括:处理器,以及与所述处理器耦合的存储器;以及传感单元,配置为获取原始图像;其中所述处理器配置为执行前述的方法。
附图说明
下面,将结合附图对本申请的优选实施方式进行进一步详细的说明,其中:
图1为根据本申请一个实施例中目标检测方法的流程示意图;
图2为根据本申请一个实施例中步骤100的具体流程示意图;
图3为根据本申请一个实施例中步骤200的具体流程示意图;
图4为根据本申请一个实施例中训练过程中确定卷积核和标记框的具体流程示意图;
图5为根据本申请一个实施例中用于进行目标检测的网络模型的结构示意图;
图6为根据本申请一个实施例中网络模型的空洞卷积层的示意图;
图7为根据本申请一个实施例中网络模型的特征融合层的示意图;
图8为根据本申请一个实施例中网络模型的目标检测单元的示意图;
图9为根据本申请一个实施例中目标检测装置的结构示意图;
图10为根据本申请一个实施例中计算机设备的内部结构图。
具体实施方式
为使本申请实施例的目的、技术方案和优点更加清楚,下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有做出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
在以下的详细描述中,可以参看作为本申请一部分用来说明本申请的特定实施例的各个说明书附图。在附图中,相似的附图标记在不同图式中描述大体上类似的组件。本申请的各个特定实施例在以下进行了足够详细的描述,使得具备本领域相关知识和技术的普通技术人员能够实施本申请的技术方案。应当理解,还可以利用其它实施例或者对本申请的实施例进行结构、逻辑或者电性的改变。
图1为根据本申请一个实施例中目标检测方法的流程示意图,如图1所示,该方法包括步骤100至步骤400:
步骤100,获取原始图像多个层级的第一类图像特征。
其中,原始图像是指需要进行目标检测的图像。在一些实施例中,原始图像可以是摄像头捕捉的图像,例如车载摄像头拍摄到的车辆行驶过程中的图像。多个层级的第一类图像特征是指,通过对输入的原始图像进行特征提取和卷积得到的多个层级各不相同的图像特征。越高的层分辨率越低,而最底层的分辨率最高更有利于小目标的识别。根据一个实施例,最高层可以是全局池化层。
步骤200,分别对不同层级的第一类图像特征至少进行空洞卷积处理,相应产生不同层级的第二类图像特征。当然,在这个步骤除了空洞卷积处理,为了提升处理效果还可以进行其他处理,将在后续的实施例中介绍。
在得到不同层级的第一类图像特征后,还包括对各第一类图像特征进行空洞卷积处理。所谓空洞是指在原图上做采样,采样的频率是根据膨胀率(rate)来设置的,当rate为1时候,原图不丢失任何信息采样,此时卷积处理就是标准的卷积处理,当rate>1,比如2的时候,在原图上每隔1(rate-1=2-1=1)个像素采样,通过对原图进行采样得到的图像。
在一些实施例中,针对不同层级进行所述空洞卷积处理的膨胀率不同。其中,多个层级的第一类图像特征分辨率随着所述层级的升高而降低;所述空洞卷积处理的膨胀率随着所述层级的升高而降低。低层级的空洞卷积的膨胀率越大,越能增强低层级目标的感受野,增强目标的上下文信息,提升检测效果。高层级的感受野随着层级的增加变大,所以可以设置相对小的膨胀率。
可选择的,在步骤300,对不同层级的第二类图像特征进行融合。这个融合的顺序可以是从分辨率低的层级逐层向下融合,也就是从最高层逐层向下融合。
具体地,对于不同层级的第二类图像特征,可以先按照预设规则进行特征融合,得到融合后的第二类图像特征。在一些实施例中,预设规则可以是按照分辨率从小到大的顺序,将本层级的第二类图像特征与经上采样后的本级以上 所有层级的第二类图像特征相融合。对于最高层级例如全局池化层来说,由于其以上没有其他层级,因此最高层级的第二类图像特征可以被直接用于检测。在一些实施例中,融合后产生了融合前层级对应的融合层级,层级数不变。
通过进行图像特征融合,得到的融合特征可以认为是融合了多层级特征的第二类图像特征,即融合后特征更加丰富,从而可以提高目标检测的准确性。
在一些实施例中,步骤300不是必要的,可以直接跳转到步骤400进行检测。这种情况下,步骤400基于所述第二类图像特征,或者基于所述第一类图像特征以及第二类图像特征进行检测并通过回归操作确定目标框
步骤400,基于融合后的第二类图像特征,或者基于所述第一类图像特征以及融合后的第二类图像特征进行检测并通过回归操作确定最终的目标框。
在实际的检测过程中,实际框(ground truth bound)未知,通过利用经过训练的网络模型,在多个候选框之中确定目标框作为检测结果。这个检测可以是对第二类图像特征进行的单一阶段检测,也可以是对第一类图像特征先进行初步的检测,然后基于初步检测的结果再对第二类图像特征进行最终检测。在这种两个阶段的检测模式中,例如,初步检测结果中可以包括候选框的交并比(IOU)得分,并可以将该结果在最终检测中并作为选取候选框的正负样本时的标准。另外,在最终检测阶段,可以利用候选框的交并比得分进行排序,从而排除多余的候选框。
无论是哪种类型的检测,都会利用网络模型中提前训练好的卷积核以及标记框进行检测,并且根据实际情况对该标记框进行回归操作并确定目标框作为检测结果。
在一个实施例中,如图2所示,步骤100获取原始图像多个层级的第一类图像特征,包括步骤110至步骤120:
步骤110,根据原始图像得到原始图像特征。
步骤120,对原始图像特征进行卷积处理和/或全局池化处理,得到原始图 像的不同层级的第一类图像特征。
其中,原始图像特征是指通过将原始图像输入特征提取基础网络而输出得到的图像特征。在一些实施例中,可以是通过Xception39作为基础网络对原始图像进行特征提取,得到原始图像特征。这样在能够有效提取特征的前提下,利用Xception39分组卷积结构对图像特征提取进行加速,保证原始图像特征提取的实时性。可选地,步骤110也可以是通过VGG(Visual Geometry Group Network)、ResNet或者SENet等可以进行特征提取的网络结构实现。
在一些实施例中,根据原始图像特征得到不同层级的第一类图像特征时,可以是采用多次卷积处理和/或全局池化处理。在一些实施例中,可以是在Xception39的基础上,从最低层级至最高层级(以5个层级为例),添加4个额外卷积层(ExtrConv1,ExtrConv2,ExtrConv3,ExtrConv4)和1个全局池化层(Glob Pooling)。其中,4个额外卷积层采用步长为2的卷积运算,特征大小逐步减小;通过全局池化层可以有效获取图像的全局语义信息,从而增强目标检测的上下文信息。当然根据其他实施例,也可以不增加全局池化层。
如图3所示,步骤200分别对不同层级的第一类图像特征进行空洞卷积处理,相应产生不同层级的第二类图像特征,其中针对不同层级进行所述空洞卷积处理的膨胀率不同,可以包括:
可选择的,步骤210,对所述第一类图像特征进行降维卷积处理,得到降维特征处理结果。
在一些实施例中,预设步长的卷积处理是指利用较小的卷积核(例如1*1)对第一类图像特征进行降维,以减少计算量,得到降维特征处理结果,预设步长的取值可以是2或者其他值。
步骤220,对降维特征处理结果进行空洞卷积处理,得到空洞卷积处理结果。
在一些实施例中,在降维卷积之后,将得到的结果分别进行空洞卷积处理, 以增大感受野,在对降维特征处理结果进行空洞卷积处理时,不同层级的图像特征对应的膨胀率不同。根据一个实施例,层级越低或者说分辨率越高的层级,膨胀率越高。
步骤230,分别对空洞卷积处理结果进行第一分解卷积处理以及第二分解卷积处理,得到第一分解卷积处理结果以及第二分解卷积处理结果。
在一些实施例中,对于采用大小为a*b的卷积核进行卷积的处理过程,可以分解为采用a*1与1*a和1*b与b*1两组卷积核进行卷积处理。具体地,可以使用大小为a*1与1*a的卷积核对空洞卷积处理结果进行第一卷积处理以得到第一卷积处理结果,以及使用大小为1*b与b*1的卷积核对空洞卷积处理结果进行第二卷积处理以得到第二卷积处理结果。其中,a和b都是大于1的整数。这样的安排相比于直接进行a*a或者b*b的卷积计算量要减小数倍。安排a和b两路并行的第一和第二分解卷积处理是为了提高特征融合的能力。
步骤240,对第一分解卷积处理结果以及第二分解卷积处理结果进行连接,得到连接处理结果。
在一些实施例中,对第一分解卷积处理结果以及第二分解卷积处理结果进行连接处理,就是concat处理。
步骤250,至少基于所述连接处理结果确定所述不同层级的第二类图像特征。
根据另一实施例,可以基于降维处理的结果与连接处理结果进行残差处理,从而确定第二类图像特征,可以防止网络难以收敛。
对于步骤300,对不同层级的第二类图像特征进行融合,可以包括:
首先可以接收与当前层级相邻的上一层级的第二类图像特征,或者经融合的上一层级的第二类图像特征。其次,对接收到的第二类图像特征进行上采样,使得上采样之后的高层级特征的矩阵维度与当前层级第二类图像特征的矩阵维度大小相同。再将上采样特征与当前层级第二类图像特征进行特征融合,从 而可以使得得到的融合图像特征的信息更加丰富,提高目标检测结果的准确性。
在一个实施例中,上采样处理可以包括:包括反卷积处理(针对卷积层之间的融合)或双线性插值处理(针对全局池化层和卷积层的融合)。
在一个实施例中,反卷积处理可以认为是卷积处理的逆过程,从而实现图像的上采样处理。另外,对于全局池化处理图像,由于双线性插值处理可以实现任意图像大小变化,因此可以通过双线性插值处理进行上采样。本实施例通过对不同类型的图像采取不同的上采样处理策略,可以尽可能地保留上采样图形的图像特征,便于进行图像融合。
图4为根据本申请一个实施例中训练过程中确定卷积核和标记框的具体流程示意图。在训练的过程中实际框已知,可以利用多个候选框与实际框之间的损失函数,通过回归操作来确定用于网络模型的卷积核以及标记框。
步骤410，至少基于候选框与实际框之间的交并比（Intersection-over-Union，IOU）得分之差计算交并比代价函数的值。IOU得分代价函数L_iou如下所示：

L_iou(t) = (1/N_pos)·Σ smooth_L1( ÎOU(t) − IOU* )，对所有层级k、检测目标j及其候选框i求和

其中，t表示检测输入，ÎOU表示IOU得分，IOU*表示IOU标签，即实际框的IOU得分，其定义为：

IOU* = |O(x,anchors∈{x,y,w,h})| / |U(x,anchors∈{x,y,w,h})|

其中，smooth_L1表示L1 Smooth Loss代价函数，O(x,anchors∈{x,y,w,h})表示检测输入和anchors的交集，U(x,anchors∈{x,y,w,h})表示检测输入和anchors的并集。N_pos表示候选框的正样本数量。cx,cy为候选框中心点坐标；w,h为候选框的宽和高。j表示第j个检测目标，j为大于等于1的整数。i表示第j个检测目标对应的多个候选框中第i个，i为大于等于1的整数。k为图像特征的层数序号，k为大于等于1的整数。
在现有的应用中,将IOU作为后面介绍的候选框分类操作中区分正负样本的标准,例如可以设置一个IOU的值作为阈值,大于该阈值的候选框作为正样本,否则作为负样本。
另一个对IOU的现有的应用是将其作为非极大抑制(NMS)的筛选手段,用来去除冗余的候选框。例如当多个候选框彼此之间的IOU值很高,则说明重合度很高,因此可以仅保留其中一个候选框。
但是,在本申请以前,并没有人将IOU的分数作为候选框的排序基础,或者说并没有人将IOU的代价函数值作为计算候选框与实际的之间的损失函数值的基础。这样做的优势在于可以降低仅仅依靠分类操作中的置信度得分进行排序而造成的将次优解保留而将最优解筛除的概率。
可选的，步骤420，至少基于候选框与实际框之间的坐标之差计算坐标代价函数的值。

在一个实施例中，可以手动设置候选框和卷积核的起始值，另产生一个与目标图像完全匹配的矩形框，作为实际框（ground truth bound）。其中候选框的中心点为O1(x1,y1)，长为H1，宽为W1；实际框的中心点为O2(x2,y2)，长为H2，宽为W2，则候选框对应的回归参数可以通过以下公式计算得到：

Δx = x1 − x2，Δy = y1 − y2，ΔH = H1 − H2，ΔW = W1 − W2        (3)

通过公式(3)计算得到的回归参数包括中心点O1与中心点O2的x坐标的差值Δx以及y坐标的差值Δy，候选框与实际框长的差值ΔH以及宽的差值ΔW。

坐标代价函数L_loc定义为候选框坐标与实际框坐标之间的smooth_L1（L1 Smooth Loss）代价，在候选框正样本上累加得到。
可选的,步骤430,至少基于候选框的正、负样本的置信度得分计算置信度分类代价函数的值。
置信度代价函数L_conf根据候选框正、负样本的置信度得分以及对应的分类标签计算得到。
步骤440,至少基于交并比代价函数,可选的还基于坐标代价函数和置信度代价函数来计算候选框与实际框之间的损失函数值。
损失函数LOSS可以表示为：

LOSS = (L_conf + α·L_loc + β·L_iou) / N_pos

其中，N_pos表示候选框正样本数量。α,β表示坐标代价函数和交并比代价函数的权重系数，决定了这三个代价函数的影响因子。根据一个实施例可以设置α=β=1，当然也可以根据需要设为不同的值。
根据不同的实施例,上述方法可以应用于网络模型训练的两个阶段,在第一个阶段,可以将原始图像作为输入t,候选框的初始值和卷积核的初始值都可以手动设置,划分正负样本的IOU阈值也可以手动设置,然后通过多次的回归操作,得到使得损失函数Loss1最小时的候选框以及卷积核作为输出。
在第二个阶段,可以将经过融合的第二类图像特征作为输入t,将第一阶段的输出的候选框和卷积核作为第二阶段中候选框和卷积核的初始值。在经过多次回归后,可以将第二阶段的输出作为最终确定的候选框和卷积核。
当然根据其他实施例,也可以对第一阶段和第二阶段输出之和进行拟合,如下式,并将使得Loss值最小时的结果作为最终确定的候选框和卷积核:
LOSS=LOSS 1+λLOSS 2    (7)
其中λ是平衡第一阶段检测结果和第二阶段检测结果的系数,根据一个实施例可以设置λ=1,当然也可以将其设为其他值。
当然正如前面介绍的,也可以不进行第一阶段的检测,而直接对第二类图像特征进行检测。
本申请所公开的方法中将输入的原始图像根据分辨率的高低分为多个层,并针对不同的层进行膨胀率不同的空洞卷积操作,分辨率越高的底层的空洞卷积膨胀率越大,这样更有利于捕捉小目标图像特征。
此外,本申请公开的方法在网络模型训练阶段将IOU得分作为一个决定因素纳入了候选框与实际框之间损失函数的计算,用于避免传统的仅仅基于置信度得分排序而遗漏最优解的可能。
基于上述方法,下面提供一种目标检测方法的应用实例。本实施例主要是通过训练好的网络模型对输入图像进行目标检测。如图5所示,为本实施例中所使用的网络模型的结构示意图,该网络模型主要包括第一目标检测结构(第一次检测)、特征提取结构以及第二目标检测结构(第二次检测)。
本实施例中,原始输入图像尺寸为768x448x3,其中,768x448为分辨率大小,3为通道数(其他图像的图形尺寸含义对应相同)。通过基础网络Xception39提取的原始图像特征的尺寸为192x112x1024,本实施例对原始图像特征进行了4次卷积处理以及1次全局池化处理,通过额外卷积层1、2、3、4得到的图像特征的尺寸分别为96x56x512、48x28x512、24x14x512、12x7x512,通过全局池化层得到的图像特征的尺寸为1x1x512。通常来说,图像一般包括单通道以及3通道,而此时提取得到的特征的通道远远超过了3个,例如,原始图像特征的通道数为1024,不同分辨率的图像特征的通道数为512,严格意义上来讲,此时的特征不能再被称为图像了,因此称为图像特征。
如图5所示,经融合后的特征融合层是将本层级的空洞卷积层与经上采样的本层级以上的所有空洞卷积层融合在一起的层。例如特征融合层1是将空洞 卷积层1的特征与分别经上采样的空洞卷积层2、3、4、5融合在一起的层。而空洞卷积层5,由于其以上并没有其他的空洞卷积层,因此其可以被直接提供给第二目标检测单元。
为了增强感受野,本实施例提出空洞卷积层(Multi Dilate Convolution,MDC)模型,空洞卷积层由多个卷积层和针对每层不同膨胀率的空洞卷积层组成,例如,具体可以是由1x1的卷积层、3x3的卷积层和不同膨胀率的空洞卷积层组成。
如图6所示,为空洞卷积层的一个示意图,空洞卷积层首先利用1x1的卷积层对图像特征进行降维处理,并将处理结果接入不同膨胀率的空洞卷积层。具体地,对于低层级的图像特征,可以采用膨胀率较大的空洞卷积层,随着层级的提高,膨胀率逐渐减小。例如,对于图5中的额外卷积层1,其对应的空洞卷积层1的膨胀率可以设为7;额外卷积层2、额外卷积层3、额外卷积层4以及全局池化层,其对应的膨胀率可以依次设为5、3、2、1。
另外,为了进一步扩大感受野,在空洞卷积层之后还可以包括例如由1x5,5x1组合而成的一维的分解卷积层,分解卷积层分别从两个维度(例如横向和纵向,即图中的两路)进行卷积操作,可以大量减少计算量,分解卷积层通过连接处理层,即concat层进行连接。当然,这里的1x5,5x1组合只是一个例子,也可以采用其他维度的卷积层。
图7为根据本申请一个实施例中特征层融合操作的示意图，如图7所示，经融合后的特征融合层是将本层级的空洞卷积层与经上采样的本层级以上的所有空洞卷积层融合在一起的层，通过上采样处理以使得高层级的图像特征的矩阵维度与其低一级的层级图像特征大小相同。其中，空洞卷积层5对应的上采样方法为双线性插值处理，空洞卷积层4、3、2、1对应的上采样方法为反卷积处理。
图8为根据本申请一个实施例中目标检测单元的示意图,当为用于进行初 步检测的第一目标检测单元时,对应的输入为第一类图像特征;当为用于进行最终检测的第二目标检测单元时,对应的输入为第二类图像特征特征。目标检测单元的输出包括置信度分数分支、位置参数分支、交并比分数分支,并且至少基于这三个分支的输出可以确定作为检测结果的目标框。
需要说明的是,通过第一检测结构中的各第一目标检测单元得到的各图像特征的第一次检测或者说初步检测结果,可以被提供到对应层级的图像特征进行第二次目标检测或者说最终检测的过程中。例如初步检测结果中的IOU分数可以用来在最终检测中用来作为区分候选框正负样本的标准。
在一个实施例中,对上述网络模型的训练过程进行解释说明。参考图6,可以理解,对网络模型的训练主要是对网络模型中第一检测结构以及第二检测结构以及MDC模块的参数的训练。
在构建好网络模型的基础架构后,可以使用带有标签(label)的样本图像对网络模型进行训练,标签包括实际框(ground truth)以及对应的目标分类信息和交并比分数。首先通过网络模型中的特征提取结构得到样本图像的不同层级的图像特征,然后按照本申请前实施例中描述的处理策略,对样本图像的不同层级的图像特征分别进行目标检测,继而根据各目标检测单元的输出结果以及对应的标签数据,对目标检测单元的参数进行优化,从而得到训练好的网络模型。可以理解,训练过程中的图像处理过程与本申请前实施例中所描述的过程相同,在此不再赘述。
图9所示为根据本申请一个实施例的一种对目标检测网络模型进行训练的装置,如图9所示,该目标检测装置包括以下模块:
图像特征获取模块10,配置为获取原始图像多个层级的第一类图像特征;
空洞卷积模块20,耦合至图像特征获取模块10,配置为分别对不同层级的第一类图像特征进行空洞卷积处理,相应产生不同层级的第二类图像特征,其中针对不同层级进行所述空洞卷积处理的膨胀率不同;
可选的,层级融合模块30,耦合至空洞卷积模块20,配置为对不同层级的第二类图像特征进行融合;
检测信息确定模块40,耦合至层级融合模块30,配置为基于第二类图像特征,或者基于所述第一类图像特征以及第二类图像特征进行检测并通过回归操作确定最终的卷积核以及候选框。
在一些实施例中,所述的装置,进一步包括:
初始目标检测模块50,耦合至图像特征获取模块10和检测信息确定模块40,配置为接收所述图像特征获取模块输出的第一类图像特征,并基于对第一类图像特征进行检测,将检测结果发至所述检测信息确定模块以优化所述检测信息确定模块的检测过程。
关于目标检测装置的具体限定可以参见上文中对于目标检测方法的限定,在此不再赘述。上述目标检测装置中的各个模块可全部或部分通过软件、硬件及其组合来实现。上述各模块可以硬件形式内嵌于或独立于计算机设备中的处理器中,也可以以软件形式存储于计算机设备中的存储器中,以便于处理器调用执行以上各个模块对应的操作。
在一个实施例中,提供了一种计算机设备,包括存储器和处理器,存储器中存储有计算机程序,该处理器执行计算机程序时实现上述各实施例所描述的目标检测方法的处理步骤。
图10示出了一个实施例中计算机设备的内部结构图。该计算机设备具体可以是终端(或服务器)。如图10所示，该计算机设备包括通过系统总线连接的处理器、存储器、网络接口、输入装置和显示屏。其中，存储器包括非易失性存储介质和内存储器。该计算机设备的非易失性存储介质存储有操作系统，还可存储有计算机程序，该计算机程序被处理器执行时，可使得处理器实现目标检测方法。该内存储器中也可储存有计算机程序，该计算机程序被处理器执行时，可使得处理器执行目标检测方法。计算机设备的显示屏可以是液晶显示屏或者电子墨水显示屏，计算机设备的输入装置可以是显示屏上覆盖的触摸层，也可以是计算机设备外壳上设置的按键、轨迹球或触控板，还可以是外接的键盘、触控板或鼠标等。
本领域技术人员可以理解,图10中示出的结构,仅仅是与本申请方案相关的部分结构的框图,并不构成对本申请方案所应用于其上的计算机设备的限定,具体的计算机设备可以包括比图中所示更多或更少的部件,或者组合某些部件,或者具有不同的部件布置。
本申请进一步包括一种智能驾驶设备,该设备包括:处理器,以及与所述处理器耦合的存储器;以及传感单元,配置为获取原始图像。
其中所述处理器配置为执行前述对目标检测网络模型进行训练的方法。
在一个实施例中,提供了一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器执行时实现上述各实施例所描述的对目标检测网络模型进行训练的方法的处理步骤。
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分流程,是可以通过计算机程序来指令相关的硬件来完成,计算机程序可存储于一非易失性计算机可读取存储介质中,该计算机程序在执行时,可包括如上述各方法的实施例的流程。其中,本申请所提供的各实施例中所使用的对存储器、存储、数据库或其它介质的任何引用,均可包括非易失性和/或易失性存储器。非易失性存储器可包括只读存储器(ROM)、可编程ROM(PROM)、电可编程ROM(EPROM)、电可擦除可编程ROM(EEPROM)或闪存。易失性存储器可包括随机存取存储器(RAM)或者外部高速缓冲存储器。作为说明而非局限,RAM以多种形式可得,诸如静态RAM(SRAM)、动态RAM(DRAM)、同步DRAM(SDRAM)、双数据率SDRAM(DDRSDRAM)、增强型SDRAM(ESDRAM)、同步链路(Synchlink)DRAM(SLDRAM)、存储器总线(Rambus)直接RAM(RDRAM)、直接存储器总线动态RAM(DRDRAM)、以及存储器总线动态RAM(RDRAM)等。
在合理条件下应当理解,虽然前文各实施例涉及的流程图中的各个步骤按照箭头的指示依次显示,但是这些步骤并不是必然按照箭头指示的顺序依次执行。除非本文中有明确的说明,这些步骤的执行并没有严格的顺序限制,这些步骤可以以其它的顺序执行。而且,各流程图中的至少一部分步骤可以包括多个子步骤或者多个阶段,这些子步骤或者阶段并不必然是在同一时刻执行完成,而是可以在不同的时刻执行,这些子步骤或者阶段的执行顺序也不必然是依次进行,而是可以与其它步骤或者其它步骤的子步骤或者阶段的至少一部分轮流或者交替地执行。
上述实施例仅供说明本申请之用,而并非是对本申请的限制,有关技术领域的普通技术人员,在不脱离本申请范围的情况下,还可以做出各种变化和变型,因此,所有等同的技术方案也应属于本申请公开的范畴。

Claims (17)

  1. A target detection method, characterized by comprising:
    acquiring first-type image features of an original image at multiple levels;
    performing at least dilated convolution processing on the first-type image features of the different levels respectively, thereby producing second-type image features of corresponding different levels, wherein the dilation rate used for the dilated convolution processing differs from level to level; and
    performing detection based on the second-type image features, or based on the first-type image features and the second-type image features, and determining a target box through a regression operation.
  2. The method according to claim 1, wherein the resolution of the multiple levels decreases as the level rises, and the dilation rate of the dilated convolution processing decreases as the level rises.
  3. The method according to claim 1, wherein the multiple levels include convolution layers and a global pooling layer, and the global pooling layer is the highest level.
  4. The method according to claim 1, wherein performing at least dilated convolution processing on the first-type image features of the different levels respectively, thereby producing second-type image features of corresponding different levels, comprises:
    performing dimension-reduction convolution processing on the first-type image features to obtain a dimension-reduction processing result;
    performing dilated convolution processing on the dimension-reduction processing result to obtain a dilated convolution processing result;
    performing first factorized convolution processing and second factorized convolution processing on the dilated convolution processing result respectively, to obtain a first factorized convolution processing result and a second factorized convolution processing result;
    concatenating the first factorized convolution processing result and the second factorized convolution processing result to obtain a concatenation processing result; and
    determining the second-type image features of the different levels at least on the basis of the concatenation processing result.
  5. The method according to claim 4, wherein performing at least dilated convolution processing on the first-type image features of the different levels respectively, thereby producing second-type image features of corresponding different levels, further comprises:
    performing residual processing at least according to the dimension-reduction processing result and the concatenation processing result, to obtain the second-type image features of the different levels.
  6. The method according to claim 1, further comprising: fusing the second-type image features of the different levels; and
    performing detection based on the fused second-type image features, or based on the first-type image features and the fused second-type image features, and determining a final target box through a regression operation.
  7. The method according to claim 6, wherein fusing the second-type image features of the different levels comprises:
    for each level, upsampling the second-type image features of all levels above the current level and then fusing them with the second-type image features of the current level.
  8. The method according to claim 7, wherein the upsampling comprises deconvolution processing or bilinear interpolation processing.
  9. The method according to claim 1, further comprising, when training a network model, performing detection based on the second-type image features, or based on the first-type image features and the second-type image features, and determining convolution kernels and labelled boxes for the network model through a regression operation, the operation comprising:
    obtaining a value of an IoU cost function at least on the basis of an IoU score between a candidate box and a ground-truth box, and computing a value of a loss function between the candidate box and the ground-truth box at least on the basis of the value of the IoU cost function.
  10. The method according to claim 9, wherein, when training the network model, performing detection based on the second-type image features, or based on the first-type image features and the second-type image features, and determining convolution kernels and labelled boxes for the network model through a regression operation further comprises:
    computing the value of the loss function between the candidate box and the ground-truth box further on the basis of the value of a confidence classification cost function obtained from the confidence score of the candidate box, and the value of a coordinate regression cost function obtained from the candidate box and the ground-truth box.
  11. The method according to claim 9 or 10, wherein performing detection based on the first-type image features and the second-type image features and determining convolution kernels and labelled boxes for the network model through a regression operation comprises:
    in a first stage, computing the IoU cost function value and the loss function value between the candidate box and the ground-truth box at least on the basis of the original image and preset initial values of the candidate boxes and convolution kernels, and outputting the results after a regression operation; and
    in a second stage, computing the IoU cost function value and the loss function value between the candidate box and the ground-truth box at least on the basis of the second-type image features and the candidate boxes and convolution kernels output by the first stage, and outputting the results after a regression operation;
    fitting at least the outputs of the first stage and the second stage to obtain a total loss function value, and taking the candidate boxes and convolution kernels corresponding to the minimum of the total loss function value as the final output.
  12. A target detection apparatus, comprising:
    an image feature acquisition module, configured to acquire first-type image features of an original image at multiple levels;
    a dilated convolution module, coupled to the image feature acquisition module and configured to perform dilated convolution processing on the first-type image features of the different levels respectively, thereby producing second-type image features of corresponding different levels, wherein the dilation rate used for the dilated convolution processing differs from level to level; and
    a detection information determination module, coupled to the level fusion module and configured to perform detection based on the second-type image features, or based on the first-type image features and the second-type image features, and to determine a target box through a regression operation.
  13. The apparatus according to claim 12, further comprising:
    a level fusion module, coupled to the dilated convolution module and configured to fuse the second-type image features of the different levels;
    wherein the detection information determination module is further configured to perform detection based on the second-type image features, or based on the first-type image features and the second-type image features, and to determine a final target box through a regression operation.
  14. The apparatus according to claim 12, further comprising:
    an initial target detection module, coupled to the image feature acquisition module and the detection information determination module and configured to receive the first-type image features output by the image feature acquisition module, perform detection on the first-type image features, and send the detection result to the detection information determination module to optimize the detection process of the detection information determination module.
  15. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
  16. An intelligent driving device, comprising:
    a processor, and a memory coupled to the processor; and
    a sensing unit, configured to acquire the original image;
    wherein the processor is configured to execute the method according to any one of claims 1 to 11.
  17. A computer-readable storage medium, on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the method according to any one of claims 1 to 11 are implemented.
PCT/CN2020/138740 2019-12-23 2020-12-23 一种对目标检测方法以及相应装置 WO2021129691A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911333161.1 2019-12-23
CN201911333161.1A CN110751134B (zh) 2019-12-23 2019-12-23 目标检测方法、装置、存储介质及计算机设备

Publications (1)

Publication Number Publication Date
WO2021129691A1 true WO2021129691A1 (zh) 2021-07-01

Family

ID=69285956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/138740 WO2021129691A1 (zh) 2019-12-23 2020-12-23 一种对目标检测方法以及相应装置

Country Status (2)

Country Link
CN (1) CN110751134B (zh)
WO (1) WO2021129691A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110751134B (zh) * 2019-12-23 2020-05-12 长沙智能驾驶研究院有限公司 目标检测方法、装置、存储介质及计算机设备
CN113496150B (zh) * 2020-03-20 2023-03-21 长沙智能驾驶研究院有限公司 密集目标检测方法、装置、存储介质及计算机设备
CN111476219B (zh) * 2020-06-02 2024-09-17 苏州科技大学 智能家居环境中图像目标检测方法
CN111723723A (zh) * 2020-06-16 2020-09-29 东软睿驰汽车技术(沈阳)有限公司 一种图像检测方法及装置
CN111783797B (zh) * 2020-06-30 2023-08-18 杭州海康威视数字技术股份有限公司 目标检测方法、装置及存储介质
CN112084865A (zh) * 2020-08-06 2020-12-15 中国科学院空天信息创新研究院 目标检测方法、装置、电子设备和存储介质
CN111832668B (zh) * 2020-09-21 2021-02-26 北京同方软件有限公司 一种自适应特征及数据分布的目标检测方法
CN112446378B (zh) * 2020-11-30 2022-09-16 展讯通信(上海)有限公司 目标检测方法及装置、存储介质、终端
CN113642535B (zh) * 2021-10-13 2022-01-25 聊城高新生物技术有限公司 一种生物分支检测方法、装置及电子设备
CN114842324A (zh) * 2022-03-16 2022-08-02 南京邮电大学 一种基于学习神经网络的伪装目标检测方法及系统
CN114789440B (zh) * 2022-04-22 2024-02-20 深圳市正浩创新科技股份有限公司 基于图像识别的目标对接方法、装置、设备及其介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2003289116A1 (en) * 2002-12-16 2004-07-09 Canon Kabushiki Kaisha Pattern identification method, device thereof, and program thereof
CN107657626B (zh) * 2016-07-25 2021-06-01 浙江宇视科技有限公司 一种运动目标的检测方法和装置
CN108446694B (zh) * 2017-02-16 2020-11-27 杭州海康威视数字技术股份有限公司 一种目标检测方法及装置
CN109376572B (zh) * 2018-08-09 2022-05-03 同济大学 基于深度学习的交通视频中实时车辆检测与轨迹跟踪方法
CN108985269B (zh) * 2018-08-16 2022-06-10 东南大学 基于卷积和空洞卷积结构的融合网络驾驶环境感知模型
CN109800716A (zh) * 2019-01-22 2019-05-24 华中科技大学 一种基于特征金字塔的海面遥感图像船舶检测方法
CN110210497B (zh) * 2019-05-27 2023-07-21 华南理工大学 一种鲁棒实时焊缝特征检测方法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190156144A1 (en) * 2017-02-23 2019-05-23 Beijing Sensetime Technology Development Co., Ltd Method and apparatus for detecting object, method and apparatus for training neural network, and electronic device
CN110363211A (zh) * 2018-04-10 2019-10-22 北京四维图新科技股份有限公司 检测网络模型和目标检测方法
CN109977817A (zh) * 2019-03-14 2019-07-05 南京邮电大学 基于深度学习的动车组车底板螺栓故障检测方法
CN110378398A (zh) * 2019-06-27 2019-10-25 东南大学 一种基于多尺度特征图跳跃融合的深度学习网络改进方法
CN110751134A (zh) * 2019-12-23 2020-02-04 长沙智能驾驶研究院有限公司 目标检测方法、存储介质及计算机设备

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
PEI WEI, XU YAN-MING; ZHU YONG-YING; WANG PENG-QIAN; LU MING-YU; LI FEI: "The Target Detection Method of Aerial Photography Images with Improved SSD", JOURNAL OF SOFTWARE, GAI KAN BIANJIBU, BEIJING, CN, vol. 30, no. 3, 1 January 2019 (2019-01-01), CN, pages 738 - 758, XP055824723, ISSN: 1000-9825, DOI: 10.13328/j.cnki.jos.005695 *
TJMTAOTAO: "基于IOU的单级目标检测算法 (Non-official translation: One-Stage Object Detection Algorithm Based on IOU)", CSDN BLOG: HTTPS://BLOG.CSDN.NET/TJMTAOTAO/ARTICLE/DETAILS/103532731, 13 December 2019 (2019-12-13) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505834A (zh) * 2021-07-13 2021-10-15 阿波罗智能技术(北京)有限公司 训练检测模型、确定图像更新信息和更新高精地图的方法
CN113743333A (zh) * 2021-09-08 2021-12-03 苏州大学应用技术学院 一种草莓熟度识别方法及装置
CN113743333B (zh) * 2021-09-08 2024-03-01 苏州大学应用技术学院 一种草莓熟度识别方法及装置
CN114067222A (zh) * 2022-01-17 2022-02-18 航天宏图信息技术股份有限公司 一种城市水体遥感分类方法及装置
CN114067222B (zh) * 2022-01-17 2022-04-29 航天宏图信息技术股份有限公司 一种城市水体遥感分类方法及装置
CN114170575A (zh) * 2022-02-11 2022-03-11 青岛海尔工业智能研究院有限公司 一种火焰识别方法、装置、电子设备及存储介质
CN114913094A (zh) * 2022-06-07 2022-08-16 中国工商银行股份有限公司 图像修复方法、装置、计算机设备、存储介质和程序产品
CN115239946B (zh) * 2022-06-30 2023-04-07 锋睿领创(珠海)科技有限公司 小样本迁移学习训练、目标检测方法、装置、设备和介质
CN115239946A (zh) * 2022-06-30 2022-10-25 锋睿领创(珠海)科技有限公司 小样本迁移学习训练、目标检测方法、装置、设备和介质
CN115082801B (zh) * 2022-07-27 2022-10-25 北京道达天际科技股份有限公司 一种基于遥感图像的飞机型号识别系统和方法
CN115082801A (zh) * 2022-07-27 2022-09-20 北京道达天际科技股份有限公司 一种基于遥感图像的飞机型号识别系统和方法
CN115272779A (zh) * 2022-09-28 2022-11-01 广东顺德工业设计研究院(广东顺德创新设计研究院) 液滴识别方法、装置、计算机设备和存储介质
CN117037173A (zh) * 2023-09-22 2023-11-10 武汉纺织大学 一种二阶段的英文字符检测与识别方法及系统
CN117037173B (zh) * 2023-09-22 2024-02-27 武汉纺织大学 一种二阶段的英文字符检测与识别方法及系统
CN117152422A (zh) * 2023-10-31 2023-12-01 国网湖北省电力有限公司超高压公司 一种紫外图像无锚框目标检测方法及存储介质、电子设备
CN117152422B (zh) * 2023-10-31 2024-02-13 国网湖北省电力有限公司超高压公司 一种紫外图像无锚框目标检测方法及存储介质、电子设备
US12125266B1 (en) 2023-10-31 2024-10-22 State Grid Hubei Extra High Voltage Company Anchor-free object detection method based on ultraviolet image, storage medium and electrical equipment

Also Published As

Publication number Publication date
CN110751134A (zh) 2020-02-04
CN110751134B (zh) 2020-05-12

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 20907189
    Country of ref document: EP
    Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 20907189
    Country of ref document: EP
    Kind code of ref document: A1