WO2021164168A1 - Target detection method for image data and related apparatus - Google Patents

Target detection method for image data and related apparatus

Info

Publication number
WO2021164168A1
WO2021164168A1 · PCT/CN2020/098445 · CN2020098445W
Authority
WO
WIPO (PCT)
Prior art keywords
result
fully connected
network
regression
target detection
Prior art date
Application number
PCT/CN2020/098445
Other languages
English (en)
French (fr)
Inventor
张润泽
郭振华
吴楠
赵雅倩
Original Assignee
苏州浪潮智能科技有限公司
Priority date
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司
Publication of WO2021164168A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 - Scenes; Scene-specific elements
    • G06V 20/40 - Scenes; Scene-specific elements in video content
    • G06V 20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods

Definitions

  • This application relates to the field of image processing technology, and in particular to a target detection method, target detection device, server, and computer-readable storage medium for image data.
  • Current mainstream general-purpose target detection technology is mainly divided into single-stage target detection and two-stage target detection.
  • Single-stage target detection does not generate initial candidate frames; it directly produces the object's category probabilities and position coordinates, so the final detection result is obtained in a single pass and detection is faster. The two-stage method proceeds in two stages: the first stage manually sets an anchor frame for each pixel of the image to generate initial candidate frames, and the second stage further refines those initial candidate frames. Because the two stages go through a coarse-to-fine process, the accuracy is relatively high, but the detection speed is slower.
  • the purpose of this application is to provide a target detection method, target detection device, server, and computer-readable storage medium for image data.
  • this application provides a method for target detection of image data, including:
  • using an anchor-free target detection network to process an image to be detected to obtain an initial candidate frame;
  • using a convolutional network and a fully connected network, respectively, to perform detection on the image to be detected according to the initial candidate frame, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
  • screening the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result through a score function to obtain a classification result and a regression result.
  • using an anchor-free target detection network to process the image to be detected to obtain the initial candidate frame includes:
  • using the anchor-free target detection network to process the image to be detected to obtain the initial candidate frame, where the anchor-free target detection network is a network obtained by training with an RPN loss function.
  • using an anchor-free target detection network to process the image to be detected to obtain the initial candidate frame includes:
  • using the anchor-free target detection network to process the image to be detected to obtain the initial candidate frame, where the anchor-free target detection network is a network obtained by training with the center point RPN loss.
  • using a convolutional network and a fully connected network, respectively, to perform detection on the to-be-detected image according to the initial candidate frame, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result, includes:
  • using the convolutional network to perform detection on the to-be-detected image according to the initial candidate frame to obtain the convolution classification result and the convolution regression result, where the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
  • using the fully connected network to perform detection on the to-be-detected image according to the initial candidate frame to obtain the fully connected classification result and the fully connected regression result.
  • it also includes:
  • the fully connected loss is used to train according to the training data to obtain the fully connected network.
  • screening the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result through a score function to obtain the classification result and the regression result includes:
  • checking the score of the convolution classification result, the score of the convolution regression result, the score of the fully connected classification result, and the score of the fully connected regression result,
  • and taking the results that meet the preset scoring standard as the classification result and the regression result.
  • the present application also provides a target detection device for image data, including:
  • an anchor-free processing module, used to process the image to be detected with the anchor-free target detection network to obtain the initial candidate frame;
  • a classification regression module, used to perform detection on the image to be detected according to the initial candidate frame with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
  • a result screening module, used to screen the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result according to a preset score function to obtain the classification result and the regression result.
  • the anchor-free processing module includes:
  • a detection unit, configured to use the anchor-free target detection network to process the to-be-detected image to obtain the initial candidate frame;
  • a training unit, configured to train the anchor-free target detection network using the RPN loss function.
  • the classification regression module includes:
  • a convolution processing unit, configured to use the convolutional network to perform detection on the image to be detected according to the initial candidate frame to obtain the convolution classification result and the convolution regression result, where the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
  • a fully connected processing unit, configured to use the fully connected network to perform detection on the to-be-detected image according to the initial candidate frame to obtain the fully connected classification result and the fully connected regression result.
  • This application also provides a server, including:
  • a memory, used to store a computer program;
  • a processor, used to implement the steps of the target detection method described above when executing the computer program.
  • the present application also provides a computer-readable storage medium having a computer program stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the target detection method as described above are realized.
  • An image data target detection method provided by the present application includes: using an anchor-free target detection network to process the image to be detected to obtain an initial candidate frame; using a convolutional network and a fully connected network, respectively, to perform detection on the image to be detected according to the initial candidate frame, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result; and screening these four results through a score function to obtain a classification result and a regression result.
  • the image to be detected is processed through the anchor-free target detection network to obtain the initial candidate frame, instead of identifying the initial candidate frame manually or with other detection algorithms as in the ordinary two-stage target detection process; the convolutional network and the fully connected network then perform detection on the image to be detected according to the initial candidate frame, the result corresponding to each network is obtained, and the optimal detection result is selected from all the results to yield the classification result and the regression result. That is, the fusion of the anchor-free target detection method and the two-stage detection method improves the efficiency of two-stage target detection while ensuring the accuracy and precision of the target detection algorithm.
  • This application also provides a target detection device, a server, and a computer-readable storage medium for image data, which have the above beneficial effects, and will not be repeated here.
  • FIG. 1 is a flowchart of a method for target detection of image data provided by an embodiment of the application
  • FIG. 2 is a schematic structural diagram of an image data target detection apparatus provided by an embodiment of the application.
  • the core of this application is to provide a target detection method, target detection device, server, and computer-readable storage medium for image data.
  • single-stage target detection does not generate initial candidate frames; it directly produces the object's category probabilities and position coordinates, so the final detection result is obtained in a single pass and detection is faster. The two-stage method proceeds in two stages: the first stage manually sets an anchor frame for each pixel of the image to generate initial candidate frames, and the second stage further refines those initial candidate frames. Because the two stages go through a coarse-to-fine process, the accuracy is relatively high, but the detection speed is slower.
  • this application provides a target detection method for image data.
  • the image to be detected is first processed through an anchor-free target detection network to obtain the initial candidate frame, instead of identifying it manually or with other detection algorithms as in the ordinary two-stage target detection process. The convolutional network and the fully connected network then perform detection on the image to be detected according to the initial candidate frame, the result corresponding to each network is obtained, and the optimal detection result is selected from all the results, yielding the classification result and the regression result. That is, the anchor-free target detection method and the two-stage detection method are merged to improve the efficiency of two-stage target detection while ensuring the accuracy and precision of the target detection algorithm.
  • FIG. 1 is a flowchart of a method for object detection of image data provided by an embodiment of the application.
  • the method may include:
  • This step aims to use the anchorless frame target detection network to process the image to be detected to obtain the initial candidate frame. That is, the anchorless frame target detection network is used to roughly identify the target.
  • the detection process in this step does not require high-precision target detection, but only needs to ensure the efficiency and speed of the detection process.
  • this embodiment implements step S101 by performing a convolution operation on each pixel of the feature map, so that each pixel can be judged as foreground or background and the coordinates of the corresponding target detection frame are regressed; these frames are the initial candidate frames of this step. Further, compared with existing single-stage detection methods, step S101 only needs to distinguish foreground from background and does not need to perform a full classification operation, which effectively improves the efficiency of obtaining the initial candidate frames. By contrast, other two-stage target detection methods preset anchor frames with different aspect ratios and different areas for each pixel.
  • If the number of candidate frames for each pixel of the feature map is K, the total number of candidate frames for an image is height × width × K, and these candidate frames must then be filtered through sampling strategies.
  • Such a large number of anchor frames undoubtedly increases the time complexity.
  • In contrast, step S101 quickly distinguishes the foreground from the background to obtain the initial candidate frames, which improves efficiency and reduces the time cost.
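The height × width × K count above can be made concrete with a quick calculation (the feature-map size and K below are illustrative values, not taken from the patent):

```python
# Illustrative comparison of candidate-frame counts (values are hypothetical).
# Anchor-based two-stage methods preset K anchor frames per feature-map pixel,
# so the total is height * width * K; an anchor-free network regresses one
# frame per pixel instead.

def anchor_based_candidates(height, width, k):
    """Total candidate frames when K anchors are preset per pixel."""
    return height * width * k

def anchor_free_candidates(height, width):
    """Total candidate frames when each pixel regresses a single frame."""
    return height * width

h, w, k = 50, 50, 9          # e.g. a 50x50 feature map with 9 anchors per pixel
print(anchor_based_candidates(h, w, k))   # 22500 frames to filter
print(anchor_free_candidates(h, w))       # 2500 frames
```

The K-fold reduction in candidates is what saves the sampling and filtering work described above.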
  • a central point confidence branch can be added to the anchorless frame target detection network.
  • the anchorless frame target detection network in this step is a network that has been trained in advance, and different loss functions can be used for training in order to improve the training accuracy of the network.
  • the RPN loss function can be used for training. Accordingly, this step can include:
  • using the anchor-free target detection network to process the image to be detected to obtain the initial candidate frame, where the anchor-free target detection network is a network trained using the RPN loss function.
  • RPN refers to the Region Proposal Network, a region generation network that can improve the precision and accuracy of the initial candidate frame.
  • the center point of the detection can be determined in advance to improve the efficiency of the detection.
  • this step can include:
  • Step 1: introduce the center point loss into the RPN loss to obtain the center point RPN loss;
  • Step 2: use the anchor-free target detection network to process the image to be detected to obtain the initial candidate frame, where the anchor-free target detection network is a network obtained by training with the center point RPN loss.
  • the central point is introduced into the RPN loss, mainly to determine the approximate area for the RPN loss network to process, so as to improve the efficiency of the detection process.
  • this step aims to perform final target detection processing on the image to be detected according to the initial candidate frame through the convolutional network and the fully connected network, and obtain the result corresponding to the convolutional network and the result corresponding to the fully connected network. That is, each network will get the classification result and the regression result after the detection processing.
  • the accuracy and precision of the classification results and regression results differ from network to network.
  • Therefore, from the regression results and classification results of each network, the best are selected as the final result.
  • in related two-stage solutions, the classification and regression tasks of the second stage are both implemented in a fully connected manner.
  • A single fully connected head is likely to cause large deviations in the classification results or regression results, reducing accuracy and precision. Therefore, this embodiment adopts a hybrid of a convolutional network and a fully connected network to improve accuracy and precision.
  • the convolutional network and the fully connected network can also each be assigned a different task;
  • for example, the fully connected network performs the classification task,
  • and the convolutional network performs the regression task.
  • That is, each network is given the task suited to its characteristics, namely the fully connected network performs classification and the convolutional network performs regression, in order to improve the final network execution effect.
  • the specific structures of the convolutional network and the fully connected network selected in this step are not limited; any network structure provided by the prior art may be used.
  • this step can include:
  • using the convolutional network to perform detection on the image to be detected according to the initial candidate frame to obtain the convolution classification result and the convolution regression result, where the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
  • using the fully connected network to perform detection on the image to be detected according to the initial candidate frame to obtain the fully connected classification result and the fully connected regression result.
  • This mainly further specifies the network structure of the convolutional branch: the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules. Both the residual module and the non-local convolution module may use implementations provided in the prior art, and they are not specifically limited here.
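The patent does not spell out the exact interleaving, but one plausible reading of "cross-connecting 3 residual modules and 2 non-local convolution modules" is a strict alternation. A minimal sketch under that assumption, with module names as placeholder labels rather than real layer classes:

```python
# Hypothetical layout of the convolution branch: 3 residual modules and
# 2 non-local modules cross-connected, read here as strict alternation.
# "residual_*" and "nonlocal_*" are placeholder labels, not real layers.

def cross_connect(n_residual, n_nonlocal):
    """Interleave residual and non-local modules: res, nl, res, nl, res, ..."""
    order = []
    for i in range(n_residual):
        order.append(f"residual_{i + 1}")
        if i < n_nonlocal:
            order.append(f"nonlocal_{i + 1}")
    return order

print(cross_connect(3, 2))
# ['residual_1', 'nonlocal_1', 'residual_2', 'nonlocal_2', 'residual_3']
```

Each non-local module thus sits between two residual modules, letting globally aggregated features feed the next local stage.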
  • S103: screen the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result through the score function to obtain the classification result and the regression result.
  • this step aims to screen all the classification results and regression results obtained, and obtain the final classification results and regression results.
  • the process of screening may be to calculate a prediction score for each result, and use the classification result and regression result with the highest score as the final output classification result and regression result of this embodiment.
  • this step can include:
  • the score of the convolution classification result, the score of the convolution regression result, the score of the fully connected classification result, and the score of the fully connected regression result are checked,
  • and the results that meet the preset scoring standard are taken as the classification result and the regression result.
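A minimal sketch of this screening step, assuming each branch's result carries a scalar prediction score and the preset standard is simply "highest score wins" (the dictionary keys and scores are illustrative):

```python
# Hypothetical score-based screening: each branch produces a (result, score)
# pair, and the result whose score best meets the preset standard is kept.

def screen_results(scored_results):
    """Return the name, result, and score of the highest-scoring candidate."""
    best_name = max(scored_results, key=lambda name: scored_results[name][1])
    result, score = scored_results[best_name]
    return best_name, result, score

classification_candidates = {
    "fc_classification":   ("cat", 0.92),   # illustrative scores
    "conv_classification": ("cat", 0.85),
}
name, result, score = screen_results(classification_candidates)
print(name, result, score)   # fc_classification cat 0.92
```

The same selection would be run separately over the regression candidates to produce the final regression result.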
  • this embodiment may also include:
  • the fully connected loss is used for training based on the training data to obtain a fully connected network.
  • this embodiment mainly illustrates that the convolutional loss and the fully connected loss are used to obtain the convolutional network and the fully connected network, respectively.
  • the specific training process can use any network training method provided by the prior art, and will not be repeated here.
  • this embodiment uses the anchor-free target detection network to first process the image to be detected to obtain the initial candidate frames, instead of identifying them manually or with other detection algorithms as in the ordinary two-stage target detection process; it then uses the convolutional network and the fully connected network to perform detection on the image to be detected according to the initial candidate frames, obtains the result corresponding to each network, and screens all the results to select the best detection result, yielding the classification result and the regression result. That is, the fusion of the anchor-free target detection method and the two-stage detection method improves the efficiency of two-stage target detection while ensuring the accuracy and precision of the target detection algorithm.
  • the method of this embodiment mainly uses a target detection algorithm to perform a recognition operation on image data, and the overall implementation is based on a deep neural network. Therefore, this embodiment first introduces the network structure applied in this embodiment.
  • the target detection network structure adopted in this embodiment includes an anchor-free network and a Double-Head network framework connected to the anchor-free network.
  • the Double-Head network framework includes a convolutional network and a fully connected network.
  • the anchor-free network adopts a single-stage network framework: features are extracted by the backbone network, the feature pyramid then provides a multi-scale feature description, and finally the target frame classification and regression tasks are performed.
  • Because positive and negative samples are imbalanced, the classification loss usually adopts Focal Loss. Because an anchor-free design is adopted, compared with the two-stage manual anchor design the recall rate of the target frame is lower, but processing efficiency and speed are higher. Finally, the target frames from the single-stage training are used as the candidate frames of the second stage for further training.
  • this patent uses the Double-Head network framework for the implementation.
  • In the Double-Head method, both the convolution branch and the fully connected branch can produce classification and regression results, but classification mainly uses the fully connected branch's results, and regression mainly uses the convolution branch's results.
  • this patent adopts a cross-connection method of the residual module and the non-local convolution module.
  • the residual module draws on the ResNet residual block method
  • the non-local convolution module draws on the NL Network (non-local convolutional network) method;
  • the non-local module breaks through the local limitation of ordinary convolution by drawing on the idea of traditional filtering, so that positions in the feature map can be influenced by more distant positions.
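A minimal numpy sketch of the non-local idea: every position's output is a weighted sum over all positions, so distant features can influence each other. This is a simplified embedded-Gaussian form that omits the learned 1x1 projections of the actual NL Network, so it illustrates the mechanism rather than the patent's exact module:

```python
import numpy as np

def nonlocal_block(x):
    """Simplified non-local operation on a (positions, channels) feature map.

    Each position attends to every other position via a softmax over pairwise
    dot-product similarities, then a residual connection is added. The learned
    projection matrices of the real NL Network are omitted here.
    """
    sim = x @ x.T                                   # pairwise similarities
    sim -= sim.max(axis=1, keepdims=True)           # numerical stability
    weights = np.exp(sim)
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over positions
    return x + weights @ x                          # residual connection

x = np.random.default_rng(0).standard_normal((16, 8))  # 16 positions, 8 channels
y = nonlocal_block(x)
print(y.shape)   # (16, 8)
```

The output keeps the input shape, so the block can be dropped between residual modules as the cross-connected structure above requires.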
  • the loss function adopted by each network is also improved in this embodiment.
  • the loss function of this implementation is divided into three parts: the Double-Head framework provides the convolution loss and the fully connected loss, and the single-stage network provides the RPN (Region Proposal Network) loss.
  • the RPN in this embodiment is the candidate frame generation network in the two-stage target detection network of this embodiment.
  • L is the overall network loss
  • L fc is the fully connected network loss
  • L conv is the convolutional network loss
  • L rpn is the RPN loss
  • C_loss is the center loss.
  • L cls is the classification loss of RPN
  • Focal Loss: a loss function for unbalanced sample distributions
  • L reg is the regression loss of RPN
  • IoU Loss (Intersection over Union Loss): a regression loss function for target frame coordinates
  • N pos represents the number of positive samples;
  • λ represents the balance factor of the regression loss, which can be set to 1 in this embodiment;
  • 1{c* x,y > 0} is an indicator function, which means that only positive samples contribute to the regression loss;
  • p x,y is the classification score, and c* x,y is the sample label;
  • t x,y are the coordinates of the regressed detection frame, and t* x,y is the ground truth of the sample coordinates.
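The patent's formula images did not survive extraction. Reconstructed from the symbol glossary above (and the standard anchor-free RPN form it matches), the overall loss and the RPN loss plausibly read:

```latex
% Reconstruction from the surrounding description, not copied from the patent.
L = L_{fc} + L_{conv} + L_{rpn}
L_{rpn} = \frac{1}{N_{pos}} \sum_{x,y} L_{cls}\bigl(p_{x,y}, c^{*}_{x,y}\bigr)
        + \frac{\lambda}{N_{pos}} \sum_{x,y} \mathbb{1}_{\{c^{*}_{x,y} > 0\}}
          L_{reg}\bigl(t_{x,y}, t^{*}_{x,y}\bigr) + C\_loss
```

Here the center loss C_loss is attached to the RPN term, consistent with the earlier statement that the center point loss is introduced into the RPN loss.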
  • L cls is Focal Loss (FL for short)
  • the specific function form is as follows, where p t represents the probability that the detection frame is foreground, and γ and α t are parameters used to control the sample imbalance.
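The formula itself was lost in extraction; the standard Focal Loss form, matching the parameters named above, is:

```latex
FL(p_t) = -\alpha_t \, (1 - p_t)^{\gamma} \, \log(p_t)
```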
  • L reg is IOU Loss (IL for short), and the specific function form is as follows.
  • I denotes the intersection area of the predicted frame and the ground-truth frame;
  • U denotes their union area.
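The IoU Loss formula image is missing here as well; the commonly used form (as in UnitBox), with I and U as defined above, is:

```latex
IL = -\ln \frac{I}{U}
```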
  • this embodiment introduces a center point loss.
  • l* represents the distance from the center point to the left side of the detection frame;
  • r* represents the distance from the center point to the right side of the detection frame;
  • t* represents the distance from the center point to the top of the detection frame;
  • b* represents the distance from the center point to the bottom of the detection frame.
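The center point formula did not survive extraction; the standard center-ness form used by anchor-free detectors such as FCOS, built from the four distances defined above, is:

```latex
centerness^{*} = \sqrt{ \frac{\min(l^{*}, r^{*})}{\max(l^{*}, r^{*})} \times
                        \frac{\min(t^{*}, b^{*})}{\max(t^{*}, b^{*})} }
```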
  • the loss function used in this embodiment differs from the general classification loss and regression loss:
  • the losses are instead divided according to the convolution branch and the fully connected branch.
  • the convolution loss and the fully connected loss are as follows.
  • ⁇ conv and ⁇ fc are used to control the proportion of the classification loss and the regression loss in the convolution loss and the fully connected loss, respectively.
  • ⁇ conv represents the proportion of regression loss in the convolution loss
  • 1- ⁇ conv represents the proportion of classification loss in the convolution loss
  • ⁇ fc represents the proportion of classification loss in the fully connected loss
  • 1- ⁇ fc represents the regression loss in the fully connected loss.
  • IoU Loss is likewise used as the regression loss function for the target frame coordinates.
  • COCO: a standard open dataset for target detection
  • SGD: Stochastic Gradient Descent
  • the final output of the network is the probability that the candidate frame is a certain category, which is called the prediction score s in this embodiment. Since both the fully connected branch and the convolution branch will produce prediction scores, the final prediction score is shown in the following formula:
  • s fc is the prediction score of the fully connected network
  • s conv is the prediction score of the convolutional network
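The fusion formula image is not preserved; the Double-Head paper's complementary fusion, which this passage appears to describe, combines the two scores as s = s_fc + s_conv - s_fc · s_conv. A small sketch under that assumption:

```python
# Hypothetical score fusion, assuming the complementary form from the
# Double-Head paper: s = s_fc + s_conv - s_fc * s_conv.

def fuse_scores(s_fc, s_conv):
    """Combine the fully connected and convolution branch prediction scores."""
    return s_fc + s_conv - s_fc * s_conv

# The fused score is at least as large as either branch score alone:
print(round(fuse_scores(0.9, 0.5), 2))   # 0.95
```

With both scores in [0, 1], the fused score also stays in [0, 1] and rewards agreement between the two branches.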
  • this embodiment uses the anchor-free target detection network to process the image to be detected first to obtain the initial candidate frames, instead of identifying them manually or with other detection algorithms as in the ordinary two-stage target detection process; it then uses the convolutional network and the fully connected network to perform detection on the image to be detected according to the initial candidate frames, obtains the result corresponding to each network, and screens all the results to select the best detection result, yielding the classification result and the regression result. That is, the fusion of the anchor-free target detection method and the two-stage detection method improves the efficiency of two-stage target detection while ensuring the accuracy and precision of the target detection algorithm.
  • the target detection device for image data described below and the target detection method for image data described above may correspond to each other and refer to each other.
  • FIG. 2 is a schematic structural diagram of an image data target detection apparatus provided by an embodiment of the application.
  • the device may include:
  • the anchor-free processing module 100 is configured to use an anchor-free target detection network to process the image to be detected to obtain an initial candidate frame;
  • the classification regression module 200 is configured to use a convolutional network and a fully connected network, respectively, to perform detection on the image to be detected according to the initial candidate frame, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
  • the result screening module 300 is used to screen the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result according to the preset score function, to obtain the classification result and the regression result.
  • the anchor-free processing module 100 may include:
  • a detection unit, used to process the image to be detected with the anchor-free target detection network to obtain the initial candidate frame;
  • a training unit, used to train the anchor-free target detection network with the RPN loss function.
  • the classification regression module 200 may include:
  • a convolution processing unit, used to perform detection on the image to be detected according to the initial candidate frame with the convolutional network, to obtain the convolution classification result and the convolution regression result, where the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
  • a fully connected processing unit, configured to use the fully connected network to perform detection on the image to be detected according to the initial candidate frame, to obtain the fully connected classification result and the fully connected regression result.
  • the embodiment of the present application also provides a server, including:
  • a memory, used to store a computer program;
  • a processor, used to implement the steps of the target detection method described above when executing the computer program.
  • the embodiments of the present application also provide a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the target detection method described above are implemented.
  • the computer-readable storage medium may include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
  • the steps of the method or algorithm described in the embodiments disclosed in this document can be directly implemented by hardware, a software module executed by a processor, or a combination of the two.
  • the software module can reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

A target detection method for image data, comprising: processing an image to be detected with an anchor-free target detection network to obtain initial candidate boxes; performing detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result; and filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with a score function to obtain a classification result and a regression result. By combining an anchor-free target detection algorithm with a two-stage target detection algorithm, the precision and accuracy of target detection are improved while target detection efficiency is maintained. The present application also discloses a target detection apparatus for image data, a server, and a computer-readable storage medium, which have the above beneficial effects.

Description

Target detection method for image data and related apparatus
This application claims priority to the Chinese patent application filed with the China National Intellectual Property Administration on February 20, 2020, with application number 202010106107.X and invention title "Target detection method for image data and related apparatus", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of image processing technology, and in particular to a target detection method for image data, a target detection apparatus, a server, and a computer-readable storage medium.
Background
With the continuous development of information technology, computers can handle increasingly complex tasks. Among them is computer vision technology, that is, processing images with a computer to recognize their content. Target detection is the first technique considered in computer vision; it occupies an important position in the field, is a foundational area of computer vision, and also offers insight for other vision tasks such as segmentation and tracking.
Current mainstream general-purpose target detection techniques fall into single-stage and two-stage approaches. Single-stage target detection does not generate initial candidate boxes; it directly produces class probabilities and position coordinates, obtaining the final detection result in a single pass, and therefore detects faster. Two-stage methods consist of two stages: in the first stage, anchor boxes are manually set for each pixel of the image to generate initial candidate boxes; in the second stage, the initial candidate boxes are further refined. Because the two stages form a coarse-to-fine process, accuracy is relatively high, but detection speed is slower.
Therefore, how to speed up the two-stage target detection process while maintaining detection accuracy is a key concern for those skilled in the art.
Summary
The purpose of this application is to provide a target detection method for image data, a target detection apparatus, a server, and a computer-readable storage medium that combine an anchor-free target detection algorithm with a two-stage target detection algorithm to improve the precision and accuracy of target detection while maintaining detection efficiency.
To solve the above technical problem, this application provides a target detection method for image data, comprising:
processing an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
performing detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with a score function to obtain a classification result and a regression result.
Optionally, processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes comprises:
processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with an RPN loss function.
Optionally, processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes comprises:
introducing a center-point loss into the RPN loss to obtain a center-point RPN loss;
processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with the center-point RPN loss.
Optionally, performing detection processing on the image to be detected according to the initial candidate boxes with the convolutional network and the fully connected network, respectively, to obtain the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result comprises:
performing detection processing on the image to be detected according to the initial candidate boxes with the convolutional network to obtain the convolution classification result and the convolution regression result, wherein the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
performing detection processing on the image to be detected according to the initial candidate boxes with the fully connected network to obtain the fully connected classification result and the fully connected regression result.
Optionally, the method further comprises:
training on training data with a convolution loss to obtain the convolutional network;
training on training data with a fully connected loss to obtain the fully connected network.
Optionally, filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with the score function to obtain the classification result and the regression result comprises:
computing the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with the score function to obtain a score for each of these results;
checking the score of the convolution classification result, the score of the convolution regression result, the score of the fully connected classification result, and the score of the fully connected regression result against a preset score criterion, and taking the results that meet the preset score criterion as the classification result and the regression result.
This application further provides a target detection apparatus for image data, comprising:
an anchor-free processing module, configured to process an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
a classification-regression module, configured to perform detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
a result filtering module, configured to filter the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result according to a preset score function to obtain a classification result and a regression result.
Optionally, the anchor-free processing module comprises:
a training unit, configured to process the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes;
an anchor-free detection unit, configured to train with an RPN loss function to obtain the anchor-free target detection network.
Optionally, the classification-regression module comprises:
a convolution processing unit, configured to perform detection processing on the image to be detected according to the initial candidate boxes with the convolutional network to obtain the convolution classification result and the convolution regression result, wherein the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
a fully connected processing unit, configured to perform detection processing on the image to be detected according to the initial candidate boxes with the fully connected network to obtain the fully connected classification result and the fully connected regression result.
This application further provides a server, comprising:
a memory, configured to store a computer program;
a processor, configured to implement the steps of the target detection method described above when executing the computer program.
This application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the target detection method described above.
The target detection method for image data provided by this application comprises: processing an image to be detected with an anchor-free target detection network to obtain initial candidate boxes; performing detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result; and filtering these results with a score function to obtain a classification result and a regression result.
The image to be detected is first processed by the anchor-free target detection network to obtain the initial candidate boxes, instead of obtaining them manually or with other detection algorithms as in the ordinary two-stage target detection process. The convolutional network and the fully connected network then each perform detection processing on the image according to the initial candidate boxes, producing results for each network, from which the optimal detection results are selected as the classification result and the regression result. In other words, the anchor-free target detection method is fused with the two-stage detection method, which improves the efficiency of two-stage target detection while ensuring the accuracy and precision of the detection algorithm.
This application further provides a target detection apparatus for image data, a server, and a computer-readable storage medium, which have the above beneficial effects and are not described again here.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of this application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are merely embodiments of this application; for a person of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flowchart of a target detection method for image data provided by an embodiment of this application;
FIG. 2 is a schematic structural diagram of a target detection apparatus for image data provided by an embodiment of this application.
Detailed Description
The core of this application is to provide a target detection method for image data, a target detection apparatus, a server, and a computer-readable storage medium that combine an anchor-free target detection algorithm with a two-stage target detection algorithm to improve the precision and accuracy of target detection while maintaining detection efficiency.
To make the purpose, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments of this application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of this application. Based on the embodiments of this application, all other embodiments obtained by a person of ordinary skill in the art without creative effort fall within the protection scope of this application.
In the prior art, general-purpose target detection techniques fall into single-stage and two-stage approaches. Single-stage target detection does not generate initial candidate boxes; it directly produces class probabilities and position coordinates and obtains the final detection result in a single pass, so it detects faster. Two-stage methods consist of two stages: the first stage manually sets anchor boxes for each pixel of the image to generate initial candidate boxes, and the second stage further refines those boxes. Because the two stages form a coarse-to-fine process, accuracy is relatively high but detection is slower. Although existing two-stage detection uses manually defined anchor boxes of different scales and aspect ratios to suppress the imbalance between positive and negative samples to some extent, a degree of sample imbalance remains as the anchor count and training intensity grow. At the same time, the combination of manual design and machine processing increases the complexity of detection and the time cost of the overall process.
Therefore, this application provides a target detection method for image data. The image to be detected is first processed by an anchor-free target detection network to obtain initial candidate boxes, instead of obtaining them manually or with other detection algorithms as in the ordinary two-stage process. A convolutional network and a fully connected network then each perform detection processing on the image according to the initial candidate boxes, producing results for each network, from which the optimal detection results are selected as the classification result and the regression result. In other words, the anchor-free target detection method is fused with the two-stage detection method, which improves the efficiency of two-stage target detection while ensuring the accuracy and precision of the detection algorithm.
Please refer to FIG. 1, which is a flowchart of a target detection method for image data provided by an embodiment of this application.
In this embodiment, the method may include:
S101: processing an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
This step aims to process the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes. That is, the anchor-free network first performs a rough identification of the target; the detection in this step does not require high precision and only needs to guarantee the efficiency and speed of the process.
Compared with existing two-stage target detection methods, step S101 of this embodiment performs a convolution operation on every pixel of the feature map, so that each pixel can ultimately be judged as foreground or background and the corresponding detection-box coordinates, i.e., the initial candidate boxes of this step, can be regressed. Further, compared with existing single-stage detection, this step only needs to distinguish foreground from background and requires no classification operation, which effectively improves the efficiency of obtaining the initial candidate boxes. Meanwhile, other two-stage methods preset anchor boxes of different aspect ratios and areas for each pixel: if each feature-map pixel has K candidate boxes, the total number of candidate boxes for one image is height × width × K, and these candidates are then filtered by a sampling strategy; such a huge number of anchors inevitably increases time complexity. In this embodiment, step S101 quickly distinguishes foreground from background to obtain the initial candidate boxes, improving efficiency and reducing time cost.
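The candidate-count comparison above (height × width × K anchors versus one prediction per pixel) can be illustrated with a small sketch. The feature-map size and K below are hypothetical example values, not figures from this application:

```python
# Illustrative comparison of proposal counts. An anchor-based RPN places
# K preset anchors on every feature-map pixel, so an H x W feature map
# yields H * W * K candidates; the anchor-free first stage regresses one
# box per pixel, i.e. H * W candidates.
def candidate_counts(height, width, k):
    anchor_based = height * width * k
    anchor_free = height * width
    return anchor_based, anchor_free

based, free = candidate_counts(100, 152, 9)  # hypothetical sizes, K = 9
print(based, free)  # 136800 15200 -> anchor-free needs K times fewer boxes
```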
In addition, to suppress the imbalance between positive and negative samples, a center-point confidence branch may be added to the anchor-free target detection network. Understandably, the anchor-free target detection network in this step is trained in advance, and different loss functions may be used to improve the training precision of the network.
Specifically, this embodiment may train with an RPN loss function; accordingly, this step may include:
processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with the RPN loss function.
RPN (Region Proposal Network) refers to a region proposal network, which can improve the precision and accuracy of the initial candidate boxes.
Further, to improve the efficiency and speed of the detection processing in this step, the detection center point may be determined in advance.
Optionally, this step may include:
Step 1: introducing a center-point loss into the RPN loss to obtain a center-point RPN loss;
Step 2: processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with the center-point RPN loss.
In this optional solution, introducing the center point into the RPN loss mainly gives the RPN loss network a rough region to process, so as to improve the efficiency of the detection process.
S102: performing detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
On the basis of S101, this step aims to perform the final target detection on the image to be detected with the convolutional network and the fully connected network according to the initial candidate boxes, obtaining the results of each network; that is, each network produces a classification result and a regression result after detection. However, the precision and accuracy of each network's results differ; to improve precision and accuracy, this embodiment selects the optimal regression and classification results across the networks as the final results.
In the prior art, both the classification and regression tasks of the second stage are implemented with fully connected layers. However, a single fully connected approach easily causes large deviations in the classification or regression results, reducing accuracy and precision. This embodiment therefore operates with a hybrid of a convolutional network and a fully connected network to improve accuracy and precision.
Further, different tasks may be assigned to the convolutional network and the fully connected network, for example, the convolutional network performs the classification task and the fully connected network performs the regression task. On this basis, tasks can be assigned according to the characteristics of the two networks, i.e., the fully connected network performs the classification task and the convolutional network performs the regression task, so as to improve the final performance of the network.
In general, any network structure for a convolutional network or a fully connected network provided by the prior art may be chosen in this step, without specific limitation.
However, to improve the recognition precision and accuracy of the convolutional network, this step may include:
performing detection processing on the image to be detected according to the initial candidate boxes with the convolutional network to obtain the convolution classification result and the convolution regression result, wherein the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
performing detection processing on the image to be detected according to the initial candidate boxes with the fully connected network to obtain the fully connected classification result and the fully connected regression result.
Clearly, this optional solution mainly further describes the structure of the convolutional network: it is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules, where the residual modules and non-local convolution modules may be any provided by the prior art, without specific limitation here.
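One plausible reading of "cross-connecting" 3 residual modules and 2 non-local modules is a simple alternation of the two kinds of block. This is an assumption for illustration only; the text does not spell out the exact wiring:

```python
# Hypothetical interpretation of the "cross-connected" head: alternate
# residual and non-local modules, starting and ending with a residual
# module (3 residual + 2 non-local -> 5 modules in sequence).
def interleave(residuals, non_locals):
    order = []
    for i, res in enumerate(residuals):
        order.append(res)
        if i < len(non_locals):  # a non-local module after each residual, except the last
            order.append(non_locals[i])
    return order

print(interleave(["res1", "res2", "res3"], ["nl1", "nl2"]))
# ['res1', 'nl1', 'res2', 'nl2', 'res3']
```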
S103: filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with a score function to obtain the classification result and the regression result.
On the basis of S102, this step aims to filter all obtained classification and regression results to produce the final classification result and regression result. The filtering may compute a prediction score for each result and take the highest-scoring classification result and regression result as the final output of this embodiment.
Therefore, this step may include:
computing the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with the score function to obtain a score for each of these results;
checking these scores against a preset score criterion, and taking the results that meet the preset score criterion as the classification result and the regression result.
In addition, this embodiment may further include:
training on training data with a convolution loss to obtain the convolutional network;
training on training data with a fully connected loss to obtain the fully connected network.
This mainly explains that the convolutional network and the fully connected network are obtained with the convolution loss and the fully connected loss, respectively; the specific training process may use any network training approach provided by the prior art and is not detailed here.
In summary, this embodiment first processes the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, instead of obtaining them manually or with other detection algorithms as in the two-stage target detection process. The convolutional network and the fully connected network then each perform detection processing according to the initial candidate boxes, and the optimal detection results are selected from all the results as the classification result and the regression result. That is, the anchor-free target detection method is fused with the two-stage detection method, improving the efficiency of two-stage target detection while ensuring the accuracy and precision of the detection algorithm.
The target detection method for image data provided by this application is further described below through another specific embodiment.
The method of this embodiment mainly uses a target detection algorithm to recognize image data and is implemented on a deep neural network as a whole; therefore, the network structure used in this embodiment is introduced first.
The target detection network structure used in this embodiment includes an anchor-free network and a Double-Head network framework connected to it; the Double-Head framework includes a convolutional network and a fully connected network.
The anchor-free network adopts a single-stage framework: features are first extracted by a backbone network, multi-scale feature descriptions are then produced with a feature pyramid, and finally target-box classification and regression are performed. In this embodiment, because of the imbalance between positive and negative samples, Focal Loss is typically used as the classification function. Since an anchor-free design is used, the recall of target boxes is lower than with two-stage manually designed anchors, but processing is more efficient and faster. Finally, the boxes trained in the single stage serve as candidate boxes for the second stage, which is then trained further.
For the classification and regression tasks in target detection, this application adopts the Double-Head network framework. With Double Head, both the convolutional branch and the fully connected branch produce classification and regression results, but classification mainly uses the result of the fully connected branch and regression mainly uses the result of the convolutional branch.
In particular, for the convolution used in box-coordinate regression, this application cross-connects residual modules and non-local convolution modules. The residual modules follow the ResNet residual block; the non-local modules follow the NL Network (Non-Local Network), which breaks the locality constraint of ordinary convolution and, drawing on the idea of traditional filtering, allows a feature-map location to be influenced by more distant locations.
Further, to improve training, this embodiment also refines the loss function used by each network. The loss of this embodiment has three parts: Double-Head provides the convolution loss and the fully connected loss, and the single-stage network provides the RPN (Region Proposal Network) loss, where the RPN here is the candidate-box generation network of the two-stage target detection network of this embodiment.
The loss of the overall network structure is given by:
L = ω_fc·L_fc + ω_conv·L_conv + L_rpn + C_loss
where L is the overall network loss, L_fc the fully connected network loss, L_conv the convolutional network loss, L_rpn the RPN loss, and C_loss the center-point loss.
The coefficients may typically be set to ω_fc = 2.0 and ω_conv = 2.5.
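The overall loss combination above, L = ω_fc·L_fc + ω_conv·L_conv + L_rpn + C_loss with ω_fc = 2.0 and ω_conv = 2.5, can be sketched as a weighted sum. The individual loss values below are placeholders, not real training outputs:

```python
# Weighted combination of the four loss terms, using the coefficients
# given in the text (w_fc = 2.0, w_conv = 2.5).
def total_loss(l_fc, l_conv, l_rpn, c_loss, w_fc=2.0, w_conv=2.5):
    return w_fc * l_fc + w_conv * l_conv + l_rpn + c_loss

# 2.0 * 0.4 + 2.5 * 0.3 + 0.2 + 0.1 = 1.85 (placeholder loss values)
print(total_loss(0.4, 0.3, 0.2, 0.1))
```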
The RPN loss is given by:
L_rpn = (1/N_pos) Σ_{x,y} L_cls(p_{x,y}, p*_{x,y}) + (λ/N_pos) Σ_{x,y} ρ(p*_{x,y}) L_reg(t_{x,y}, t*_{x,y})
where L_cls is the RPN classification loss, for which this embodiment uses Focal Loss (a loss function targeting imbalanced sample distributions), and L_reg is the RPN regression loss, for which this embodiment uses IoU Loss (Intersection over Union loss, a box-coordinate regression loss function). In the formula, N_pos denotes the number of positive samples, and λ is the balance factor of the regression loss, which may be set to 1 in this embodiment. ρ is an indicator function expressing that only positive samples contribute to the regression loss. p_{x,y} is the classification score, p*_{x,y} is the sample label, t_{x,y} is the coordinates of the regressed detection box, and t*_{x,y} is the ground truth of the sample coordinates.
L_cls is the Focal Loss (FL), whose specific form is given below; p_t denotes the probability that the detection box is foreground, and γ and α_t are parameters set to control sample imbalance.
FL = -α_t·(1 - p_t)^γ·log(p_t)
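A minimal sketch of the Focal Loss FL = -α_t·(1 - p_t)^γ·log(p_t) for a single prediction. The default values α_t = 0.25 and γ = 2.0 are common choices, not values specified by this application:

```python
import math

# Focal Loss for one prediction: the (1 - p_t)^gamma factor down-weights
# easy, confident predictions so that hard examples dominate the loss,
# which is how it counters positive/negative sample imbalance.
def focal_loss(p_t, alpha_t=0.25, gamma=2.0):
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less loss than an
# uncertain one.
print(focal_loss(0.9) < focal_loss(0.5))  # True
```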
L_reg is the IoU Loss (IL), whose specific form is given below, where I (intersection) denotes the intersection of the detection box with the ground truth and U (union) denotes their union:
IL = -ln(I/U)
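A sketch of an IoU loss of the form IL = -ln(I/U) for axis-aligned boxes given as (x1, y1, x2, y2). The box representation is an assumption for illustration:

```python
import math

# IoU loss: compute intersection I and union U of two boxes, then
# return -ln(I / U). A perfect overlap (I == U) gives a loss of 0.
def iou_loss(box_a, box_b):
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return -math.log(inter / union)

print(iou_loss((0, 0, 2, 2), (0, 0, 2, 2)))  # identical boxes -> loss of 0
```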
To improve the quality of the initial detection boxes, this embodiment introduces a center-point loss.
First define the distances from a point inside a detection box to its edges: l* denotes the distance from the center point to the left edge of the detection box, r* the distance to the right edge, t* the distance to the top edge, and b* the distance to the bottom edge. The center-point target is then
centerness* = sqrt( (min(l*, r*)/max(l*, r*)) × (min(t*, b*)/max(t*, b*)) )
and the center-point loss C_loss is the cross-entropy between the center-point confidence predicted by the center-point confidence branch and centerness*.
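A sketch of the center-point confidence for a pixel whose distances to the box edges are l*, r*, t*, b*, assuming the FCOS-style definition sqrt(min(l,r)/max(l,r) · min(t,b)/max(t,b)) (an assumption; the application's exact formula images are not reproduced on this page):

```python
import math

# Centerness target: close to 1 for points near the box center, close
# to 0 for points near an edge, so edge predictions can be suppressed.
def centerness(l, r, t, b):
    return math.sqrt((min(l, r) / max(l, r)) * (min(t, b) / max(t, b)))

print(centerness(5, 5, 5, 5))  # exact center -> 1.0
print(centerness(1, 9, 5, 5))  # off-center horizontally -> about 0.333
```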
For the Double-Head loss functions, the losses used in this embodiment differ from the usual split into a classification loss and a regression loss: they are instead split by convolutional branch versus fully connected branch. The convolution loss and fully connected loss are given below, where λ_conv and λ_fc control the proportions of the classification loss and the regression loss within the convolution loss and the fully connected loss, respectively. λ_conv denotes the share of the regression loss in the convolution loss, and 1 - λ_conv the share of the classification loss; λ_fc denotes the share of the classification loss in the fully connected loss, and 1 - λ_fc the share of the regression loss. Here λ_conv = 0.8 and λ_fc = 0.7:
L_conv = λ_conv·L_reg^conv + (1 - λ_conv)·L_cls^conv
L_fc = λ_fc·L_cls^fc + (1 - λ_fc)·L_reg^fc
The classification losses L_cls^conv and L_cls^fc use the cross-entropy loss, and the regression losses L_reg^conv and L_reg^fc use the Smooth L1 loss, which is also a box-coordinate regression loss function.
The Smooth L1 loss is specifically:
smooth_L1(x) = 0.5·x²  if |x| < 1;  |x| - 0.5  otherwise
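A minimal sketch assuming the standard Smooth L1 definition (0.5·x² for |x| < 1, |x| - 0.5 otherwise): quadratic near zero for stable fine adjustments, linear beyond |x| = 1 so large regression errors do not produce exploding gradients:

```python
# Smooth L1 loss for a single coordinate residual x.
def smooth_l1(x):
    return 0.5 * x * x if abs(x) < 1.0 else abs(x) - 0.5

print(smooth_l1(0.5))  # 0.125 (quadratic region)
print(smooth_l1(3.0))  # 2.5 (linear region)
```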
As for the specific training procedure, taking the COCO dataset (a standard public target detection dataset) as an example, training uses SGD (stochastic gradient descent) with an initial learning rate of 0.01. The Double Head is first frozen and the single-stage target detection framework is trained with the center loss (center-point loss); the Double Head is then unfrozen and the entire network structure is trained jointly.
At prediction time, the property that both the convolutional and fully connected branches produce classification and regression outputs is fully exploited. For the classification task, the final output of the network is the probability that a candidate box belongs to a given class, referred to in this embodiment as the prediction score s. Since both the fully connected branch and the convolutional branch produce prediction scores, the final prediction score is given by:
score = s_fc + s_conv·(1 - s_fc)
where s_fc is the prediction score of the fully connected network and s_conv is the prediction score of the convolutional network.
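The fusion rule score = s_fc + s_conv·(1 - s_fc) can be sketched directly; the fused score never exceeds 1 and is at least as large as either branch's score alone:

```python
# Complementary fusion of the two branch scores: the convolutional
# score s_conv only fills in the probability mass the fully connected
# branch left unclaimed, (1 - s_fc).
def fused_score(s_fc, s_conv):
    return s_fc + s_conv * (1.0 - s_fc)

print(fused_score(0.8, 0.5))  # 0.8 + 0.5 * 0.2 = 0.9
```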
It can be seen that this embodiment first processes the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, instead of obtaining them manually or with other detection algorithms as in the two-stage target detection process. The convolutional network and the fully connected network then each perform detection processing on the image according to the initial candidate boxes, producing results for each network, from which the optimal detection results are selected as the classification result and the regression result. That is, the anchor-free target detection method is fused with the two-stage detection method, improving the efficiency of two-stage target detection while ensuring the accuracy and precision of the detection algorithm.
A target detection apparatus for image data provided by an embodiment of this application is introduced below; the target detection apparatus described below and the target detection method described above may be referred to in correspondence with each other.
Please refer to FIG. 2, which is a schematic structural diagram of a target detection apparatus for image data provided by an embodiment of this application.
In this embodiment, the apparatus may include:
an anchor-free processing module 100, configured to process an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
a classification-regression module 200, configured to perform detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
a result filtering module 300, configured to filter the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result according to a preset score function to obtain a classification result and a regression result.
Optionally, the anchor-free processing module 100 may include:
a training unit, configured to process the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes;
an anchor-free detection unit, configured to train with an RPN loss function to obtain the anchor-free target detection network.
Optionally, the classification-regression module 200 may include:
a convolution processing unit, configured to perform detection processing on the image to be detected according to the initial candidate boxes with the convolutional network to obtain the convolution classification result and the convolution regression result, wherein the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
a fully connected processing unit, configured to perform detection processing on the image to be detected according to the initial candidate boxes with the fully connected network to obtain the fully connected classification result and the fully connected regression result.
An embodiment of this application further provides a server, including:
a memory, configured to store a computer program;
a processor, configured to implement the steps of the target detection method described above when executing the computer program.
An embodiment of this application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the target detection method described above.
The computer-readable storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the others, and the same or similar parts of the embodiments may be referred to each other. Since the apparatus disclosed in the embodiments corresponds to the method disclosed in the embodiments, its description is relatively brief; refer to the method description for relevant details.
Those skilled in the art may further appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of this application.
The steps of the methods or algorithms described in the embodiments disclosed herein can be implemented directly in hardware, in a software module executed by a processor, or in a combination of the two. The software module may reside in random access memory (RAM), internal memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the technical field.
The target detection method for image data, target detection apparatus, server, and computer-readable storage medium provided by this application have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of this application; the description of the above embodiments is only intended to help understand the method of this application and its core idea. It should be noted that those of ordinary skill in the art can make several improvements and modifications to this application without departing from its principles, and these improvements and modifications also fall within the protection scope of the claims of this application.

Claims (10)

  1. A target detection method for image data, characterized by comprising:
    processing an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
    performing detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
    filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with a score function to obtain a classification result and a regression result.
  2. The target detection method according to claim 1, characterized in that processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes comprises:
    processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with an RPN loss function.
  3. The target detection method according to claim 1, characterized in that processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes comprises:
    introducing a center-point loss into an RPN loss to obtain a center-point RPN loss;
    processing the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes, wherein the anchor-free target detection network is a network trained with the center-point RPN loss.
  4. The target detection method according to claim 1, characterized in that performing detection processing on the image to be detected according to the initial candidate boxes with the convolutional network and the fully connected network, respectively, to obtain the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result comprises:
    performing detection processing on the image to be detected according to the initial candidate boxes with the convolutional network to obtain the convolution classification result and the convolution regression result, wherein the convolutional network is obtained by cross-connecting 3 residual modules and 2 non-local convolution modules;
    performing detection processing on the image to be detected according to the initial candidate boxes with the fully connected network to obtain the fully connected classification result and the fully connected regression result.
  5. The target detection method according to claim 1, characterized by further comprising:
    training on training data with a convolution loss to obtain the convolutional network;
    training on training data with a fully connected loss to obtain the fully connected network.
  6. The target detection method according to claim 1, characterized in that filtering the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with the score function to obtain the classification result and the regression result comprises:
    computing the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result with the score function to obtain a score for each of these results;
    checking the score of the convolution classification result, the score of the convolution regression result, the score of the fully connected classification result, and the score of the fully connected regression result against a preset score criterion, and taking the results that meet the preset score criterion as the classification result and the regression result.
  7. A target detection apparatus for image data, characterized by comprising:
    an anchor-free processing module, configured to process an image to be detected with an anchor-free target detection network to obtain initial candidate boxes;
    a classification-regression module, configured to perform detection processing on the image to be detected according to the initial candidate boxes with a convolutional network and a fully connected network, respectively, to obtain a convolution classification result, a convolution regression result, a fully connected classification result, and a fully connected regression result;
    a result filtering module, configured to filter the convolution classification result, the convolution regression result, the fully connected classification result, and the fully connected regression result according to a preset score function to obtain a classification result and a regression result.
  8. The target detection apparatus according to claim 7, characterized in that the anchor-free processing module comprises:
    a training unit, configured to process the image to be detected with the anchor-free target detection network to obtain the initial candidate boxes;
    an anchor-free detection unit, configured to train with an RPN loss function to obtain the anchor-free target detection network.
  9. A server, characterized by comprising:
    a memory, configured to store a computer program;
    a processor, configured to implement the steps of the target detection method according to any one of claims 1 to 6 when executing the computer program.
  10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the steps of the target detection method according to any one of claims 1 to 6 are implemented.
PCT/CN2020/098445 2020-02-20 2020-06-28 一种图像数据的目标检测方法及相关装置 WO2021164168A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010106107.X 2020-02-20
CN202010106107.XA CN111339891A (zh) 2020-02-20 2020-02-20 一种图像数据的目标检测方法及相关装置

Publications (1)

Publication Number Publication Date
WO2021164168A1 true WO2021164168A1 (zh) 2021-08-26

Family

ID=71185559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/098445 WO2021164168A1 (zh) 2020-02-20 2020-06-28 一种图像数据的目标检测方法及相关装置

Country Status (2)

Country Link
CN (1) CN111339891A (zh)
WO (1) WO2021164168A1 (zh)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989558A (zh) * 2021-10-28 2022-01-28 哈尔滨工业大学 基于迁移学习与边界框调节的弱监督目标检测方法
CN114066900A (zh) * 2021-11-12 2022-02-18 北京百度网讯科技有限公司 图像分割方法、装置、电子设备和存储介质
CN114648685A (zh) * 2022-03-23 2022-06-21 成都臻识科技发展有限公司 一种anchor-free算法转换为anchor-based算法的方法及系统
CN115017540A (zh) * 2022-05-24 2022-09-06 贵州大学 一种轻量级隐私保护目标检测方法和系统
CN115901789A (zh) * 2022-12-28 2023-04-04 东华大学 基于机器视觉的布匹瑕疵检测系统
CN116079749A (zh) * 2023-04-10 2023-05-09 南京师范大学 基于聚类分离条件随机场的机器人视觉避障方法及机器人
CN116883393A (zh) * 2023-09-05 2023-10-13 青岛理工大学 一种基于无锚框目标检测算法的金属表面缺陷检测方法

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339891A (zh) * 2020-02-20 2020-06-26 苏州浪潮智能科技有限公司 一种图像数据的目标检测方法及相关装置
CN112001448A (zh) * 2020-08-26 2020-11-27 大连信维科技有限公司 一种形状规则小物体检测方法
CN113160144B (zh) * 2021-03-25 2023-05-26 平安科技(深圳)有限公司 目标物检测方法、装置、电子设备及存储介质
CN116385952B (zh) * 2023-06-01 2023-09-01 华雁智能科技(集团)股份有限公司 配网线路小目标缺陷检测方法、装置、设备及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169315A1 (en) * 2015-12-15 2017-06-15 Sighthound, Inc. Deeply learned convolutional neural networks (cnns) for object localization and classification
CN110633731A (zh) * 2019-08-13 2019-12-31 杭州电子科技大学 一种基于交错感知卷积的单阶段无锚框目标检测方法
CN111339891A (zh) * 2020-02-20 2020-06-26 苏州浪潮智能科技有限公司 一种图像数据的目标检测方法及相关装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10565496B2 (en) * 2016-02-04 2020-02-18 Nec Corporation Distance metric learning with N-pair loss

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169315A1 (en) * 2015-12-15 2017-06-15 Sighthound, Inc. Deeply learned convolutional neural networks (cnns) for object localization and classification
CN110633731A (zh) * 2019-08-13 2019-12-31 杭州电子科技大学 一种基于交错感知卷积的单阶段无锚框目标检测方法
CN111339891A (zh) * 2020-02-20 2020-06-26 苏州浪潮智能科技有限公司 一种图像数据的目标检测方法及相关装置

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
TIAN ZHI; SHEN CHUNHUA; CHEN HAO; HE TONG: "FCOS: Fully Convolutional One-Stage Object Detection", 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 27 October 2019 (2019-10-27), pages 9626 - 9635, XP033723920, DOI: 10.1109/ICCV.2019.00972 *
YUE WU; YINPENG CHEN; LU YUAN; ZICHENG LIU; LIJUAN WANG; HONGZHI LI; YUN FU: "Rethinking Classification and Localization for Object Detection", ARXIV.ORG, 13 April 2019 (2019-04-13), pages 1 - 13, XP081548244 *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113989558A (zh) * 2021-10-28 2022-01-28 哈尔滨工业大学 基于迁移学习与边界框调节的弱监督目标检测方法
CN113989558B (zh) * 2021-10-28 2024-04-30 哈尔滨工业大学 基于迁移学习与边界框调节的弱监督目标检测方法
CN114066900A (zh) * 2021-11-12 2022-02-18 北京百度网讯科技有限公司 图像分割方法、装置、电子设备和存储介质
CN114648685A (zh) * 2022-03-23 2022-06-21 成都臻识科技发展有限公司 一种anchor-free算法转换为anchor-based算法的方法及系统
CN115017540A (zh) * 2022-05-24 2022-09-06 贵州大学 一种轻量级隐私保护目标检测方法和系统
CN115901789A (zh) * 2022-12-28 2023-04-04 东华大学 基于机器视觉的布匹瑕疵检测系统
CN116079749A (zh) * 2023-04-10 2023-05-09 南京师范大学 基于聚类分离条件随机场的机器人视觉避障方法及机器人
CN116883393A (zh) * 2023-09-05 2023-10-13 青岛理工大学 一种基于无锚框目标检测算法的金属表面缺陷检测方法
CN116883393B (zh) * 2023-09-05 2023-12-01 青岛理工大学 一种基于无锚框目标检测算法的金属表面缺陷检测方法

Also Published As

Publication number Publication date
CN111339891A (zh) 2020-06-26

Similar Documents

Publication Publication Date Title
WO2021164168A1 (zh) 一种图像数据的目标检测方法及相关装置
CN110533084B (zh) 一种基于自注意力机制的多尺度目标检测方法
WO2020221298A1 (zh) 文本检测模型训练方法、文本区域、内容确定方法和装置
KR102114357B1 (ko) 풀링 타입에 대한 정보를 포함하는 테이블을 작성하기 위한 방법, 장치 및 이를 이용한 테스팅 방법, 테스팅 장치
CN109613002B (zh) 一种玻璃缺陷检测方法、装置和存储介质
JP2024509411A (ja) 欠陥検出方法、装置及びシステム
CN110223292A (zh) 图像评估方法、装置及计算机可读存储介质
CN111860587B (zh) 一种用于图片小目标的检测方法
CN112819748B (zh) 一种带钢表面缺陷识别模型的训练方法及装置
CN105184225B (zh) 一种多国纸币图像识别方法和装置
CN115331245B (zh) 一种基于图像实例分割的表格结构识别方法
CN111209907A (zh) 一种复杂光污染环境下产品特征图像人工智能识别方法
CN110399873A (zh) 身份证图像获取方法、装置、电子设备及存储介质
CN116539619B (zh) 产品缺陷检测方法、系统、装置及存储介质
CN110599453A (zh) 一种基于图像融合的面板缺陷检测方法、装置及设备终端
TW202127371A (zh) 基於圖像的瑕疵檢測方法及電腦可讀存儲介質
CN114663380A (zh) 一种铝材表面缺陷检测方法、存储介质及计算机系统
CN111598175A (zh) 一种基于在线难例挖掘方式的检测器训练优化方法
CN113343755A (zh) 红细胞图像中的红细胞分类系统及方法
WO2022121164A1 (zh) 封停敏感词预测方法、装置、计算机设备及存储介质
CN111222534A (zh) 一种基于双向特征融合和更平衡l1损失的单发多框检测器优化方法
CN111860265B (zh) 一种基于样本损失的多检测框损失均衡道路场景理解算法
CN111797685B (zh) 表格结构的识别方法及装置
CN116958052A (zh) 一种基于yolo和注意力机制的印刷电路板缺陷检测方法
CN111340000A (zh) 一种针对pdf文档表格提取优化方法及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20920250

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20920250

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20920250

Country of ref document: EP

Kind code of ref document: A1