WO2021085784A1 - Learning method of object detection model, and object detection device in which object detection model is executed
- Publication number: WO2021085784A1
- Application: PCT/KR2020/007403 (KR2020007403W)
- Authority: WIPO (PCT)
- Prior art keywords: object detection, cells, detection model, box, information
Classifications
- G06N3/04—Neural networks; Architecture, e.g. interconnection topology
- G06N3/0455—Auto-encoder networks; Encoder-decoder networks
- G06N3/0464—Convolutional networks [CNN, ConvNet]
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06V10/10—Image acquisition
- G06V10/30—Image preprocessing; noise filtering
- G06V10/40—Extraction of image or video features
- G06V10/454—Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
Definitions
- The present invention relates to a learning method of an object detection model and an object detection apparatus in which the object detection model is executed, and more specifically, to an efficient learning method of an object detection model through simplified feature fusion based on deconvolution.
- Conventional object detection methods include anchor-based, cell-based, and anchorless object detection methods.
- In the anchor-based object detection method, an anchor is placed in each cell constituting the feature map of the object detection network, and objectness, class score, object location, and shape are learned based on the placed anchors.
- Because this anchor-based method must perform object classification and region estimation in proportion to the number of anchors, its computational cost is large, and in particular its performance varies widely depending on how the anchors are set.
- The cell-based object detection method divides an image into a plurality of cells and predicts, for each cell, the existence of an object and a class probability through preset bounding boxes.
- Because the shapes of these bounding boxes depend on what was learned, detection accuracy degrades for new box shapes caused by occlusion, size changes, view changes, and the like.
- The anchorless object detection method converts the ground-truth (GT) region defined on the input image into a region based on the feature-map size used for object detection, and then learns the area within a certain bound of the center of the converted GT region. Because the region over which the learning loss is calculated varies with object size, this method suffers from a learning deviation that depends on object size.
- The present invention provides a method and apparatus that, by introducing a new distance-based GT-cell encoding method, can calculate a learning loss for an object detection model that is robust to changes in object size or shape and to occlusion in deep-learning feature maps.
- The present invention also provides a method and apparatus in which a plurality of deconvolution layers, each designed for the characteristics of different objects, are arranged in parallel behind the convolution layers constituting an object detection model, and the convolution and deconvolution layers are trained sequentially and iteratively, securing the effectiveness of the annotation and refinement cost of the training data.
- A method of learning an object detection model according to an embodiment of the present invention includes: dividing a target image to be trained into a plurality of cells having a predetermined size; generating, through a plurality of convolution layers included in the object detection module, encoding feature maps having different resolutions corresponding to each of the plurality of convolution layers from the target image; generating a decoding feature map, through a single deconvolution layer included in the object detection module, by fusing the low-resolution encoding feature map generated by a first convolution layer located at the last stage of the plurality of convolution layers with the high-resolution encoding feature map generated by a second convolution layer located at the stage before the first convolution layer; and detecting object information predicted in each of the plurality of cells using the generated decoding feature map through an object detection layer included in the object detection module, wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.
- The object information predicted in each of the plurality of cells may include at least one of objectness information, class information of the predicted object, and bounding-box information corresponding to the region in which the predicted object is expected to exist.
- The method may further include determining the final detection area of an object by removing overlapping bounding-box areas from the object information predicted in each of the plurality of cells.
- The method may further include calculating a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground-truth (GT) box.
- The calculating of the learning loss of the object detection model may include: performing matching between each of the plurality of cells and the GT box based on the distance between each cell and the GT box; and calculating a loss function using the object information predicted in the cells matched to the GT box and the object information corresponding to the GT box.
- The performing of the matching between each of the plurality of cells and the GT box may include: determining the distance between the center point of each of the plurality of cells and the center point of the GT box using the cell size of each cell and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box; and matching cells whose determined distance is less than or equal to a preset reference to the GT box.
- The method may further include adjusting parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.
- An object detection apparatus on which an object detection model is executed according to an embodiment of the present invention includes a processor, and the object detection model includes: a plurality of convolution layers that generate encoding feature maps having different resolutions for a target image divided into a plurality of cells having a predetermined size; a single deconvolution layer that generates a decoding feature map by fusing the low-resolution encoding feature map generated by a first convolution layer located at the last stage of the plurality of convolution layers with the high-resolution encoding feature map generated by a second convolution layer located before the first convolution layer; and an object detection layer that detects object information predicted in each of the plurality of cells using the generated decoding feature map, wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.
- The object information predicted in each of the plurality of cells may include at least one of objectness information, class information of the predicted object, and bounding-box information corresponding to the region in which the predicted object is expected to exist.
- The processor may determine the final object detection area by removing overlapping bounding-box areas from the object information of each of the plurality of cells predicted through the object detection layer.
- The processor may calculate a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground-truth (GT) box.
- The processor may perform matching between each of the plurality of cells and the GT box based on the distance between each cell and the GT box, and may calculate the learning loss of the object detection model by computing a loss function using the object information predicted in the cells matched to the GT box and the object information corresponding to the GT box.
- The processor may determine the distance between the center point of each of the plurality of cells and the center point of the GT box using the cell size of each cell and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box, and may match cells whose determined distance is less than or equal to a preset reference to the GT box.
- The processor may adjust parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.
- In the object detection model, a plurality of deconvolution layers designed according to the characteristics of different objects may be arranged in parallel behind the plurality of convolution layers.
- The processor may train the plurality of convolution layers and the plurality of deconvolution layers sequentially and iteratively, using the training data corresponding to each of the deconvolution layers arranged in parallel.
- By providing a new distance-based GT-cell encoding method, the present invention makes it possible to calculate a learning loss for an object detection model that is robust to changes in object size or shape and to occlusion in deep-learning feature maps.
- In addition, by arranging a plurality of deconvolution layers designed for the characteristics of different objects in parallel behind the convolution layers constituting an object detection model and training the convolution and deconvolution layers sequentially and iteratively, the present invention secures the effectiveness of the annotation and refinement cost of the training data.
- The present invention can also improve the performance of an object detection model by providing a modified deconvolution-layer structure capable of increasing the receptive field and capacity of the model.
- FIG. 1 is a diagram showing an object detection system according to an embodiment of the present invention.
- FIG. 2 is a diagram illustrating the structure of an object detection model based on deconvolution feature fusion according to an embodiment of the present invention.
- FIG. 3 is a diagram illustrating a multi-task based object detection apparatus according to an embodiment of the present invention.
- FIG. 4 is a diagram illustrating a method of calculating a learning loss of an object detection model according to an embodiment of the present invention.
- FIG. 5 is a flowchart illustrating an object detection method according to an embodiment of the present invention.
- FIG. 1 is a diagram showing an object detection system according to an embodiment of the present invention.
- When a target image 120 is received, the object detection device 110 constituting the object detection system 100 of the present invention can detect various objects present in the target image 120 using the deep-learning-based object detection model 111.
- The object detection device 110 identifies the location and class information of the detected objects and then provides the user with a result image 130, shown on a separate display (not shown), in which the location and class information of the identified objects are displayed.
- For example, the object detection model 111 of the present invention may be configured based on a Fully Convolutional Network (FCN).
- An FCN-based object detection model uses upsampling and combines the convolutional features of several later layers for dense, pixel-wise prediction.
- Because such a model reduces resolution through several stages of convolution and pooling and then restores the reduced resolution through upsampling, detail in the target image 120 can be lost or excessively smoothed, making object detection imprecise.
- To solve this problem, the present invention adds a deconvolution network that is symmetric to the convolution network constituting the FCN-based object detection model 111, thereby resolving the resolution problem of the upsampled target image 120.
- Hereinafter, the object detection model 111 of the present invention is referred to as an object detection model based on deconvolution feature fusion.
- FIG. 2 is a diagram illustrating the structure of an object detection model based on deconvolution feature fusion according to an embodiment of the present invention.
- Conventional deep-learning-based object detection models use anchors and multi-scale fusion information to achieve detection that is robust to the shape and size changes of various objects.
- However, their complexity grows as the number of anchors and the amount of multi-scale fusion information increase; such models therefore require many learning parameters, and the amount of computation grows with the number of parameters.
- The object detection model 111 provided by the present invention needs no anchors and reduces the amount of computation by using fewer learning parameters through a simplified feature-fusion method.
- More specifically, the object detection model 111 of the present invention consists of (1) a plurality of convolution layers 210 that generate encoding feature maps having different resolutions for a target image divided into a plurality of cells having a constant size, (2) a single deconvolution layer 220 that generates a decoding feature map by fusing the low-resolution encoding feature map generated by the first convolution layer, located at the last stage of the convolution layers, with the high-resolution encoding feature map generated by the second convolution layer, located immediately before it, and (3) an object detection layer 230 that detects the object information predicted in each of the plurality of cells using the generated decoding feature map.
- When a target image 120 divided into a plurality of cells having a constant size is input, the object detection model 111 generates encoding feature maps having different resolutions through the plurality of convolution layers 210. Referring to FIG. 2, when a 1248x384x3 target image 120 is input to the object detection model 111, encoding feature maps with different resolutions (e.g., Conv_5: 2048 channels, Conv_4: 1024 channels) are generated through the convolution layers 210 corresponding to Conv_1 to Conv_5.
- The object detection model 111 uses a ResNet50 model as the encoder corresponding to the plurality of convolution layers 210, but is not limited thereto; various models such as VGG, Xception, ResNeXt, and ShuffleNet can also be applied.
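- For illustration, the two encoder outputs fused in FIG. 2 can be taken from a stock ResNet50 backbone. The sketch below is a minimal, hedged example using torchvision's feature extractor; the node names (layer3, layer4), the NCHW layout, and the use of torchvision itself are assumptions of this sketch rather than details stated in the patent.

```python
import torch
from torchvision.models import resnet50
from torchvision.models.feature_extraction import create_feature_extractor

# Stock ResNet50: layer3 output plays the role of Conv_4 (stride 16, 1024 ch)
# and layer4 the role of Conv_5 (stride 32, 2048 ch) from FIG. 2.
backbone = resnet50(weights=None)
encoder = create_feature_extractor(
    backbone, return_nodes={"layer3": "conv4", "layer4": "conv5"})

img = torch.randn(1, 3, 384, 1248)   # FIG. 2 input is 1248x384x3 (here NCHW)
feats = encoder(img)
print(feats["conv4"].shape)          # torch.Size([1, 1024, 24, 78])
print(feats["conv5"].shape)          # torch.Size([1, 2048, 12, 39])
```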
- The object detection model 111 can generate a decoding feature map through the single deconvolution layer 220 from the encoding feature maps generated by the later convolution layers among the plurality of convolution layers 210. Referring to FIG. 2, the encoding feature maps generated by Conv_5 (the first convolution layer, at the last stage) and Conv_4 (the second convolution layer, immediately before it) are input to the deconvolution layer 220.
- the encoding feature map of Conv_5 may have a lower resolution than the encoding feature map of Conv_4.
- the single deconvolution layer 220 can convert the low-resolution encoding feature map to high resolution by sequentially performing 1x1 convolution, upsampling, and 3x3 convolution on the low-resolution encoding feature map.
- the 3x3 convolution can be viewed as an element that determines a receptive field when generating a decoding feature map for object detection.
- In order to consider a wider receptive field, the object detection model 111 of the present invention enlarges the receptive field by adding a 3x3 convolution layer 221 to the deconvolution layer 220 as shown in FIG. 2, and increases the expressive power and capacity of the object detection model by increasing the depth of the deconvolution layer 220.
- Finally, the single deconvolution layer 220 concatenates the low-resolution encoding feature map converted to high resolution with the high-resolution encoding feature map received from the convolution layers 210, and then performs a 1x1 convolution, generating a decoding feature map with high-resolution fused features. Referring to FIG. 2, a 78x24x512 decoding feature map (512 channels) is generated in the deconvolution layer 220 from the encoding feature maps of Conv_5 and Conv_4.
- the object detection layer 230 may generate an object detection feature map including object information predicted from each of a plurality of cells constituting the target image 120 by using the decoding feature map.
- Referring to FIG. 2, a 78x24x8 object detection feature map is generated by applying a 1x1x8 convolution to the decoding feature map in the object detection layer 230.
- The object detection feature map includes, for each of the plurality of cells, information on the presence or absence of a predicted object (1 channel), class information of the predicted object (3 channels), and bounding-box information corresponding to the area where the predicted object is expected to exist (4 channels).
- Although the class information is limited to 3 channels in FIG. 2, the number of class channels may change according to the types of objects to be detected.
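- The detection head itself reduces to a single 1x1 convolution. The sketch below follows the channel layout of FIG. 2 (1 objectness + 3 class + 4 box channels); the sigmoid and softmax used to read out the channels are assumptions, since the patent does not specify the activations.

```python
import torch
import torch.nn as nn

# Object detection layer 230: a 1x1 convolution maps the 512-channel decoding
# feature map to 8 channels per cell (1 objectness + 3 class + 4 box).
head = nn.Conv2d(512, 1 + 3 + 4, kernel_size=1)

dec = torch.randn(1, 512, 24, 78)                 # decoding feature map
out = head(dec)                                   # -> (1, 8, 24, 78)
objectness = torch.sigmoid(out[:, 0:1])           # per-cell object presence
class_scores = torch.softmax(out[:, 1:4], dim=1)  # 3 class channels
boxes = out[:, 4:8]                               # 4 bounding-box channels
```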
- The object detection apparatus 110 may determine the final object detection area by removing overlapping bounding-box areas from the object detection feature map generated through the object detection layer 230. For example, referring to FIG. 2, the object detection device 110 may determine the final object detection area in the target image 120 by applying Non-Maximum Suppression (NMS) to the generated object detection feature map.
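- A minimal sketch of this NMS step is shown below, using torchvision's built-in operator; the 0.5 IoU threshold and the box/score values are assumptions for illustration only.

```python
import torch
from torchvision.ops import nms

# boxes: (N, 4) as (x1, y1, x2, y2); scores: (N,) per-box objectness.
boxes = torch.tensor([[10., 10., 60., 60.],
                      [12., 12., 62., 62.],    # near-duplicate of box 0
                      [100., 40., 160., 90.]])
scores = torch.tensor([0.9, 0.8, 0.7])
keep = nms(boxes, scores, iou_threshold=0.5)   # indices of surviving boxes
final_boxes = boxes[keep]                      # box 1 is suppressed
```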
- FIG. 3 is a diagram illustrating a multi-task based object detection apparatus according to an embodiment of the present invention.
- the basic structure of the multi-task model can be largely classified into structures of hard parameter sharing and soft parameter sharing.
- The hard-parameter-sharing structure obtains common features from a sharing layer, converts the acquired common features into features of each task domain, and then produces each task's result. This minimizes redundant operations between tasks, reducing the number of parameters and thus the computational complexity.
- During training, a multi-task model learns a large amount of diversified data across its several tasks, which effectively prevents overfitting and improves the performance of the learned model by enriching the feature representation of the sharing layer.
- the object detection apparatus 110 of the present invention applies a multi-task structure to an object detection model as shown in FIG. 3.
- decoders 320 to 350 each designed according to characteristics/types/purposes may be connected in parallel to a backbone 310 which is a sharing layer of an object detection model.
- the backbone 310 may correspond to the convolutional layers 210 of the object detection model 111 of FIG. 2, and the decoders 320 to 350 may correspond to the deconvolutional layer 220.
- the backbone 310 may be configured with an encoder such as VGG or ResNet, or may be configured with a structure such as Feature Pyramid Network or Atrous Spatial Pyramid Pooling configured based on such an encoder. Further, the decoders 320 to 350 connected to the backbone 310 may be configured in a structure capable of being plug and play (PnP) according to the purpose of use of the user.
- The object detection apparatus 110 having the multi-task structure of the present invention can train the entire object detection model through sequential learning of single object detection models, each consisting of the backbone 310 plus one of the decoders 320 to 350.
- For example, the object detection device 110 trains the backbone 310, which is the sharing layer, together with the first decoder 320 using the training data of the first decoder 320, and then trains the backbone 310 together with the second decoder 330 using the training data of the second decoder 330.
- The object detection apparatus 110 may then apply the same procedure in turn to the training data of the third decoder 340 and of the fourth decoder 350, repeating the sequence.
- This learning method is similar in spirit to ensemble learning: since the backbone 310, the shared layer, sees a large amount of diversified data, overfitting can be prevented. In addition, this method improves the ability to express training data from various angles as highly correlated feature data, improving the performance of each of the single decoders 320 to 350.
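- A hedged sketch of this sequential, iterative schedule is given below. The `backbone`, `decoders`, `loaders`, and `loss_fn` arguments are hypothetical stand-ins, and the SGD optimizer and learning rate are assumptions; the patent specifies only the ordering, not the optimization details.

```python
import itertools
import torch

def train_multitask(backbone, decoders, loaders, loss_fn, rounds=10, lr=1e-3):
    """Train backbone+decoder pairs one task at a time, repeating the
    whole sequence so the shared backbone sees every task's data."""
    for _ in range(rounds):
        for decoder, loader in zip(decoders, loaders):   # decoder 320, 330, ...
            params = itertools.chain(backbone.parameters(),
                                     decoder.parameters())
            opt = torch.optim.SGD(params, lr=lr)
            for images, targets in loader:
                preds = decoder(backbone(images))        # shared features -> task head
                loss = loss_fn(preds, targets)
                opt.zero_grad()
                loss.backward()                          # updates backbone and decoder
                opt.step()
```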
- In this way, even if the training data set for a single object detection model designed for a particular characteristic/type/purpose is small, the object detection apparatus 110 with the multi-task structure can train it while supplementing it with the training data of other, similar object detection models, alleviating the shortage of training data. Accordingly, the multi-task object detection apparatus 110 of the present invention reduces not only the cost of annotating and refining training data for each characteristic/type/purpose but also the cost of designing and training an object detection model for each kind of training data.
- For example, the first decoder 320 may be an object detection model for the COCO data set, and the second decoder 330 may be an object detection model for the KITTI data set.
- The COCO data set covers 80 object classes and is an open data set aggregating many situations; its annotated ground truth (GT) does not account for the occluded area of an object.
- The KITTI data set covers three object classes in road-driving situations, and its annotated GT does account for the occluded area of an object. The characteristics, types, and detection purposes of the two data sets therefore differ, and a separate object detection model, corresponding to a decoder, must be configured for each.
- By sequentially and iteratively training the object detection models configured separately according to the characteristics/types/purposes of each data set, the multi-task object detection apparatus 110 of the present invention can reduce both the training cost and the cost of designing an object detection model for the characteristics and types of each data set.
- FIG. 4 is a diagram illustrating a method of calculating a learning loss (Loss) of an object detection model according to an embodiment of the present invention.
- The object detection apparatus 110 of the present invention provides a distance-based GT-cell encoding method that, without separate anchors, simultaneously learns, for each cell, the information on the existence of the predicted object, the class information of the predicted object, and the bounding-box information corresponding to the region in which the predicted object is expected to exist.
- the object detection apparatus 110 may perform matching between the cell and the GT box based on the distance between the cell and the GT box.
- Here, d_x and d_y denote the distances between the cell and the GT box; specifically, they are the distances between the center point of each of the plurality of cells and the center point of the GT box, and can be expressed as Equation 1 below.
- In Equation 1, S denotes the stride size (cell size) of the feature map, w and h denote the width and height of the GT box, and a, b, and c denote the relative, minimum, and maximum distance reference ratios, respectively.
- the object detection apparatus 110 may perform matching between the cell and the GT box when Equation 2 below is satisfied.
- Based on the maximum distance criterion, the object detection apparatus 110 does not match a cell whose distance from the GT box exceeds a certain value, which reduces the variation in cell selection according to object size. In this way, the object detection apparatus 110 reduces the deviation in the degree of learning with respect to shape changes of the objects to be detected, enabling more generalized learning.
- The object detection apparatus 110 may calculate the learning loss only for matched cells, using the GT's class and bounding-box information together with the class and bounding-box information predicted in the matched cell. Since this distance-based GT-cell encoding method learns the GT per cell rather than per k anchors, the number of output channels is C+4, where C denotes the number of classes and 4 denotes the region information of the predicted object.
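- Since Equations 1 and 2 are not reproduced in this text, the sketch below implements one plausible reading of the matching rule: the per-axis distance between cell center and GT center is compared against a radius equal to a relative fraction a of the GT size, clamped between minimum and maximum multiples (b, c) of the stride S. The exact equations, the default ratio values, and the loss terms are all assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def match_cells_to_gt(cell_centers, gt_center, gt_wh, stride,
                      a=0.5, b=1.0, c=4.0):
    """cell_centers: (N, 2); gt_center: (2,); gt_wh: (2,) = (w, h).
    Returns a boolean mask over cells matched to the GT box."""
    d = (cell_centers - gt_center).abs()                  # (d_x, d_y) per cell
    radius = (a * gt_wh).clamp(min=b * stride, max=c * stride)
    return (d <= radius).all(dim=-1)

def cell_loss(pred_obj, pred_cls, pred_box, gt_cls, gt_box, matched):
    """Loss over matched cells only: objectness + class + box terms
    (BCE, cross-entropy, and L1 are assumed forms; gt_cls is a 0-dim
    long tensor holding the GT class index, gt_box a (4,) tensor)."""
    loss = F.binary_cross_entropy_with_logits(pred_obj, matched.float())
    if matched.any():
        n = int(matched.sum())
        loss = loss + F.cross_entropy(pred_cls[matched], gt_cls.expand(n))
        loss = loss + F.l1_loss(pred_box[matched], gt_box.expand(n, 4))
    return loss
```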
- FIG. 5 is a flowchart illustrating an object detection method according to an embodiment of the present invention.
- the object detection apparatus 100 may divide the target image to be learned into a plurality of cells having a predetermined size.
- The object detection apparatus 100 may generate, from the target image, encoding feature maps having different resolutions corresponding to each of the plurality of convolution layers included in the object detection module. In this case, various models such as ResNet50, VGG, Xception, ResNeXt, and ShuffleNet may be applied as the encoder corresponding to the plurality of convolution layers.
- Then, through the single deconvolution layer included in the object detection module, the object detection device 100 may generate a decoding feature map by fusing the low-resolution encoding feature map generated by the first convolution layer, located at the last stage of the plurality of convolution layers, with the high-resolution encoding feature map generated by the second convolution layer, located at the stage before the first convolution layer.
- a single deconvolution layer can convert low-resolution encoding feature maps to high resolution by sequentially performing 1x1 convolution, upsampling, and 3x3 convolution on low-resolution encoding feature maps.
- In order to consider a wider receptive field, the object detection apparatus 110 of the present invention enlarges the receptive field by adding a 3x3 convolution layer to the single deconvolution layer, and increases the expressive power and capacity of the object detection model by increasing the depth of the deconvolution layer.
- The object detection device 110 concatenates the low-resolution encoding feature map converted to high resolution through the single deconvolution layer with the high-resolution encoding feature map received from the convolution layers, and then performs a 1x1 convolution, finally generating a decoding feature map with high-resolution fused features.
- the object detection apparatus 100 may detect object information predicted in each of a plurality of cells by using a decoding feature map through an object detection layer included in the object detection module.
- The object information predicted in each of the plurality of cells includes information on the existence of the object predicted in each cell, class information of the predicted object, and bounding-box information corresponding to the area in which the predicted object is expected to exist.
- The object detection apparatus 100 may determine the final object detection area by removing overlapping bounding-box areas from the object information predicted in each of the plurality of cells.
- The object detection apparatus 100 may calculate the learning loss of the object detection model using distance information between each of the plurality of cells and a preset GT box. More specifically, the object detection apparatus 100 performs matching between each of the plurality of cells and the GT box based on the distance between each cell and the GT box, and calculates the learning loss of the object detection model by computing a loss function using the object information predicted in the cells matched to the GT box and the object information corresponding to the GT box.
- In this case, the object detection apparatus 100 determines the distance between the center point of each of the plurality of cells and the center point of the GT box using the cell size of each cell and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box, and matches cells whose determined distance is less than or equal to a preset reference to the GT box.
- The object detection apparatus 110 may optimize the performance of the object detection model by adjusting the parameter values of the object detection model so that the calculated learning loss is minimized.
- the method according to the present invention is written as a program that can be executed on a computer and can be implemented in various recording media, such as a magnetic storage medium, an optical reading medium, and a digital storage medium.
- Implementations of the various techniques described herein may be realized in digital electronic circuitry, or in computer hardware, firmware, software, or combinations thereof. Implementations may be realized as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., a machine-readable storage device (computer-readable medium) or a propagated signal, for processing by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be processed on one computer or on multiple computers at one site, or to be distributed across multiple sites interconnected by a communication network.
- processors suitable for processing a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
- the processor will receive instructions and data from read-only memory or random access memory or both.
- Elements of the computer may include at least one processor that executes instructions and one or more memory devices that store instructions and data.
- A computer may also include, or be operatively coupled to, one or more mass storage devices that store data, such as magnetic disks, magneto-optical disks, or optical disks, so as to receive data from them, transmit data to them, or both.
- Information carriers suitable for embodying computer program instructions and data include, for example, semiconductor memory devices; magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM (Compact Disc Read-Only Memory) and DVD (Digital Video Disc); magneto-optical media such as floptical disks; and ROM (Read-Only Memory), RAM (Random Access Memory), flash memory, EPROM (Erasable Programmable ROM), and EEPROM (Electrically Erasable Programmable ROM).
- the processor and memory may be supplemented by or included in a special purpose logic circuit structure.
- the computer-readable medium may be any available medium that can be accessed by a computer, and may include both a computer storage medium and a transmission medium.
Abstract
Disclosed are a learning method of an object detection model, and an object detection device in which the object detection model is executed. A learning method of an object detection model may comprise the steps of: dividing a target image to be learned into a plurality of cells having a predetermined size; generating encoding feature maps having different resolutions corresponding to a plurality of respective convolution layers included in an object detection module from the target image via the plurality of convolution layers; generating a decoding feature map by converging a low-resolution encoding feature map generated by a first convolution layer located at the last stage of the plurality of convolution layers and a high-resolution encoding feature map generated by a second convolution layer located at a previous stage of the first convolution layer, via a single deconvolution layer included in the object detection module; and detecting predicted object information from each of the plurality of cells by using the generated decoding feature map, via an object detection layer included in the object detection module, wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.
Description
The present invention relates to a learning method of an object detection model and an object detection apparatus in which the object detection model is executed, and more specifically, to an efficient learning method of an object detection model through simplified feature fusion based on deconvolution.

Conventional object detection methods include anchor-based, cell-based, and anchorless object detection methods. First, in the anchor-based object detection method, an anchor is placed in each cell constituting the feature map of the object detection network, and objectness, class score, object location, and shape are learned based on the placed anchors. Because this method must perform object classification and region estimation in proportion to the number of anchors, its computational cost is large, and in particular its performance varies widely depending on how the anchors are set.

The cell-based object detection method divides an image into a plurality of cells and predicts, for each cell, the existence of an object and a class probability through preset bounding boxes. Because the shapes of these bounding boxes depend on what was learned, detection accuracy degrades for new box shapes caused by occlusion, size changes, view changes, and the like.

Finally, the anchorless object detection method converts the ground-truth (GT) region defined on the input image into a region based on the feature-map size used for object detection, and then learns the area within a certain bound of the center of the converted GT region. Because the region over which the learning loss is calculated varies with object size, this method suffers from a learning deviation that depends on object size.

Accordingly, there is a need for a new type of object detection method capable of overcoming the problems of each of these methods.

The present invention provides a method and apparatus that, by introducing a new distance-based GT-cell encoding method, can calculate a learning loss for an object detection model that is robust to changes in object size or shape and to occlusion in deep-learning feature maps.

The present invention also provides a method and apparatus in which a plurality of deconvolution layers, each designed for the characteristics of different objects, are arranged in parallel behind the convolution layers constituting an object detection model, and the convolution and deconvolution layers are trained sequentially and iteratively, securing the effectiveness of the annotation and refinement cost of the training data.
A method of learning an object detection model according to an embodiment of the present invention includes: dividing a target image to be trained into a plurality of cells having a predetermined size; generating, through a plurality of convolution layers included in the object detection module, encoding feature maps having different resolutions corresponding to each of the plurality of convolution layers from the target image; generating a decoding feature map, through a single deconvolution layer included in the object detection module, by fusing the low-resolution encoding feature map generated by a first convolution layer located at the last stage of the plurality of convolution layers with the high-resolution encoding feature map generated by a second convolution layer located at the stage before the first convolution layer; and detecting object information predicted in each of the plurality of cells using the generated decoding feature map through an object detection layer included in the object detection module, wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.

The object information predicted in each of the plurality of cells may include at least one of objectness information, class information of the predicted object, and bounding-box information corresponding to the region in which the predicted object is expected to exist.

The method may further include determining the final detection area of an object by removing overlapping bounding-box areas from the object information predicted in each of the plurality of cells.

The method may further include calculating a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground-truth (GT) box.

The calculating of the learning loss of the object detection model may include: performing matching between each of the plurality of cells and the GT box based on the distance between each cell and the GT box; and calculating a loss function using the object information predicted in the cells matched to the GT box and the object information corresponding to the GT box.

The performing of the matching between each of the plurality of cells and the GT box may include: determining the distance between the center point of each of the plurality of cells and the center point of the GT box using the cell size of each cell and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box; and matching cells whose determined distance is less than or equal to a preset reference to the GT box.

The method may further include adjusting parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.
An object detection apparatus on which an object detection model is executed according to an embodiment of the present invention includes a processor, and the object detection model includes: a plurality of convolution layers that generate encoding feature maps having different resolutions for a target image divided into a plurality of cells having a predetermined size; a single deconvolution layer that generates a decoding feature map by fusing the low-resolution encoding feature map generated by a first convolution layer located at the last stage of the plurality of convolution layers with the high-resolution encoding feature map generated by a second convolution layer located before the first convolution layer; and an object detection layer that detects object information predicted in each of the plurality of cells using the generated decoding feature map, wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.

The object information predicted in each of the plurality of cells may include at least one of objectness information, class information of the predicted object, and bounding-box information corresponding to the region in which the predicted object is expected to exist.

The processor may determine the final object detection area by removing overlapping bounding-box areas from the object information of each of the plurality of cells predicted through the object detection layer.

The processor may calculate a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground-truth (GT) box.

The processor may perform matching between each of the plurality of cells and the GT box based on the distance between each cell and the GT box, and may calculate the learning loss of the object detection model by computing a loss function using the object information predicted in the cells matched to the GT box and the object information corresponding to the GT box.

The processor may determine the distance between the center point of each of the plurality of cells and the center point of the GT box using the cell size of each cell and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box, and may match cells whose determined distance is less than or equal to a preset reference to the GT box.

The processor may adjust parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.

In the object detection model, a plurality of deconvolution layers designed according to the characteristics of different objects may be arranged in parallel behind the plurality of convolution layers.

The processor may train the plurality of convolution layers and the plurality of deconvolution layers sequentially and iteratively, using the training data corresponding to each of the deconvolution layers arranged in parallel.
By providing a new distance-based GT-cell encoding method, the present invention makes it possible to calculate a learning loss for an object detection model that is robust to changes in object size or shape and to occlusion in deep-learning feature maps.

In addition, by arranging a plurality of deconvolution layers designed for the characteristics of different objects in parallel behind the convolution layers constituting an object detection model and training the convolution and deconvolution layers sequentially and iteratively, the present invention secures the effectiveness of the annotation and refinement cost of the training data.

The present invention can also improve the performance of an object detection model by providing a modified deconvolution-layer structure capable of increasing the receptive field and capacity of the model.
FIG. 1 is a diagram showing an object detection system according to an embodiment of the present invention.

FIG. 2 is a diagram illustrating the structure of an object detection model based on deconvolution feature fusion according to an embodiment of the present invention.

FIG. 3 is a diagram illustrating a multi-task based object detection apparatus according to an embodiment of the present invention.

FIG. 4 is a diagram illustrating a method of calculating a learning loss of an object detection model according to an embodiment of the present invention.

FIG. 5 is a flowchart illustrating an object detection method according to an embodiment of the present invention.

Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
도 1은 본 발명의 일실시예에 따른 객체 검출 시스템을 도시한 도면이다.1 is a diagram showing an object detection system according to an embodiment of the present invention.
본 발명의 객체 검출 시스템(100)을 구성하는 객체 검출 장치(110)는 타겟 이미지(120)가 수신된 경우, 딥 러닝 기반의 객체 검출 모델(111)을 이용하여 타겟 이미지(120) 내에 존재하는 다양한 객체들을 검출할 수 있다. 그리고, 객체 검출 장치(110)는 검출된 객체들의 위치 정보 및 클래스 정보를 식별한 후 별도의 디스플레이(미도시)를 통해 식별된 객체들의 위치 정보와 클래스 정보가 표시된 결과 이미지(130)를 사용자에게 제공할 수 있다.When the target image 120 is received, the object detection device 110 constituting the object detection system 100 of the present invention uses the deep learning-based object detection model 111 to exist in the target image 120. Various objects can be detected. In addition, the object detection device 110 identifies the location information and class information of the detected objects, and then provides a result image 130 displaying the location information and class information of the identified objects through a separate display (not shown) to the user. Can provide.
일례로, 본 발명의 객체 검출 모델(111)은 Fully Convolutional Network(FCN) 기반으로 구성될 수 있다. FCN 기반의 객체 검출 모델은 픽셀 단위의 조밀한 예측을 위하여 업샘플링(Upsampling)과 여러 개의 후반부 레이어에 대한 콘볼루션 특징(Convolution feature)를 합치는 방식을 사용한다. 이러한 FCN 기반의 객체 검출 모델은 여러 단의 콘볼루션 및 풀링(Pooling)을 거치면서 해상도가 줄어들고, 줄어든 해상도를 다시 업샘플링을 통해 복원하는 방식을 사용하기 때문에 타겟 이미지(120)의 세부(Detail) 정보가 사라지거나 과도한 스무딩(Smoothing) 효과에 의해 객체 검출이 정밀하지 못한 결과를 가질 수 있다.For example, the object detection model 111 of the present invention may be configured based on a Fully Convolutional Network (FCN). The FCN-based object detection model uses upsampling and a method of combining convolutional features of several latter layers for dense prediction on a pixel-by-pixel basis. This FCN-based object detection model reduces the resolution through several stages of convolution and pooling, and uses a method of restoring the reduced resolution through upsampling, so the details of the target image 120 Object detection may have inaccurate results due to information disappearing or excessive smoothing effect.
이러한 문제를 해결하기 위하여 본 발명은 FCN 기반의 객체 검출 모델(111)을 구성하는 콘볼루션 네트워크(Convolution network)에 대칭이 되는 디콘볼루션 네트워크(Deconvolution network)를 추가함으로써 업샘플링된 타겟 이미지(120)의 해상도 문제를 해결할 수 있다. 이하에서는 본 발명의 객체 검출 모델(111)을 디콘볼루션 특징 융합 기반의 객체 검출 모델로 정의하도록 한다.In order to solve this problem, the present invention adds a symmetrical deconvolution network to a convolution network constituting the FCN-based object detection model 111 so that the upsampled target image 120 ) Resolution problem can be solved. Hereinafter, the object detection model 111 of the present invention is defined as an object detection model based on deconvolution feature fusion.
The operation of the object detection model 111 is described in more detail with reference to the following drawings.
FIG. 2 is a diagram illustrating the structure of an object detection model based on deconvolution feature fusion according to an embodiment of the present invention.
Conventional deep-learning-based object detection models use anchors and multi-scale fusion information to detect objects robustly against changes in the shape and size of various objects. However, the complexity of such models grows as the number of anchors and the amount of multi-scale fusion information increase. As a result, conventional deep-learning-based object detection models require many learning parameters, and the amount of computation grows with the number of parameters needed.
The object detection model 111 provided by the present invention dispenses with anchors and, through a simplified feature-fusion scheme, uses a smaller number of learning parameters, thereby reducing the amount of computation.
More specifically, the object detection model 111 of the present invention may consist of (1) a plurality of convolution layers 210 that generate encoding feature maps of different resolutions from a target image divided into a plurality of cells of constant size, (2) a single deconvolution layer 220 that generates a decoding feature map by fusing the low-resolution encoding feature map generated by the first convolution layer located at the end of the plurality of convolution layers with the high-resolution encoding feature map generated by the second convolution layer located before the first convolution layer, and (3) an object detection layer 230 that detects the object information predicted in each of the plurality of cells using the generated decoding feature map.
The operation of the object detection model 111 executed by the object detection apparatus 110 according to an embodiment is as follows. First, when a target image 120 divided into a plurality of cells of constant size is input, the object detection model 111 generates encoding feature maps of different resolutions through the plurality of convolution layers 210. Referring to FIG. 2, when a 1248x384x3 target image 120 is input to the object detection model 111, encoding feature maps of different resolutions (e.g., Conv_5: 2048 channels, Conv_4: 1024 channels) are generated through the plurality of convolution layers 210 corresponding to Conv_1 to Conv_5.
In FIG. 2, the object detection model 111 uses a ResNet50 model as the encoder corresponding to the plurality of convolution layers 210, but it is not limited thereto; various models such as VGG, XceptionNet, ResNeXt, and ShuffleNet may be applied.
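A minimal sketch of tapping the Conv_4 and Conv_5 feature maps from a standard ResNet50 encoder is shown below; the use of torchvision and the exact tap points ("layer3"/"layer4") are illustrative assumptions rather than requirements of the embodiment:

```python
import torch
import torchvision.models as models
from torchvision.models.feature_extraction import create_feature_extractor

# Tap the outputs of the 4th and 5th convolution stages of ResNet50;
# "layer3"/"layer4" correspond to Conv_4 (1024 ch) and Conv_5 (2048 ch).
backbone = create_feature_extractor(
    models.resnet50(weights=None),
    return_nodes={"layer3": "conv4", "layer4": "conv5"},
)

x = torch.randn(1, 3, 384, 1248)  # target image: 1248x384x3
feats = backbone(x)
print(feats["conv4"].shape)  # torch.Size([1, 1024, 24, 78]) -> matches the 78x24 grid of FIG. 2
print(feats["conv5"].shape)  # torch.Size([1, 2048, 12, 39])
```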
The object detection model 111 then generates a decoding feature map by fusing, through the single deconvolution layer 220, the encoding feature maps produced by the later convolution layers among those generated by the plurality of convolution layers 210. Referring to FIG. 2, the encoding feature maps generated by Conv_5 and Conv_4, the later convolution layers among the plurality of convolution layers 210, are input to the deconvolution layer 220. Since Conv_5 lies after Conv_4, the encoding feature map of Conv_5 may have a lower resolution than that of Conv_4.
Specifically, the single deconvolution layer 220 converts the low-resolution encoding feature map to high resolution by sequentially performing a 1x1 convolution, upsampling, and a 3x3 convolution on it. Here, the 3x3 convolution can be viewed as the element that determines the receptive field when generating the decoding feature map for object detection. To take a wider receptive field into account, the object detection model 111 of the present invention adds a 3x3 convolution layer 221 to the deconvolution layer 220 as shown in FIG. 2, enlarging the receptive field and deepening the deconvolution layer 220 to increase the expressive power and capacity of the object detection model.
The single deconvolution layer 220 then concatenates the low-resolution encoding feature map converted to high resolution with the high-resolution encoding feature map received from the convolution layers 210, and performs a 1x1 convolution to finally produce a decoding feature map carrying high-resolution fused features. Referring to FIG. 2, the deconvolution layer 220 generates a 78x24x512 decoding feature map over 512 channels from the encoding feature maps corresponding to Conv_5 and Conv_4.
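Under the description above, a minimal PyTorch sketch of the single deconvolution layer — 1x1 convolution, upsampling, and 3x3 convolution on the low-resolution map, followed by concatenation with the high-resolution map and a final 1x1 convolution — might look as follows; the 2048/1024/512 channel widths follow FIG. 2, while the intermediate width of 256 is an assumption:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeconvFusion(nn.Module):
    """Fuses a low-res encoding map (Conv_5) with a high-res one (Conv_4)."""
    def __init__(self, low_ch=2048, high_ch=1024, mid_ch=256, out_ch=512):
        super().__init__()
        self.reduce = nn.Conv2d(low_ch, mid_ch, kernel_size=1)              # 1x1 convolution
        self.refine = nn.Conv2d(mid_ch, mid_ch, kernel_size=3, padding=1)   # 3x3 conv widening the receptive field
        self.fuse = nn.Conv2d(mid_ch + high_ch, out_ch, kernel_size=1)      # 1x1 conv after concatenation

    def forward(self, low, high):
        x = self.reduce(low)
        x = F.interpolate(x, size=high.shape[2:], mode="bilinear",
                          align_corners=False)                              # upsampling to Conv_4 resolution
        x = self.refine(x)
        return self.fuse(torch.cat([x, high], dim=1))

# Conv_5: 12x39x2048, Conv_4: 24x78x1024 -> decoding feature map 24x78x512.
dec = DeconvFusion()(torch.randn(1, 2048, 12, 39), torch.randn(1, 1024, 24, 78))
print(dec.shape)  # torch.Size([1, 512, 24, 78])
```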
The object detection layer 230 then uses the decoding feature map to generate an object detection feature map containing the object information predicted in each of the cells constituting the target image 120. For example, referring to FIG. 2, applying a 1x1x8 convolution to the decoding feature map in the object detection layer 230 produces a 78x24x8 object detection feature map. The object detection feature map may include, for each of the plurality of cells, at least one of the presence/absence information of a predicted object (1 channel), the class information of the predicted object (3 channels), and the bounding box information corresponding to the region where the predicted object is expected to exist (4 channels). Although the class information is limited to 3 channels in FIG. 2, the number of class channels may change with the types of detectable objects.
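A sketch of the object detection layer under the channel layout just described (1 objectness channel, 3 class channels, 4 bounding-box channels) is given below; the way the 8 output channels are split is an assumed reading of FIG. 2, not a definitive implementation:

```python
import torch
import torch.nn as nn

num_classes = 3
head = nn.Conv2d(512, 1 + num_classes + 4, kernel_size=1)  # 1x1 conv -> 8 channels

dec_map = torch.randn(1, 512, 24, 78)        # decoding feature map
out = head(dec_map)                          # object detection feature map: 1x8x24x78

objectness = out[:, 0:1]                     # per-cell object presence (1 channel)
class_logits = out[:, 1:1 + num_classes]     # per-cell class scores (3 channels)
box = out[:, 1 + num_classes:]               # per-cell bounding box (4 channels)
print(objectness.shape, class_logits.shape, box.shape)
```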
Finally, the object detection apparatus 110 may determine the final detection region of each object by removing overlapping bounding-box regions from the object detection feature map generated through the object detection layer 230. For example, referring to FIG. 2, the object detection apparatus 110 may determine the final object detection regions in the target image 120 by applying Non-Maximum Suppression (NMS) to the generated object detection feature map.
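Removing overlapping bounding boxes can be sketched with a standard NMS operator; torchvision's implementation and the sample boxes below are used purely for illustration:

```python
import torch
from torchvision.ops import nms

# Hypothetical per-cell predictions flattened to (x1, y1, x2, y2) boxes and scores.
boxes = torch.tensor([[10., 10., 60., 60.],
                      [12., 12., 62., 62.],
                      [100., 40., 150., 90.]])
scores = torch.tensor([0.9, 0.8, 0.7])

keep = nms(boxes, scores, iou_threshold=0.5)  # suppress near-duplicate boxes
print(boxes[keep])                            # final detection regions
```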
FIG. 3 is a diagram illustrating a multi-task-based object detection apparatus according to an embodiment of the present invention.
The basic structure of a multi-task model is broadly classified into hard parameter sharing and soft parameter sharing. Unlike a soft-parameter-sharing structure, a hard-parameter-sharing structure obtains common features from a sharing layer, converts the obtained common features into features of each task domain, and then produces each task's result. This minimizes redundant computation across tasks, reducing the number of parameters and thereby the computational complexity. In addition, a multi-task model can learn from a large amount of diversified data drawn from the several tasks during training, which effectively prevents overfitting of the model and increases the feature expressiveness of the sharing layer, improving the performance of the learned model.
The object detection apparatus 110 of the present invention applies this multi-task structure to the object detection model, as shown in FIG. 3. First, decoders 320 to 350, each designed according to a characteristic/type/purpose, may be connected in parallel to the backbone 310, the sharing layer of the object detection model. Here, the backbone 310 corresponds to the convolution layers 210 of the object detection model 111 of FIG. 2, and the decoders 320 to 350 correspond to the deconvolution layer 220.
The backbone 310 may be composed of an encoder such as VGG or ResNet, or of a structure built on such an encoder, such as a Feature Pyramid Network or Atrous Spatial Pyramid Pooling. The decoders 320 to 350 connected to the backbone 310 may be structured to be plug-and-play (PnP) according to the user's purpose, as sketched below.
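A minimal sketch of this hard-parameter-sharing layout — one shared backbone with task-specific decoders plugged in parallel — is given below; the class and attribute names are hypothetical:

```python
import torch.nn as nn

class MultiTaskDetector(nn.Module):
    def __init__(self, backbone, decoders):
        super().__init__()
        self.backbone = backbone                 # shared sharing layer
        self.decoders = nn.ModuleDict(decoders)  # plug-and-play task decoders, e.g. {"coco": ..., "kitti": ...}

    def forward(self, x, task):
        feats = self.backbone(x)                 # common features from the sharing layer
        return self.decoders[task](feats)        # task-specific decoding
```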
Meanwhile, the object detection apparatus 110 with the multi-task structure of the present invention can train the whole object detection model through sequential training of the single object detection models, each consisting of the backbone 310 plus one of the decoders 320 to 350. For example, the object detection apparatus 110 trains the sharing-layer backbone 310 and the first decoder 320 using the training data of the first decoder 320, and then trains the sharing-layer backbone 310 and the second decoder 330 using the training data of the second decoder 330. Likewise, the object detection apparatus 110 applies the same training procedure sequentially and repeatedly to the training data of the third decoder 340 and the fourth decoder 350.
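The sequential learning just described — training the shared backbone together with one decoder at a time while cycling over tasks — might be sketched as follows; the data-loader and loss interfaces are assumptions:

```python
import itertools

# model: a MultiTaskDetector as above; loaders/losses: one per task (assumed interfaces).
def train_sequentially(model, loaders, losses, optimizer, steps=1000):
    tasks = itertools.cycle(loaders.keys())   # e.g. "coco", "kitti", ...
    iters = {t: iter(loaders[t]) for t in loaders}
    for _ in range(steps):
        task = next(tasks)                    # train backbone + this task's decoder
        try:
            images, targets = next(iters[task])
        except StopIteration:                 # restart an exhausted loader
            iters[task] = iter(loaders[task])
            images, targets = next(iters[task])
        loss = losses[task](model(images, task), targets)
        optimizer.zero_grad()
        loss.backward()                       # updates shared backbone + active decoder
        optimizer.step()
```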
This training scheme resembles ensemble learning: the sharing-layer backbone 310 can learn from a large amount of diversified data, preventing overfitting. Moreover, it improves the ability to express training data from various angles as highly correlated feature data, improving the performance of each of the individual decoders 320 to 350.
In addition, even when the training data set of a single object detection model designed for a given characteristic/type/purpose is small, the multi-task object detection apparatus 110 of the present invention can supplement its training with the training data of other, similar object detection models, resolving the problem of insufficient training data. Accordingly, the multi-task object detection apparatus 110 of the present invention can reduce the cost of annotating and refining training data per characteristic/type/purpose, the cost of designing an object detection model for the characteristics and types of each training data set, and the training cost itself.
For example, referring to FIG. 3, the first decoder 320 may be an object detection model for the CoCo data set, and the second decoder 330 may be an object detection model for the Kitti data set. The CoCo data set detects 80 object categories; it is an open data set aggregating many situations, with annotated ground truth (GT) that does not predict the occluded regions of objects. The Kitti data set, on the other hand, detects 3 object categories in road-driving situations, predicts the occluded regions of objects, and carries annotated GT accordingly. Since the two data sets differ in characteristics, type, and object detection purpose, a separate object detection model corresponding to each decoder must be configured.
Through the sequential, iterative training of these object detection models, configured separately according to the characteristics/types/purposes of the data sets, the multi-task object detection apparatus 110 of the present invention can reduce the cost of annotating and refining training data, the cost of designing an object detection model per data set, and the training cost.
FIG. 4 is a diagram illustrating a method of calculating the learning loss of an object detection model according to an embodiment of the present invention.
Conventional object detection methods place a plurality of anchors in one cell, train separately per size and shape, and compute the learning loss on those results. In contrast, the object detection apparatus 110 of the present invention provides a distance-based GT-Cell encoding method, simultaneously learning, in a single cell and without separate anchors, the objectness information of the predicted object, the class information of the predicted object, and the bounding box information corresponding to the region where the predicted object is expected to exist.
First, the object detection apparatus 110 may match cells to the GT box based on the distance between a cell and the GT box. Referring to FIG. 4, d_x and d_y denote the distances between the cell and the GT box; specifically, they are the distances between the center point of each of the plurality of cells and the center point of the GT box, and can be expressed as Equation 1 below.

<Equation 1>

d_x, d_y = box_center - cell_center (on the image plane)
Here, S denotes the stride size (i.e., the cell size) of the feature map, and w and h denote the width and height of the GT box. a denotes the relative distance reference ratio, b the minimum distance reference ratio, and c the maximum distance reference ratio.
The object detection apparatus 110 matches a cell with the GT box when d_x and d_y are each smaller than the maximum distance reference cS while being smaller than aw and ah, respectively, or are smaller than the minimum distance reference bS. That is, the object detection apparatus 110 may match a cell with the GT box when Equation 2 below is satisfied.
<Equation 2>

|d_x| < min(max(aw, bS), cS) and |d_y| < min(max(ah, bS), cS)
By applying the maximum distance criterion, the object detection apparatus 110 does not match a GT box to a cell whose distance exceeds a certain bound, reducing the variance of cell selection across object sizes. In this way, the object detection apparatus 110 reduces the variance in the degree of learning across shape changes of the objects to be detected, enabling generalized learning. A sketch of this matching rule is given below.
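The following sketch applies Equations 1 and 2 directly; S, a, b, and c follow the notation above, and the default ratio values are arbitrary examples, not values fixed by the publication:

```python
import torch

def match_cells_to_gt(cell_centers, gt, S, a=0.5, b=0.5, c=2.0):
    """cell_centers: (N, 2) cell center coordinates on the image plane.
    gt: (cx, cy, w, h) of one ground-truth box. a/b/c: relative, minimum,
    and maximum distance reference ratios (example values)."""
    cx, cy, w, h = gt
    dx = (cx - cell_centers[:, 0]).abs()       # Equation 1, x-axis
    dy = (cy - cell_centers[:, 1]).abs()       # Equation 1, y-axis
    ok_x = dx < min(max(a * w, b * S), c * S)  # Equation 2, x-axis
    ok_y = dy < min(max(a * h, b * S), c * S)  # Equation 2, y-axis
    return ok_x & ok_y                         # boolean mask of cells matched to this GT box
```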
Then, only for the matched cells, the object detection apparatus 110 may calculate the learning loss using the class and bounding-box information of the GT together with the class and bounding-box information predicted in the matched cells. Because this distance-based GT-Cell encoding learns all GTs with one cell rather than k anchors, the number of output channels is C+4, where C denotes the number of classes and 4 denotes the region information of the predicted object.
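A sketch of computing the loss only over the matched cells, with C class channels and 4 box channels per cell, is shown below; the specific loss terms (cross-entropy and L1) are assumptions for illustration — the publication does not fix them:

```python
import torch
import torch.nn.functional as F

def detection_loss(pred, gt_class, gt_box, matched):
    """pred: (N, C+4) per-cell predictions (C class logits + 4 box values).
    gt_class: (N,) class indices; gt_box: (N, 4); matched: (N,) bool mask
    from the distance-based GT-Cell matching above."""
    if not matched.any():
        return pred.sum() * 0.0                # no matched cells -> zero loss, graph kept
    C = pred.shape[1] - 4
    cls_loss = F.cross_entropy(pred[matched, :C], gt_class[matched])
    box_loss = F.l1_loss(pred[matched, C:], gt_box[matched])
    return cls_loss + box_loss
```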
FIG. 5 is a flowchart illustrating an object detection method according to an embodiment of the present invention.
In step 510, the object detection apparatus 100 may divide the target image to be learned into a plurality of cells of constant size.
In step 520, the object detection apparatus 100 may generate, from the target image and through the plurality of convolution layers included in the object detection module, encoding feature maps of different resolutions, each corresponding to one of the convolution layers. Various models such as ResNet50, VGG, XceptionNet, ResNeXt, and ShuffleNet may be applied as the encoder corresponding to the plurality of convolution layers.
In step 530, the object detection apparatus 100 may generate a decoding feature map through the single deconvolution layer included in the object detection module, by fusing the low-resolution encoding feature map generated by the first convolution layer located at the end of the plurality of convolution layers with the high-resolution encoding feature map generated by the second convolution layer located before the first convolution layer.
Specifically, the single deconvolution layer converts the low-resolution encoding feature map to high resolution by sequentially performing a 1x1 convolution, upsampling, and a 3x3 convolution on it. To take a wider receptive field into account, the object detection apparatus 110 of the present invention adds a 3x3 convolution layer to the single deconvolution layer, enlarging the receptive field and deepening the deconvolution layer to increase the expressive power and capacity of the object detection model.
The object detection apparatus 110 then concatenates, in the single deconvolution layer, the low-resolution encoding feature map converted to high resolution with the high-resolution encoding feature map received from the convolution layers, and performs a 1x1 convolution to finally produce a decoding feature map carrying high-resolution fused features.
In step 540, the object detection apparatus 100 may detect the object information predicted in each of the plurality of cells using the decoding feature map, through the object detection layer included in the object detection module. The object information predicted in each cell may include at least one of the objectness information of the predicted object, the class information of the predicted object, and the bounding box information corresponding to the region where the predicted object is expected to exist.
In step 550, the object detection apparatus 100 may determine the final detection region of an object by removing overlapping bounding-box regions from the object information predicted in each of the plurality of cells.
In step 560, the object detection apparatus 100 may calculate the learning loss of the object detection model using distance information between each of the plurality of cells and a preset GT box. More specifically, the object detection apparatus 100 matches cells to the GT box based on the distance between each cell and the GT box, and calculates the learning loss of the object detection model by computing a loss function using the object information predicted in the matched cells and the object information corresponding to the GT box.
In doing so, the object detection apparatus 100 determines the distance between the center point of each cell and the center point of the GT box using each cell's size and the relative, minimum, and maximum distance reference ratios with respect to the size of the GT box, and matches to the GT box the cells whose determined distance is at or below a preset reference.
Finally, in step 570, the object detection apparatus 110 may optimize the performance of the object detection model by adjusting the parameter values of the object detection model so that the calculated learning loss is minimized.
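Minimizing the computed learning loss by adjusting the model's parameters corresponds to a standard gradient-descent optimization step; the sketch below uses SGD and stand-in tensors purely for illustration — the optimizer choice and the stand-in loss are assumptions:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(512, 8, kernel_size=1)  # stand-in for the object detection model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for _ in range(100):                      # repeated loss-minimization steps
    dec_map = torch.randn(2, 512, 24, 78) # stand-in decoding feature maps
    target = torch.randn(2, 8, 24, 78)    # stand-in encoded GT
    loss = nn.functional.mse_loss(model(dec_map), target)  # stand-in loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                      # parameters adjusted so the loss decreases
```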
Meanwhile, the method according to the present invention may be written as a computer-executable program and implemented on various recording media such as magnetic storage media, optical read media, and digital storage media.
Implementations of the various techniques described herein may be realized in digital electronic circuitry, or in computer hardware, firmware, software, or combinations thereof. Implementations may be realized as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., a machine-readable storage device (computer-readable medium) or a propagated signal, for processing by, or to control the operation of, a data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be processed on one computer or on multiple computers at one site, or to be distributed across multiple sites and interconnected by a communication network.
Processors suitable for the processing of a computer program include, by way of example, both general- and special-purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random-access memory, or both. The elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer may also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical, or optical disks. Information carriers suitable for embodying computer program instructions and data include, by way of example, semiconductor memory devices; magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROM (Compact Disk Read-Only Memory) and DVD (Digital Video Disk); magneto-optical media such as floptical disks; and ROM (Read-Only Memory), RAM (Random-Access Memory), flash memory, EPROM (Erasable Programmable ROM), and EEPROM (Electrically Erasable Programmable ROM). The processor and the memory may be supplemented by, or incorporated in, special-purpose logic circuitry.
In addition, the computer-readable medium may be any available medium that can be accessed by a computer, and may include both computer storage media and transmission media.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or a variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various device components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and devices can generally be integrated together into a single software product or packaged into multiple software products.
Meanwhile, the embodiments of the present invention disclosed in this specification and the drawings merely present specific examples to aid understanding and are not intended to limit the scope of the present invention. It is apparent to those of ordinary skill in the art to which the present invention pertains that other modifications based on the technical idea of the present invention can be practiced in addition to the embodiments disclosed herein.
Claims (17)
- A learning method of an object detection model, comprising:
dividing a target image to be learned into a plurality of cells having a constant size;
generating, from the target image and through a plurality of convolution layers included in the object detection module, encoding feature maps having different resolutions, each corresponding to one of the plurality of convolution layers;
generating a decoding feature map through a single deconvolution layer included in the object detection module by fusing a low-resolution encoding feature map generated by a first convolution layer located at the last of the plurality of convolution layers with a high-resolution encoding feature map generated by a second convolution layer located before the first convolution layer; and
detecting object information predicted in each of the plurality of cells using the generated decoding feature map, through an object detection layer included in the object detection module,
wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.
- The learning method of claim 1, wherein the object information predicted in each of the plurality of cells includes at least one of objectness information of an object predicted in each of the plurality of cells, class information of the predicted object, and bounding box information corresponding to a region where the predicted object is expected to exist.
- The learning method of claim 1, further comprising determining a final detection region of an object by removing a region of an overlapping bounding box from the object information predicted in each of the plurality of cells.
- The learning method of claim 1, further comprising calculating a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground truth (GT) box.
- The learning method of claim 4, wherein calculating the learning loss of the object detection model comprises: performing matching between each of the plurality of cells and the GT box based on a distance between each of the plurality of cells and the GT box; and calculating the learning loss of the object detection model by computing a loss function using the object information predicted in a cell matched with the GT box and object information corresponding to the GT box.
- The learning method of claim 5, wherein performing the matching comprises: determining a distance between a center point of each of the plurality of cells and a center point of the GT box using a cell size of each of the plurality of cells and a relative distance reference ratio, a minimum distance reference ratio, and a maximum distance reference ratio with respect to a size of the GT box; and matching, to the GT box, cells whose determined distance is equal to or less than a preset reference.
- The learning method of claim 4, further comprising adjusting parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.
- A computer-readable recording medium on which a program for executing the method of any one of claims 1 to 7 is recorded.
- An object detection apparatus on which an object detection model is executed, comprising a processor,
wherein the object detection model includes:
a plurality of convolution layers that generate encoding feature maps having different resolutions for a target image divided into a plurality of cells having a constant size;
a single deconvolution layer that generates a decoding feature map by fusing a low-resolution encoding feature map generated by a first convolution layer located at the last of the plurality of convolution layers with a high-resolution encoding feature map generated by a second convolution layer located before the first convolution layer; and
an object detection layer that detects object information predicted in each of the plurality of cells using the generated decoding feature map,
and wherein a convolution layer for increasing a receptive field is added to the single deconvolution layer.
- The object detection apparatus of claim 9, wherein the object information predicted in each of the plurality of cells includes at least one of objectness information of an object predicted in each of the plurality of cells, class information of the predicted object, and bounding box information corresponding to a region where the predicted object is expected to exist.
- The object detection apparatus of claim 9, wherein the processor determines a final detection region of an object by removing a region of an overlapping bounding box from the object information of each of the plurality of cells predicted through the object detection layer.
- The object detection apparatus of claim 9, wherein the processor calculates a learning loss of the object detection model using distance information between each of the plurality of cells and a preset ground truth (GT) box.
- The object detection apparatus of claim 12, wherein the processor performs matching between each of the plurality of cells and the GT box based on a distance between each of the plurality of cells and the GT box, and calculates the learning loss of the object detection model by computing a loss function using the object information predicted in a cell matched with the GT box and object information corresponding to the GT box.
- The object detection apparatus of claim 13, wherein the processor determines a distance between a center point of each of the plurality of cells and a center point of the GT box using a cell size of each of the plurality of cells and a relative distance reference ratio, a minimum distance reference ratio, and a maximum distance reference ratio with respect to a size of the GT box, and matches, to the GT box, cells whose determined distance is equal to or less than a preset reference.
- The object detection apparatus of claim 12, wherein the processor adjusts parameter values of the object detection model so that the calculated learning loss of the object detection model is minimized.
- The object detection apparatus of claim 9, wherein in the object detection model, a plurality of deconvolution layers designed according to characteristics of different objects are arranged in parallel after the plurality of convolution layers.
- The object detection apparatus of claim 16, wherein the processor sequentially and repeatedly trains the plurality of convolution layers and the plurality of deconvolution layers using training data corresponding to each of the plurality of deconvolution layers arranged in parallel.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR10-2019-0137353 | 2019-10-31 | ||
KR1020190137353A KR102315311B1 (en) | 2019-10-31 | 2019-10-31 | Deep learning based object detection model training method and an object detection apparatus to execute the object detection model |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021085784A1 true WO2021085784A1 (en) | 2021-05-06 |
Family ID: 75715263
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2020/007403 WO2021085784A1 (en) | 2019-10-31 | 2020-06-08 | Learning method of object detection model, and object detection device in which object detection model is executed |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR102315311B1 (en) |
WO (1) | WO2021085784A1 (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113269747B (en) * | 2021-05-24 | 2023-06-13 | 浙江大学医学院附属第一医院 | Pathological image liver cancer diffusion detection method and system based on deep learning |
EP4365821A1 (en) * | 2021-07-09 | 2024-05-08 | Samsung Electronics Co., Ltd. | Image processing device and operation method thereof |
KR102589986B1 (en) * | 2021-10-05 | 2023-10-17 | 인하대학교 산학협력단 | Adversarial Super-Resolved Multi-Scale Feature Learning and Object Detector |
CN114332638B (en) * | 2021-11-03 | 2023-04-25 | 中科弘云科技(北京)有限公司 | Remote sensing image target detection method and device, electronic equipment and medium |
WO2023128323A1 (en) * | 2021-12-28 | 2023-07-06 | 삼성전자 주식회사 | Electronic device and method for detecting target object |
KR20230100927A (en) * | 2021-12-29 | 2023-07-06 | 한국전자기술연구원 | Rotational Bounding Box-based Object Detection Deep Learning Network |
WO2023221013A1 (en) * | 2022-05-19 | 2023-11-23 | 中国科学院深圳先进技术研究院 | Small object detection method and apparatus based on feature fusion, device, and storage medium |
KR102589551B1 (en) * | 2022-10-12 | 2023-10-13 | 중앙대학교 산학협력단 | Multi-scale object detection method and device |
KR102539680B1 (en) * | 2023-01-27 | 2023-06-02 | 주식회사 컴패니언즈 | Method for recognizing of objects from images using artificial intelligence models |
KR20240145793A (en) | 2023-03-28 | 2024-10-07 | 국방과학연구소 | Method and apparatus for evaluating information quantity of learning data |
- 2019
  - 2019-10-31 KR KR1020190137353A patent/KR102315311B1/en active IP Right Grant
- 2020
  - 2020-06-08 WO PCT/KR2020/007403 patent/WO2021085784A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20180126220A (en) * | 2017-05-17 | 2018-11-27 | 삼성전자주식회사 | Method and device for identifying an object |
KR101932009B1 (en) * | 2017-12-29 | 2018-12-24 | (주)제이엘케이인스펙션 | Image processing apparatus and method for multiple object detection |
Non-Patent Citations (3)
Title |
---|
DONG, CHAO ET AL.: "Accelerating the Super-Resolution Convolutional Neural Network", DEPARTMENT OF INFORMATION ENGINEERING, 1 August 2016 (2016-08-01), pages 1 - 17 * |
WANG PANQU; CHEN PENGFEI; YUAN YE; LIU DING; HUANG ZEHUA; HOU XIAODI; COTTRELL GARRISON: "Understanding Convolution for Semantic Segmentation", 2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), IEEE, 12 March 2018 (2018-03-12), pages 1451 - 1460, XP033337768, DOI: 10.1109/WACV.2018.00163 * |
WOFK DIANA; MA FANGCHANG; YANG TIEN-JU; KARAMAN SERTAC; SZE VIVIENNE: "FastDepth: Fast Monocular Depth Estimation on Embedded Systems", 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), IEEE, 20 May 2019 (2019-05-20), pages 6101 - 6108, XP033594161, DOI: 10.1109/ICRA.2019.8794182 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113657287A (en) * | 2021-08-18 | 2021-11-16 | 河南工业大学 | Target detection method based on deep learning improved YOLOv3 |
CN113610056A (en) * | 2021-08-31 | 2021-11-05 | 的卢技术有限公司 | Obstacle detection method, obstacle detection device, electronic device, and storage medium |
CN113610056B (en) * | 2021-08-31 | 2024-06-07 | 的卢技术有限公司 | Obstacle detection method, obstacle detection device, electronic equipment and storage medium |
CN115810111A (en) * | 2022-12-21 | 2023-03-17 | 武汉微创光电股份有限公司 | Background difference method and system with strong environmental interference resistance and storage medium |
Also Published As
Publication number | Publication date |
---|---|
KR102315311B1 (en) | 2021-10-19 |
KR20210051722A (en) | 2021-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2021085784A1 (en) | Learning method of object detection model, and object detection device in which object detection model is executed | |
CN111476309B (en) | Image processing method, model training method, device, equipment and readable medium | |
WO2022050473A1 (en) | Apparatus and method for estimating camera pose | |
JP7567049B2 (en) | Point cloud division method, device, equipment, and storage medium | |
WO2020071701A1 (en) | Method and device for detecting object in real time by means of deep learning network model | |
WO2021133001A1 (en) | Semantic image inference method and device | |
WO2019132589A1 (en) | Image processing device and method for detecting multiple objects | |
WO2019240900A1 (en) | Attention loss based deep neural network training | |
WO2024012255A1 (en) | Semantic segmentation model training method and apparatus, electronic device, and storage medium | |
KR101963404B1 (en) | Two-step optimized deep learning method, computer-readable medium having a program recorded therein for executing the same and deep learning system | |
WO2021153861A1 (en) | Method for detecting multiple objects and apparatus therefor | |
WO2018012729A1 (en) | Display device and text recognition method for display device | |
CN110222726A (en) | Image processing method, device and electronic equipment | |
WO2021225296A1 (en) | Method for explainable active learning, to be used for object detector, by using deep encoder and active learning device using the same | |
CN113033682B (en) | Video classification method, device, readable medium and electronic equipment | |
WO2022055099A1 (en) | Anomaly detection method and device therefor | |
WO2020246655A1 (en) | Situation recognition method and device for implementing same | |
WO2022216109A1 (en) | Method and electronic device for quantizing dnn model | |
WO2019117393A1 (en) | Learning apparatus and method for depth information generation, depth information generation apparatus and method, and recording medium related thereto | |
CN113610034B (en) | Method and device for identifying character entities in video, storage medium and electronic equipment | |
CN115130456A (en) | Sentence parsing and matching model training method, device, equipment and storage medium | |
WO2023136417A1 (en) | Method and device for constructing transformer model for video story question answering | |
WO2022191366A1 (en) | Electronic device and method of controlling same | |
CN111353536B (en) | Image labeling method and device, readable medium and electronic equipment | |
CN117671476A (en) | Image layering method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20881006; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 20881006; Country of ref document: EP; Kind code of ref document: A1