WO2014175480A1 - Hardware apparatus and method for generating integral image - Google Patents

Hardware apparatus and method for generating integral image

Info

Publication number
WO2014175480A1
Authority
WO
WIPO (PCT)
Prior art keywords
integrated image
image
offset
value
pixel
Prior art date
Application number
PCT/KR2013/003520
Other languages
French (fr)
Korean (ko)
Inventor
최병호
김제우
이상설
황영배
장성준
김정호
Original Assignee
전자부품연구원
Priority date
Filing date
Publication date
Application filed by 전자부품연구원 filed Critical 전자부품연구원
Priority to PCT/KR2013/003520 priority Critical patent/WO2014175480A1/en
Publication of WO2014175480A1 publication Critical patent/WO2014175480A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 — Arrangements for image or video recognition or understanding
    • G06V10/40 — Extraction of image or video features
    • G06V10/50 — Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507 — Summing image-intensity values; Histogram projection analysis

Definitions

  • The present invention relates to a hardware apparatus and a method for generating an integral image, and more particularly to the fields of object and scene recognition and hardware/system-on-chip technology.
  • The SURF (Speeded Up Robust Feature) algorithm is one of the representative algorithms for extracting feature points of objects.
  • The SURF algorithm internally regenerates an integral image from the input black-and-white image to perform object and scene recognition.
  • However, as the input resolution grows, the maximum accumulated value also grows.
  • Consequently, the number of bits required to represent it increases, which increases total memory usage.
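The bit growth described above is easy to verify. The sketch below (an illustrative helper, not part of the patent) computes the number of bits needed for the worst-case accumulated value, and reproduces the 27-bit figure this document cites for a 640×480, 8-bit input:

```python
import math

def bits_required(width, height, max_pixel):
    # Worst case: every pixel is at its maximum and accumulates into one sum.
    max_sum = width * height * max_pixel
    return math.ceil(math.log2(max_sum + 1))

print(bits_required(640, 480, 255))  # 27 bits for an 8-bit VGA image
```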
  • According to one aspect of the present invention, a hardware apparatus for computing feature points and descriptors of an image includes: an image fading unit that fades the entire input black-and-white image to reduce the maximum number of bits to a predefined bit count; an integral image generator that sequentially computes the pixel values of the black-and-white image output by the image fading unit, buffers the value accumulated up to the previous line as an offset, and stores the values accumulated from the first pixel of the next line as difference values; and a first integral value acquisition unit that computes, using the offset and the difference values, the pixel values of a first integral image needed for extracting the feature points.
  • The hardware apparatus may further include an offset buffer that stores the offset, i.e. the accumulated pixel value of the last row or last column of the previous line of the black-and-white image output by the image fading unit, and a first integral image memory that stores the difference value, i.e. the pixel value accumulated from the first pixel of the next line through the last row or last column of that next line.
  • The first integral value acquisition unit may access the offset buffer to obtain the offset of the previous line corresponding to the first integral image, access the first integral image memory to obtain the difference value of the following line, and compute the pixel values of the first integral image by summing the obtained offset and difference value.
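A minimal software model of this offset-plus-difference scheme may clarify it (a sketch only: function names are illustrative, and the hardware works line-by-line in rows or columns rather than on Python lists):

```python
def build_offsets_and_diffs(image):
    # Decompose a raster-order running sum into one offset per line plus
    # per-line difference values, as the integral image generator does.
    offsets, diffs = [], []
    total = 0
    for row in image:
        acc, line = 0, []
        for p in row:           # difference value restarts at the first pixel
            acc += p
            line.append(acc)
        offsets.append(total)   # offset: everything accumulated before this line
        diffs.append(line)
        total += acc
    return offsets, diffs

def integral_value(offsets, diffs, y, x):
    # Reconstruct the full accumulated value: offset + difference.
    return offsets[y] + diffs[y][x]
```

Only the small per-line difference values and one offset per line need to be stored, instead of the full-width accumulated value at every pixel.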
  • In the offset buffer, lines may be grouped in units of one or more lines, and the offset of each group, i.e. the integral value of the pixels accumulated through the last row or last column of the group, may be indexed and stored per group.
  • There may be a plurality of first integral value acquisition units, one per scale, and the plurality of first integral value acquisition units may share the offset buffer and the first integral image memory.
  • The hardware apparatus may further include a second integral image memory that stores the difference value, i.e. the pixel value accumulated from the first pixel of the next line through the last row or last column of that next line, and a second integral value acquisition unit that accesses the offset buffer to obtain the offset corresponding to a second integral image needed to generate the descriptor, accesses the second integral image memory to obtain the difference value of the line following the offset, and computes the pixel values of the second integral image by summing the obtained offset and difference value.
  • According to another aspect of the present invention, an integral image generating method, by which a hardware apparatus generates an integral image for computing feature points and descriptors of an input image, includes: performing fading that reduces the maximum number of bits of the entire input black-and-white image to a predefined bit count; computing an offset, the accumulated pixel value of the last row or last column of the previous line of the faded image; computing a difference value, the pixel value of the last row or last column of the next line accumulated from the first pixel of that next line; and computing the pixel values of the integral image using the offset and the difference value.
  • Computing the pixel values may include summing the offset and a first difference value to obtain the pixel values of the first integral image, and summing the offset and a second difference value to obtain the pixel values of the second integral image.
  • According to embodiments of the present invention, the fading technique reduces the number of bits of the input image.
  • In addition, the value accumulated through the last column of each line is stored separately and used as an offset; when the integral image of a new line is generated, accumulation restarts from the first pixel to form a difference value, and the offset and difference value are summed to produce the integral image, so the full integral value of every pixel need not be stored and the size of the integral image memory is reduced.
  • FIG. 1 is a block diagram of a SURF hardware device according to an embodiment of the present invention.
  • FIG. 2 is a configuration diagram of an offset buffer according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of an integrated image memory according to an exemplary embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an implementation of a scale processor and a feature point extractor according to an exemplary embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of generating and obtaining an integrated image according to an exemplary embodiment of the present invention.
  • The term "... unit" means a unit that processes at least one function or operation, and may be implemented in hardware, in software, or in a combination of hardware and software.
  • FIG. 1 is a block diagram of a speeded up robust feature (SURF) hardware apparatus according to an embodiment of the present invention, FIG. 2 is a configuration diagram of an offset buffer according to an embodiment of the present invention, FIG. 3 is a configuration diagram of an integral image memory according to an embodiment of the present invention, and FIG. 4 illustrates an implementation of a scale processor and a feature point extractor according to an embodiment of the present invention.
  • Referring first to FIG. 1, the SURF hardware apparatus 100 generates feature points and descriptors.
  • Image pyramids are generated to represent the scale space from which feature points are extracted, and feature points are extracted from the generated image pyramids.
  • The integral image used here is the core of the SURF algorithm.
  • The SURF hardware apparatus 100 includes an image storage unit 101, an image fading unit 103, an integral image generation unit 105, an offset buffer 107, a first integral image memory 109, a second integral image memory 111, a scale processing unit 113, a first integral value acquisition unit 115, a Hessian calculation unit 117, a feature point extraction unit 119, a feature point storage unit 121, a second integral value acquisition unit 123, a rotation calculation unit 125, a descriptor calculation unit 127, and a descriptor storage unit 129.
  • The components form two large blocks with the feature point storage unit 121 as the boundary: the left side constitutes the feature point extraction part and the right side the descriptor generation part.
  • the image storage unit 101 stores the input black and white image.
  • The image fading unit 103 fades the entire black-and-white image stored in the image storage unit 101 to reduce the maximum number of bits to a predetermined number of bits.
  • The analog input is quantized into digital values from 0 to 255.
  • The fading technique reduces the number of quantization levels below 256.
  • The contrast or color of the image may change, but the image itself does not change significantly; since object recognition does not depend on contrast or color, the number of quantization levels has little effect on the recognition rate.
  • In this embodiment the representation bits are reduced from 8 bits to 6 bits, which reduces integral image memory usage by about 8% for a 640×480 image.
  • the integrated image generator 105 sequentially calculates and accumulates pixel values of the pixels constituting the faded monochrome image received from the image fading unit 103 in the line direction.
  • the line may include a plurality of pixels arranged in units of rows of an image or a plurality of pixels arranged in units of columns of an image.
  • the integrated image generation unit 105 sequentially calculates and accumulates pixel values in group units including one or more lines. At this time, the accumulated pixel value of the last row or the last column of the group is calculated as an offset of the group and stored in the offset buffer 107.
  • The integral image generator 105 calculates difference values based on the offset and stores them in the first integral image memory 109 and the second integral image memory 111, respectively.
  • the integrated image generating unit 105 newly calculates and accumulates the difference value starting from the pixel value of the first row or the first column of the next line of the previous line regardless of the accumulated pixel value of the previous line.
  • the offset buffer 107 indexes and stores the offset calculated by the integrated image generator 105 for each group.
  • Since the offset buffer 107 is implemented as a register, no collision occurs even when the feature point extraction part and the descriptor generation part access it simultaneously.
  • The offset buffer 107 may be implemented as shown in FIG. 2.
  • The integral image generation unit 105 buffers an offset P1 in group units of N lines (Row or Column #0 to Row or Column #N); as shown in (b), these offsets are stored in the offset buffer 107 for each group.
  • The first group RG#0 includes N lines (Row or Column #0 to Row or Column #N), and as the offset of the first group RG#0, the accumulated pixel value of the last row or last column of the N-th line (Row or Column #N) is stored.
  • Likewise, as the offset of the second group RG#1, the accumulated pixel value of the last row or last column of the 2N-th line (Row or Column #2N) is stored.
  • In this way the representation bits of an integral image pixel value are reduced from the conventional 27 bits to 19 bits, which can reduce integral image memory usage by about 30%.
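The 19-bit figure can be reconstructed under an assumption: the document does not state the group size N, but with 6-bit faded pixels and 640-pixel lines, N = 8 lines per group makes the worst-case difference value need exactly 19 bits. This is a hedged reconstruction, not a figure stated in the source:

```python
import math

# Assumed parameters: 640-pixel lines, 6-bit (max 63) faded pixels,
# and a hypothetical group size of N = 8 lines.
W, MAX_PIXEL, N = 640, 63, 8
diff_bits = math.ceil(math.log2(N * W * MAX_PIXEL + 1))  # worst-case difference
saving = (27 - diff_bits) / 27
print(diff_bits, round(saving * 100))  # 19 bits, ~30% less memory
```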
  • the first integrated image memory 109 stores the difference value of the integrated image required for feature point extraction.
  • the second integrated image memory 111 stores the difference value of the integrated image required for the descriptor extraction.
  • Since the first integral value acquisition unit 115 and the second integral value acquisition unit 123 are interfaced through a FIFO, they can operate independently of each other.
  • The first integral image memory 109 is a dedicated memory for feature point extraction, shared by the plurality of scale processing units 113.
  • The first integral image memory 109 stores the integral image pixel values needed to filter a specific rectangular area within the faded input black-and-white image, and the size of that area is not large.
  • the specific rectangular area may be a rectangular box having a size of 51 ⁇ 51 or more.
  • the second integrated image memory 111 is a dedicated memory for descriptor extraction.
  • The second integral image memory 111 stores all pixel values of the integral image of the faded input frame; a considerably larger area around each feature point is stored for descriptor extraction.
  • The first integral image memory 109 and the second integral image memory 111 may be implemented as shown in FIG. 3.
  • Difference values are indexed and stored for every line of the image (Row or Column #0 to Row or Column #N). That is, the difference value of a line is the accumulated pixel value through its last row or last column, computed sequentially from the pixel value of the line's first row or first column (0,0).
  • the scale processor 113 performs parallel operations on a plurality of scales at the time of feature point extraction.
  • A scale is a filtering operation over a specific rectangular region of the first integral image memory 109 described above; for example, six scales may be used. Accordingly, the scale processor 113 may be instantiated once per scale, implemented as a pipeline, and composed of six or more parallel operations.
  • each scale processing unit 113 filters by varying the size of a specific rectangle.
  • each scale processor 113 may be implemented with filter sizes of 9 ⁇ 9, 15 ⁇ 15, 21 ⁇ 21, 27 ⁇ 27, 39 ⁇ 39, and 51 ⁇ 51.
  • Each scale processor 113 includes a first integral value acquirer 115 and a Hessian calculator 117.
  • The first integral value acquisition unit 115 adds the offset obtained by accessing the offset buffer 107 to the difference value obtained by accessing the first integral image memory 109, thereby computing the integral image pixel values required by the Hessian calculation unit 117.
  • Specifically, the first integral value acquisition unit 115 obtains from the offset buffer 107 the offset indexed to the group containing the lines of the integral image the Hessian calculator 117 requires, obtains from the first integral image memory 109 the difference value of the line following that group, adds the two to compute the integral image pixel value, and outputs it to the Hessian calculator 117.
  • For example, the first integral value acquisition unit 115 may compute an integral image pixel value by summing the offset of the first group RG#0 and the difference value of the next line (Row or Column #N+1) of the first group RG#0.
  • the Hessian calculator 117 calculates a Hessian determinant by performing a box filter operation using the integrated image pixel value received from the first integral value obtainer 115.
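The box filter operation relies on the standard four-corner property of integral images: the sum over any rectangle costs four lookups, regardless of the box size. A self-contained sketch (names are illustrative, not from the patent):

```python
def integral_image(img):
    # ii[y][x] = sum of img[0..y][0..x]
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        run = 0
        for x in range(w):
            run += img[y][x]
            ii[y][x] = run + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, y0, x0, y1, x1):
    # Sum over the inclusive rectangle (y0,x0)-(y1,x1) in at most four lookups;
    # this is what makes the Hessian box filters cheap at every filter size.
    s = ii[y1][x1]
    if y0 > 0:
        s -= ii[y0 - 1][x1]
    if x0 > 0:
        s -= ii[y1][x0 - 1]
    if y0 > 0 and x0 > 0:
        s += ii[y0 - 1][x0 - 1]
    return s
```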
  • the feature point extractor 119 extracts the feature points by generating an image pyramid using the Hessian determinant calculated by the Hessian calculator 117.
  • The feature point extractor 119 may be formed in plural in correspondence with the number of scale processors 113, and may be implemented as shown in FIG. 4.
  • In this example the feature point extracting units 119 are implemented as a total of four. That is, the indices of the six scale processing units 113 are S0, S1, S2, S3, S4, and S5, and the indices of the four feature point extracting units 119 are F0, F1, F2, and F3.
  • Since there are six scale processing units 113, it is obvious that the first integral value obtaining unit 115 and the Hessian calculating unit 117 are likewise implemented six times.
  • one feature point extractor 119 extracts the final feature point based on the feature points output by the three scale processing units 113.
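The exact scale-to-extractor wiring is not given in this text; a plausible grouping consistent with "four extractors, three scales each" is three adjacent scales per extractor, as in scale-space non-maximum suppression. The mapping below is therefore an assumption:

```python
# Hypothetical wiring: each extractor Fi consumes three adjacent scales.
SCALES = ["S0", "S1", "S2", "S3", "S4", "S5"]
EXTRACTORS = {f"F{i}": SCALES[i:i + 3] for i in range(4)}
print(EXTRACTORS["F0"], EXTRACTORS["F3"])  # ['S0', 'S1', 'S2'] ['S3', 'S4', 'S5']
```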
  • the feature point storage unit 121 stores the feature points extracted by the feature point extractor 119 and is implemented as a FIFO (First In First Out).
  • the second integral value obtaining unit 123 obtains, from the offset buffer 107, an offset indexed to a corresponding group among a plurality of lines constituting the integrated image required for the descriptor generation.
  • the difference value of the next line of the group is obtained from the second integrated image memory 111.
  • the offset and difference values thus obtained are added to calculate the integrated image pixel values and output to the rotation calculator 125 and the descriptor calculator 127, respectively.
  • The rotation calculation unit 125 calculates a rotation value based on the integral image pixel values obtained by the second integral value acquisition unit 123. Based on the coordinates and scale of a feature point read from the feature point storage unit 121, the main orientation of the feature point is computed from the integral values of a specific area. This indicates how far the current feature point is rotated, and serves as the criterion for determining the area of integral values used when computing the descriptor.
  • the descriptor calculating unit 127 calculates the descriptor based on the integrated image pixel value obtained by the second integral value obtaining unit 123.
  • the descriptor is assigned to the feature point stored in the feature point storage unit 121.
  • The descriptor uses integral values of a specific area, scaled around the feature point; after Haar-wavelet filtering, it is computed and expressed as a total of four components: dx, dy, |dx|, and |dy|.
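The four per-region sums can be sketched as follows; this follows the standard SURF descriptor layout (sums of Haar-wavelet responses per sub-region), with the function name and list-based inputs being illustrative:

```python
def subregion_components(dx, dy):
    # The four descriptor components for one sub-region:
    # sum(dx), sum(dy), sum(|dx|), sum(|dy|) of Haar-wavelet responses.
    return (sum(dx), sum(dy),
            sum(abs(v) for v in dx), sum(abs(v) for v in dy))
```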
  • the descriptor storage unit 129 stores descriptors calculated by the descriptor calculator 127.
  • FIG. 5 is a flowchart illustrating a method of generating and obtaining an integral image according to an exemplary embodiment of the present invention, that is, the integral image generation and acquisition operation of the SURF hardware apparatus 100 described above.
  • the image fading unit 103 reduces the maximum number of bits to a predetermined bit by applying a fading technique to the entire black-and-white image received (S101).
  • The integral image generator 105 sequentially calculates pixel values in group units of one or more lines of the faded black-and-white image, calculates the accumulated pixel value of the last row or last column of each group as its offset (S103), and stores it in the offset buffer 107 (S105).
  • The integral image generation unit 105 calculates, for each line, the accumulated pixel value of the last row or last column of the line as the difference value of that line (S107), and stores it in the first integral image memory 109 and the second integral image memory 111, respectively (S109).
  • The first integral value obtaining unit 115 obtains, from the offset buffer 107, the offset indexed to the corresponding group among the lines constituting the integral image required by the Hessian calculating unit 117 (S111), and obtains the difference value of the group's next line from the first integral image memory 109 (S113).
  • the first integrated value obtainer 115 calculates an integrated image pixel value by adding the offset acquired in step S111 and the difference value obtained in step S113 (S115) and outputs the integrated image pixel value to the Hessian calculator 117.
  • the second integrated value obtaining unit 123 obtains, from the offset buffer 107, an offset indexed to a corresponding group among a plurality of lines constituting the integrated image required for the descriptor generation (S117).
  • the difference value of the next line of the group is obtained from the second integrated image memory 111 (S119).
  • The second integral value obtaining unit 123 calculates the integral image pixel value by adding the offset obtained in step S117 and the difference value obtained in step S119 (S121), and outputs it to the rotation calculating unit 125 and the descriptor calculating unit 127, respectively.
  • The embodiments of the present invention described above may be implemented not only through the apparatus and the method, but also through a program realizing functions corresponding to the configurations of the embodiments, or a recording medium on which such a program is recorded.

Abstract

Disclosed are a hardware apparatus and a method for generating an integral image. Herein, a hardware apparatus for calculating the features and a descriptor of an image comprises: an image fading unit for fading the entirety of a received black and white image and reducing the maximum number of bits to a previously defined number of bits; an integral image generation unit for sequentially calculating the pixel values of the black and white image outputted by the image fading unit, and buffering, by an offset, the accumulated pixel values up to the previous line, and then storing, as a differential value, the pixel values accumulated starting from the first pixel of the next line; and a first integral value acquisition unit for calculating the pixel values of a first integral image needed to extract the features by means of the offset and the differential value.

Description

하드웨어 장치 및 적분 이미지 생성 방법How to Create Hardware Devices and Integral Images
본 발명은 하드웨어 장치 및 적분 이미지 생성 방법에 관한 것으로, 특히, 객체·장면 인식 및 하드웨어·시스템-온-칩 기술 분야에 관한 것이다.TECHNICAL FIELD The present invention relates to a hardware device and a method for generating an integrated image, and more particularly, to the field of object, scene recognition and hardware system-on-chip technology.
SURF(Speeded Up Robust Feature) 알고리즘은 사물의 특징점을 추출하는 대표적인 알고리즘 중의 하나이다.SURF (Speeded Up Robust Feature) algorithm is one of representative algorithms for extracting feature points of objects.
SURF 알고리즘은 입력 흑백 영상을 토대로 내부적으로 적분 이미지를 재생성하여 객체·장면 인식을 수행한다. 그런데 입력 해상도가 커짐에 따라 최대 누적값 또한 커진다. 결국 이를 표현하기 위해 필요로 하는 비트 수가 커져 전체 메모리 사용량이 늘어나는 문제가 있다.The SURF algorithm regenerates the integrated image internally based on the input black and white image to perform object and scene recognition. However, as the input resolution increases, the maximum cumulative value also increases. As a result, the number of bits required to express this increases, which increases the total memory usage.
640×480 해상도의 경우, 적분 이미지의 필요한 최대 비트 수는 27 비트가 된다. 27비트의 307200(640×480)개 값을 저장하는 것은 하드웨어 및 시스템-온-칩 구현에 있어 현실적으로 불가능하다. 알고리즘의 구현을 위해서 이러한 적분 이미지 메모리 사용량 감소 방법이 절실히 요구되고 있다. For 640x480 resolution, the required maximum number of bits in the integrated image is 27 bits. Storing 307200 (640 x 480) values of 27 bits is practically impossible for hardware and system-on-chip implementations. In order to implement the algorithm, an integrated image memory usage reduction method is urgently needed.
따라서, 본 발명이 이루고자 하는 기술적 과제는 적분 이미지 메모리의 크기를 감소시키는 하드웨어 장치 및 적분 이미지 생성 방법을 제공하는 것이다.Accordingly, it is an object of the present invention to provide a hardware device and an integrated image generating method for reducing the size of an integrated image memory.
본 발명의 하나의 특징에 따르면, 하드웨어 장치는 영상의 특징점 및 서술자를 연산하는 하드웨어 장치로서, 입력받은 흑백 영상 전체를 페이딩시켜 최대 비트 수를 기 정의된 비트로 감소시키는 영상 페이딩부, 상기 영상 페이딩부로부터 출력되는 흑백 영상의 화소값을 순차적으로 계산하여 이전 라인까지 누적된 화소값을 오프셋으로 버퍼링하며, 다음 라인의 처음 화소부터 누적된 화소값을 차분값으로 저장하는 적분 이미지 생성부, 그리고 상기 오프셋 및 상기 차분값을 이용하여 상기 특징점을 추출하는데 필요한 제1 적분 이미지의 화소값을 산출하는 제1 적분값 획득부를 포함한다.According to an aspect of the present invention, a hardware device is a hardware device for calculating a feature point and a descriptor of an image, and includes: an image fading unit for fading the entire input black and white image to reduce the maximum number of bits to a predefined bit, the image fading unit An integrated image generator which sequentially calculates pixel values of the black and white image output from the buffer and buffers the pixel values accumulated up to the previous line as an offset, and stores the pixel values accumulated from the first pixel of the next line as a difference value, and the offset And a first integrated value obtaining unit configured to calculate pixel values of a first integrated image required to extract the feature points using the difference values.
상기 하드웨어 장치는, The hardware device,
상기 영상 페이딩부로부터 출력되는 흑백 영상의 이전 라인의 마지막 행 또는 마지막 열의 누적된 화소값인 오프셋이 저장되는 오프셋 버퍼, 그리고 상기 이전 라인의 다음 라인의 처음 화소부터 계산되어 상기 다음 라인의 마지막 행 또는 마지막 열의 누적된 화소값인 차분값이 저장되는 제1 적분 이미지 메모리를 더 포함하고,An offset buffer in which an offset, which is an accumulated pixel value of a last row or a last column of a previous line, of the black and white image output from the image fading unit is stored, and the last row of the next line calculated from the first pixel of the next line of the previous line or A first integrated image memory in which a difference value, which is a accumulated pixel value of a last column, is stored;
상기 제1 적분값 획득부는,The first integrated value obtaining unit,
상기 오프셋 버퍼로 접근하여 상기 제1 적분 이미지에 해당하는 이전 라인의 오프셋을 획득하고, 상기 제1 적분 이미지 메모리로 접근하여 상기 이전 라인의 다음 라인의 차분값을 획득하며, 획득한 오프셋 및 획득한 차분값을 합산하여 상기 제1 적분 이미지의 화소값을 계산할 수 있다.Access to the offset buffer to obtain an offset of a previous line corresponding to the first integrated image; access to the first integrated image memory to obtain a difference value of a next line of the previous line; The pixel values of the first integrated image may be calculated by adding the difference values.
상기 오프셋 버퍼는,The offset buffer,
하나 이상의 라인을 포함하는 그룹 단위로 상기 그룹 내 마지막 행 또는 마지막 열의 누적된 화소의 적분값인 오프셋이 상기 그룹 별로 인덱싱되어 저장될 수 있다.An offset, which is an integral value of accumulated pixels of the last row or the last column in the group, may be indexed and stored for each group in a group unit including one or more lines.
상기 제1 적분값 획득부는 스케일 단위로 복수개이고,The first integrated value acquisition unit is a plurality of scale units,
상기 오프셋 버퍼 및 상기 제1 적분 이미지 메모리를 복수개의 상기 제1 적분값 획득부가 서로 공유할 수 있다.The plurality of first integration value acquirers may share the offset buffer and the first integrated image memory.
상기 하드웨어 장치는, The hardware device,
상기 이전 라인의 다음 라인의 처음 화소부터 계산되어 상기 다음 라인의 마지막 행 또는 마지막 열의 누적된 화소값인 차분값이 저장되는 제2 적분 이미지 메모리 그리고 상기 오프셋 버퍼로 접근하여 상기 서술자를 생성하는데 필요한 제2 적분 이미지에 해당하는 오프셋을 획득하고, 상기 제2 적분 이미지 메모리로 접근하여 상기 오프셋 다음 라인의 차분값을 획득하며, 획득한 오프셋 및 획득한 차분값을 합산하여 상기 제2 적분 이미지의 화소값을 계산할 수 있다.A second integrated image memory which is calculated from the first pixel of the next line of the previous line and stores the difference value, which is the accumulated pixel value of the last row or last column of the next line, and a second necessary to access the offset buffer to generate the descriptor. Obtain an offset corresponding to a second integrated image, access the second integrated image memory to obtain a difference value of a line following the offset, and add the obtained offset and the obtained difference value to sum the pixel value of the second integrated image; Can be calculated.
본 발명의 다른 특징에 따르면, 적분 이미지 생성 방법은 하드웨어 장치가 입력받은 영상의 특징점 및 서술자를 연산하기 위한 적분 이미지를 생성하는 방법으로서, 입력받은 흑백 영상 전체에 대해 최대 비트 수를 기 정의된 비트로 감소시키는 페이딩을 수행하는 단계, 상기 페이딩된 흑백 영상의 이전 라인의 마지막 행 또는 마지막 열의 누적된 화소값인 오프셋을 계산하는 단계, 상기 이전 라인의 다음 라인의 처음 화소부터 계산되어 누적된 상기 다음 라인의 마지막 행 또는 마지막 열의 화소값인 차분값을 계산하는 단계, 그리고 상기 오프셋 및 상기 차분값을 이용하여 상기 적분 이미지의 화소값을 산출하는 단계를 포함한다.According to another aspect of the present invention, an integrated image generating method is a method of generating an integrated image for calculating a feature point and a descriptor of an input image by a hardware device, and the maximum number of bits for the entire input black and white image is defined as a predetermined bit. Performing fading fading; calculating an offset which is a cumulative pixel value of a last row or a last column of a previous line of the faded monochrome image; the next line calculated and accumulated from the first pixel of a next line of the previous line Calculating a difference value that is a pixel value of a last row or a last column of, and calculating a pixel value of the integrated image by using the offset and the difference value.
상기 차분값을 계산하는 단계는, The step of calculating the difference value,
상기 특징점을 추출하는데 필요한 제1 적분 이지미의 제1 차분값을 계산하여 저장하는 단계, 그리고 상기 서술자를 추출하는데 필요한 제2 적분 이미지의 제2 차분값을 계산하여 저장하는 단계를 포함할 수 있다.And calculating and storing a first difference value of the first integrated image required to extract the feature point, and calculating and storing a second difference value of the second integrated image required to extract the descriptor.
상기 산출하는 단계는,The calculating step,
상기 오프셋 및 상기 제1 차분값을 합산하여 상기 제1 적분 이미지의 화소값을 산출하는 단계, 그리고 상기 오프셋 및 상기 제2 차분값을 합산하여 상기 제2 적분 이미지의 화소값을 산출하는 단계를 포함할 수 있다.Calculating the pixel value of the first integrated image by summing the offset and the first difference value, and calculating the pixel value of the second integrated image by summing the offset and the second difference value. can do.
본 발명의 실시예에 따르면, 페이딩 기법을 통해 입력 영상의 비트 수를 감소시킨다. 또한, 입력 영상의 화소 적분값을 한 라인의 마지막 열에서의 누적된 값을 별도로 저장하여 오프셋으로 사용하고, 새로운 라인의 적분 이미지를 생성 시에는 바로 이 오프셋을 가지고 있기 때문에 처음 화소부터 다시 누적하여 차분값을 형성하며, 오프셋 및 차분값을 합산하여 적분 이미지를 생성함으로써, 전체 화소의 적분값으로 모두 저장하지 않아도 되어 적분 이미지 메모리의 크기를 감소시킬 수 있다.According to an embodiment of the present invention, the number of bits of the input image is reduced through a fading technique. In addition, the pixel integrated value of the input image is stored as an offset by storing the accumulated value in the last column of one line separately, and when the integrated image of the new line is generated, the offset is accumulated again from the first pixel. By forming the difference value and generating the integrated image by summing the offset and the difference value, it is not necessary to store all of the integrated values of all the pixels, thereby reducing the size of the integrated image memory.
도 1은 본 발명의 실시예에 따른 SURF 하드웨어 장치의 구성도이다.1 is a block diagram of a SURF hardware device according to an embodiment of the present invention.
도 2는 본 발명의 실시예에 따른 오프셋 버퍼의 구성도이다.2 is a configuration diagram of an offset buffer according to an embodiment of the present invention.
도 3은 본 발명의 실시예에 따른 적분 이미지 메모리의 구성도이다.3 is a block diagram of an integrated image memory according to an exemplary embodiment of the present invention.
도 4는 본 발명의 실시예에 따른 스케일 처리부 및 특징점 추출부의 구현 예시도이다.4 is a diagram illustrating an implementation of a scale processor and a feature point extractor according to an exemplary embodiment of the present invention.
도 5는 본 발명의 실시예에 따른 적분 이미지 생성 및 획득 방법을 나타낸 순서도이다.5 is a flowchart illustrating a method of generating and obtaining an integrated image according to an exemplary embodiment of the present invention.
Hereinafter, exemplary embodiments of the present invention are described in detail with reference to the accompanying drawings so that those skilled in the art may easily practice the invention. The invention may, however, be embodied in many different forms and is not limited to the embodiments set forth herein. In the drawings, parts irrelevant to the description are omitted for clarity, and like reference numerals designate like parts throughout the specification.
Throughout the specification, when a part is said to "include" a component, this means that it may further include other components rather than excluding them, unless specifically stated otherwise.
In addition, the terms "unit" and "module" used in the specification denote a unit that processes at least one function or operation, and may be implemented in hardware, in software, or in a combination of hardware and software.
Hereinafter, a hardware device and an integral image generation method according to an embodiment of the present invention are described in detail with reference to the drawings.
FIG. 1 is a block diagram of a Speeded Up Robust Feature (SURF) hardware device according to an embodiment of the present invention, FIG. 2 is a block diagram of an offset buffer according to an embodiment of the present invention, FIG. 3 is a block diagram of an integral image memory according to an embodiment of the present invention, and FIG. 4 illustrates an implementation of a scale processor and a feature point extractor according to an embodiment of the present invention.
First, referring to FIG. 1, the SURF hardware device 100 generates feature points and descriptors. Image pyramids are generated to represent the scale space from which feature points are extracted, and feature points are extracted from the generated pyramids. The integral image used here is the core of the SURF algorithm.
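For reference, the integral image of the SURF algorithm is the standard summed-area table: the value at (x, y) is the sum of all pixels above and to the left of (x, y), inclusive. A minimal sketch of this definition (illustrative code, not the patent's hardware datapath):

```python
import numpy as np

def integral_image(img):
    """Return ii with ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return np.cumsum(np.cumsum(img.astype(np.int64), axis=0), axis=1)

img = np.arange(12).reshape(3, 4)
ii = integral_image(img)
# The bottom-right entry equals the sum of the whole image.
assert ii[-1, -1] == img.sum()
```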
The SURF hardware device 100 includes an image storage unit 101, an image fading unit 103, an integral image generator 105, an offset buffer 107, a first integral image memory 109, a second integral image memory 111, a scale processor 113, a first integral value acquirer 115, a Hessian calculator 117, a feature point extractor 119, a feature point storage unit 121, a second integral value acquirer 123, a rotation calculator 125, a descriptor calculator 127, and a descriptor storage unit 129.
The architecture forms two large blocks with the feature point storage unit 121 as the boundary: the left block is the feature point extractor and the right block is the descriptor generator.
The image storage unit 101 stores the input monochrome image.
The image fading unit 103 fades the entire monochrome image stored in the image storage unit 101 to reduce the maximum number of bits to a predefined bit width.
In a monochrome image, the analog input is quantized into digital values from 0 to 255. The fading technique reduces this number of 256 quantization levels. When the number of quantization levels is reduced, the contrast or tone of the image may change, but the image content itself does not change significantly. Because object recognition is not based on the contrast or color of the image, the number of quantization levels has little effect on the recognition rate.
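As a rough illustration of the fading idea, reducing the 256 quantization levels can be sketched as a uniform requantization; the exact mapping used by the image fading unit 103 is not specified here, so the scaling below is an assumption:

```python
import numpy as np

def fade(img, levels=192):
    """Uniformly requantize 8-bit values (0..255) down to `levels` levels."""
    img = np.asarray(img, dtype=np.uint16)  # widen first to avoid overflow
    return (img * levels // 256).astype(np.uint8)  # values in 0..levels-1

pixels = np.array([0, 127, 255])
faded = fade(pixels)
assert faded.max() <= 191 and faded.min() >= 0
```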
When the number of quantization levels is reduced from 256 to 192, the representation width drops from 8 bits to 6 bits, and for a 640×480 image the integral image memory usage is reduced by about 8%.
The integral image generator 105 sequentially computes and accumulates, in the line direction, the pixel values of the pixels making up the faded monochrome image received from the image fading unit 103.
Here, a line may consist of a plurality of pixels arranged along a row of the image, or of a plurality of pixels arranged along a column of the image.
The integral image generator 105 sequentially computes and accumulates pixel values in units of groups, each group containing one or more lines. The pixel value accumulated at the last row or last column of a group is taken as the offset of that group and stored in the offset buffer 107.
The integral image generator 105 also computes differential values based on the offsets and stores them in the first integral image memory 109 and the second integral image memory 111.
When computing a differential value, the integral image generator 105 starts a fresh accumulation from the pixel value at the first row or first column of the line following the previous line, independently of the pixel values accumulated over the previous line.
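The offset and differential bookkeeping described above can be sketched in software as follows; the two-line group size and the row-direction lines are illustrative choices:

```python
import numpy as np

def build_offsets_and_diffs(img, group_lines=2):
    """Row-wise prefix sums restarted at each row's first pixel (differential
    values), plus one offset per group: the raster-order total accumulated
    through the group's last row."""
    img = np.asarray(img, dtype=np.int64)
    diffs = np.cumsum(img, axis=1)                    # restart at each row's first pixel
    row_totals = diffs[:, -1]
    running = np.cumsum(row_totals)                   # raster total through each row
    offsets = running[group_lines - 1::group_lines]   # one offset per group
    return offsets, diffs

img = np.arange(1, 13).reshape(3, 4)   # 3 rows of 4 pixels
offsets, diffs = build_offsets_and_diffs(img, group_lines=2)
# Accumulated value at row 2, column 1 = first group's offset + row 2's differential.
assert offsets[0] + diffs[2, 1] == img.ravel()[: 2 * 4 + 2].sum()
```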
The offset buffer 107 stores the offsets computed by the integral image generator 105, indexed by group. Because the offset buffer 107 is implemented as registers, no conflict occurs even when the feature point extractor and the descriptor generator access it simultaneously.
The offset buffer 107 may be implemented as shown in FIG. 2.
Referring to FIG. 2(a), the integral image generator 105 accumulates in units of groups each containing N lines (Row or Column #0 to Row or Column #N), and the buffered offset P1 is stored in the offset buffer 107 group by group, as shown in FIG. 2(b).
That is, the first group RG #0 contains N lines (Row or Column #0 to Row or Column #N), and the offset stored for the first group RG #0 is the pixel value accumulated at the last row or last column of the N-th line (Row or Column #N). Likewise, the offset stored for the second group RG #1 is the pixel value accumulated at the last row or last column of the 2N-th line (Row or Column #2N).
For a 640×480, 8-bit monochrome image, computing offsets in groups of two lines reduces the representation width of an integral image pixel value from the conventional 27 bits to 19 bits, cutting integral image memory usage by about 30%.
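The 27-bit and 19-bit figures follow from the largest values each representation must hold; a quick check of the arithmetic, assuming the worst case accumulates within a two-line group:

```python
import math

W, H, MAX = 640, 480, 255
full_bits = math.ceil(math.log2(W * H * MAX))    # worst-case full integral value
group_bits = math.ceil(math.log2(2 * W * MAX))   # worst case within a 2-line group
assert full_bits == 27 and group_bits == 19
# roughly a 30% reduction in stored bits per value
assert round((1 - group_bits / full_bits) * 100) == 30
```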
Referring again to FIG. 1, the first integral image memory 109 stores the differential values of the integral image required for feature point extraction.
The second integral image memory 111 stores the differential values of the integral image required for descriptor extraction.
The integral image memory is split into the independent first integral image memory 109 and second integral image memory 111 because the first integral value acquirer 115 and the second integral value acquirer 123 can operate simultaneously rather than sequentially. With a single integral image memory, access conflicts could occur during simultaneous operation.
In addition, because the first integral value acquirer 115 and the second integral value acquirer 123 are interfaced through an intervening FIFO, they can operate independently of each other.
The first integral image memory 109 is a dedicated memory for feature point extraction, shared by the plurality of scale processors 113.
From the first integral image memory 109, integral image pixel values are filtered over a specific rectangular region of the faded input monochrome image; this region is not large, and may be a rectangular box of up to 51×51 or somewhat larger.
The second integral image memory 111 is a dedicated memory for descriptor extraction. It stores all the pixel values of the integral image of the faded input monochrome image frame, covering a considerably large region centered on each feature point.
Here, the first integral image memory 109 and the second integral image memory 111 may be implemented as shown in FIG. 3.
Referring to FIG. 3, the first integral image memory 109 and the second integral image memory 111 store differential values indexed by each line of the image (Row or Column #0 to Row or Column #N). That is, the differential value of a line is the pixel value computed sequentially from the first row or first column (0,0) of that line and accumulated up to its last row or last column.
Referring again to FIG. 1, the scale processor 113 performs parallel operations over a plurality of scales during feature point extraction. Here, a scale is the filtering operation over the specific rectangular region of the first integral image memory 109 described above; there may be, for example, six scales. Accordingly, as many scale processors 113 as there are scales may be provided; implemented as pipelines, six or more can run in parallel.
Each scale processor 113 filters with a different rectangle size. For example, the scale processors 113 may be implemented with filter sizes of 9×9, 15×15, 21×21, 27×27, 39×39, and 51×51, respectively.
Each scale processor 113 includes a first integral value acquirer 115 and a Hessian calculator 117.
The first integral value acquirer 115 sums an offset obtained by accessing the offset buffer 107 with a differential value obtained by accessing the first integral image memory 109 to compute the integral image pixel values required by the Hessian calculator 117.
That is, among the lines making up the integral image required by the Hessian calculator 117, the first integral value acquirer 115 obtains from the offset buffer 107 the offset indexed to the corresponding group and obtains from the first integral image memory 109 the differential value of the line following that group. It sums the obtained offset and differential value to compute the integral image pixel value and outputs it to the Hessian calculator 117.
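The acquisition itself then reduces to one addition per pixel; a sketch, with the buffer and memory reads stood in by plain lookups and the numeric values purely illustrative:

```python
def acquire_integral_pixel(offset_buffer, diff_memory, group_idx, line_idx, col_idx):
    """Reconstruct an integral image pixel value as group offset + line differential.
    offset_buffer and diff_memory stand in for the hardware buffer/memory reads."""
    return offset_buffer[group_idx] + diff_memory[line_idx][col_idx]

offset_buffer = {0: 36}              # offset of group RG #0 (illustrative value)
diff_memory = {2: [9, 19, 30, 42]}   # differential values of the following line (illustrative)
assert acquire_integral_pixel(offset_buffer, diff_memory, 0, 2, 1) == 55
```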
Referring back to FIGS. 2 and 3, the first integral value acquirer 115 may compute an integral image pixel value from the offset of the first group RG #0 and the differential value of the line following the first group RG #0 (Row or Column #N+1).
The Hessian calculator 117 performs box filter operations on the integral image pixel values received from the first integral value acquirer 115 to compute the Hessian determinant.
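A box filter response over any rectangle costs four integral image lookups; a standard sketch of this identity (the hardware's specific Hessian filter weights are not detailed here):

```python
import numpy as np

def box_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom+1, left:right+1] using four integral image lookups."""
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

img = np.arange(16).reshape(4, 4)
ii = np.cumsum(np.cumsum(img, axis=0), axis=1)
assert box_sum(ii, 1, 1, 2, 2) == img[1:3, 1:3].sum()
```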
The feature point extractor 119 builds an image pyramid from the Hessian determinants computed by the Hessian calculator 117 and extracts feature points.
The feature point extractor 119 is provided in plurality, corresponding to the number of scale processors 113, and may be implemented as shown in FIG. 4.
Referring to FIG. 4, if there are six scale processors 113 in total, four feature point extractors 119 are implemented. The six scale processors 113 are indexed S0, S1, S2, S3, S4, and S5, and the four feature point extractors 119 are indexed F0, F1, F2, and F3.
Obviously, with six scale processors 113, six first integral value acquirers 115 and six Hessian calculators 117 are implemented as well.
Here, F0 is connected to S0, S1, and S2; F1 to S0, S1, and S3; F2 to S1, S3, and S4; and F3 to S2, S4, and S5. Thus, each feature point extractor 119 extracts final feature points based on the feature points output by three scale processors 113.
The feature point storage unit 121 stores the feature points extracted by the feature point extractor 119 and is implemented as a FIFO (First In, First Out).
Among the lines making up the integral image required for descriptor generation, the second integral value acquirer 123 obtains from the offset buffer 107 the offset indexed to the corresponding group and obtains from the second integral image memory 111 the differential value of the line following that group. It sums the obtained offset and differential value to compute the integral image pixel values and outputs them to the rotation calculator 125 and the descriptor calculator 127.
The rotation calculator 125 computes a rotation value from the integral image pixel values obtained by the second integral value acquirer 123. Based on the coordinates and scale of each feature point taken from the feature point storage unit 121, it computes the dominant orientation of the feature point from the integral values of a specific region centered on the feature point. This reveals how far the current feature point is rotated, which serves as the reference for determining the region of integral values used when computing the descriptor.
The descriptor calculator 127 computes descriptors from the integral image pixel values obtained by the second integral value acquirer 123 and assigns a descriptor to each feature point stored in the feature point storage unit 121. A descriptor uses the integral values of a scale-dependent region centered on the feature point; after Haar wavelet filtering, it is computed and expressed as four components: dx, dy, |dx|, and |dy|. The region of integral values used to compute the descriptor is adjusted by the previously computed rotation (angle).
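Aggregating Haar wavelet responses into the four components dx, dy, |dx|, and |dy| can be sketched per sub-region as below; the sample values are illustrative and the sampling grid and weighting are assumptions, not the patent's exact hardware configuration:

```python
import numpy as np

def subregion_descriptor(dx_samples, dy_samples):
    """Aggregate Haar responses of one sub-region into (sum dx, sum dy, sum |dx|, sum |dy|)."""
    dx = np.asarray(dx_samples, dtype=float)
    dy = np.asarray(dy_samples, dtype=float)
    return np.array([dx.sum(), dy.sum(), np.abs(dx).sum(), np.abs(dy).sum()])

desc = subregion_descriptor([1.0, -2.0], [0.5, 0.5])
assert np.allclose(desc, [-1.0, 1.0, 3.0, 1.0])
```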
The descriptor storage unit 129 stores the descriptors computed by the descriptor calculator 127.
FIG. 5 is a flowchart of a method of generating and obtaining an integral image according to an embodiment of the present invention, that is, of the integral image generation and acquisition operations of the SURF hardware device 100 described above.
Referring to FIG. 5, the image fading unit 103 applies the fading technique to the entire input monochrome image to reduce the maximum number of bits to a predefined width (S101).
The integral image generator 105 sequentially computes pixel values in units of groups, each containing one or more lines of the faded monochrome image, derives the pixel value accumulated at the last row or last column of each group as the offset of that group (S103), and stores it in the offset buffer 107 (S105).
The integral image generator 105 also derives, for each line, the pixel value accumulated at the last row or last column of the line as the differential value of that line (S107) and stores it in the first integral image memory 109 and the second integral image memory 111 (S109).
Next, among the lines making up the integral image required by the Hessian calculator 117, the first integral value acquirer 115 obtains from the offset buffer 107 the offset indexed to the corresponding group (S111), and then obtains from the first integral image memory 109 the differential value of the line following that group (S113).
The first integral value acquirer 115 then sums the offset obtained in step S111 and the differential value obtained in step S113 to compute the integral image pixel value (S115), which it outputs to the Hessian calculator 117.
Likewise, among the lines making up the integral image required for descriptor generation, the second integral value acquirer 123 obtains from the offset buffer 107 the offset indexed to the corresponding group (S117), and then obtains from the second integral image memory 111 the differential value of the line following that group (S119).
The second integral value acquirer 123 then sums the offset obtained in step S117 and the differential value obtained in step S119 to compute the integral image pixel value (S121), which it outputs to the rotation calculator 125 and the descriptor calculator 127.
The embodiments of the present invention described above are not implemented only through an apparatus and a method; they may also be implemented through a program that realizes functions corresponding to the configuration of the embodiments, or through a recording medium on which such a program is recorded.
Although embodiments of the present invention have been described in detail above, the scope of the invention is not limited thereto; various modifications and improvements by those skilled in the art using the basic concept of the invention as defined in the following claims also fall within its scope.

Claims (8)

  1. A hardware device for computing feature points and descriptors of an image, the hardware device comprising:
    an image fading unit that fades an entire input monochrome image to reduce the maximum number of bits to a predefined bit width;
    an integral image generator that sequentially computes pixel values of the monochrome image output from the image fading unit, buffers the pixel value accumulated up to a previous line as an offset, and stores the pixel value accumulated from the first pixel of a next line as a differential value; and
    a first integral value acquirer that computes, using the offset and the differential value, the pixel values of a first integral image required to extract the feature points.
  2. The hardware device of claim 1, further comprising:
    an offset buffer that stores the offset, the offset being the pixel value accumulated at the last row or last column of a previous line of the monochrome image output from the image fading unit; and
    a first integral image memory that stores the differential value, the differential value being the pixel value computed from the first pixel of the line following the previous line and accumulated up to the last row or last column of that next line,
    wherein the first integral value acquirer accesses the offset buffer to obtain the offset of the previous line corresponding to the first integral image, accesses the first integral image memory to obtain the differential value of the line following the previous line, and sums the obtained offset and differential value to compute the pixel values of the first integral image.
  3. The hardware device of claim 2, wherein the offset buffer stores, indexed by group, an offset that is the accumulated integral value of the pixels up to the last row or last column of each group, each group containing one or more lines.
  4. The hardware device of claim 2, wherein a plurality of the first integral value acquirers are provided, one per scale, and the plurality of first integral value acquirers share the offset buffer and the first integral image memory.
  5. The hardware device of claim 3, further comprising a second integral image memory that stores the differential value, the differential value being the pixel value computed from the first pixel of the line following the previous line and accumulated up to the last row or last column of that next line,
    the hardware device accessing the offset buffer to obtain the offset corresponding to a second integral image required to generate the descriptors, accessing the second integral image memory to obtain the differential value of the line following the offset, and summing the obtained offset and differential value to compute the pixel values of the second integral image.
  6. A method by which a hardware device generates an integral image for computing feature points and descriptors of an input image, the method comprising:
    performing fading on an entire input monochrome image to reduce the maximum number of bits to a predefined bit width;
    computing an offset, the offset being the pixel value accumulated at the last row or last column of a previous line of the faded monochrome image;
    computing a differential value, the differential value being the pixel value computed from the first pixel of the line following the previous line and accumulated up to the last row or last column of that next line; and
    computing the pixel values of the integral image using the offset and the differential value.
  7. The method of claim 6, wherein computing the differential value comprises:
    computing and storing a first differential value of a first integral image required to extract the feature points; and
    computing and storing a second differential value of a second integral image required to extract the descriptors.
  8. The method of claim 7, wherein the computing of the pixel values comprises:
    summing the offset and the first differential value to compute the pixel values of the first integral image; and
    summing the offset and the second differential value to compute the pixel values of the second integral image.
PCT/KR2013/003520 2013-04-24 2013-04-24 Hardware apparatus and method for generating integral image WO2014175480A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2013/003520 WO2014175480A1 (en) 2013-04-24 2013-04-24 Hardware apparatus and method for generating integral image


Publications (1)

Publication Number Publication Date
WO2014175480A1 true WO2014175480A1 (en) 2014-10-30

Family

ID=51792037


Country Status (1)

Country Link
WO (1) WO2014175480A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007164772A (en) * 2005-12-14 2007-06-28 Mitsubishi Electric Research Laboratories Inc Method for constructing descriptors for set of data samples implemented by computer
JP2008210063A (en) * 2007-02-23 2008-09-11 Hiroshima Univ Image feature extraction apparatus, image retrieval system, video feature extraction apparatus, and query image retrieval system, and their methods, program, and computer readable recording medium
JP2009535680A (en) * 2006-04-28 2009-10-01 トヨタ モーター ヨーロッパ ナムローゼ フェンノートシャップ Robust interest point detector and descriptor
US20090310872A1 (en) * 2006-08-03 2009-12-17 Mitsubishi Denki Kabushiki Kaisha Sparse integral image descriptors with application to motion analysis
KR20110002043A (en) * 2008-04-23 2011-01-06 미쓰비시덴키 가부시키가이샤 Scale robust feature-based identifiers for image identification


Similar Documents

Publication Publication Date Title
WO2018174623A1 (en) Apparatus and method for image analysis using virtual three-dimensional deep neural network
WO2019190139A1 (en) Device and method for convolution operation
WO2018113239A1 (en) Data scheduling method and system for convolutional neural network, and computer device
WO2013183918A1 (en) Image processing apparatus and method for three-dimensional (3d) image
WO2011126328A2 (en) Apparatus and method for removing noise generated from image sensor
WO2015182904A1 (en) Area of interest studying apparatus and method for detecting object of interest
WO2017222301A1 (en) Encoding apparatus and method, and decoding apparatus and method
WO2013118955A1 (en) Apparatus and method for depth map correction, and apparatus and method for stereoscopic image conversion using same
WO2015160052A1 (en) Method for correcting image from wide-angle lens and device therefor
WO2016186236A1 (en) Color processing system and method for three-dimensional object
WO2014010820A1 (en) Method and apparatus for estimating image motion using disparity information of a multi-view image
WO2010074386A1 (en) Method for detecting and correcting bad pixels in image sensor
WO2016148516A1 (en) Image compression method and image compression apparatus
WO2016114574A2 (en) Method and device for filtering texture, using patch shift
WO2019112084A1 (en) Method for removing compression distortion by using cnn
WO2014175480A1 (en) Hardware apparatus and method for generating integral image
WO2021101052A1 (en) Weakly supervised learning-based action frame detection method and device, using background frame suppression
WO2017003240A1 (en) Image conversion device and image conversion method therefor
WO2017213335A1 (en) Method for combining images in real time
WO2017086522A1 (en) Method for synthesizing chroma key image without requiring background screen
WO2015083857A1 (en) Surf hardware apparatus, and method for managing integral image
WO2019135625A1 (en) Image display device and control method therefor
WO2023033469A1 (en) Method for 3d-cropping medical image and device therefor
WO2019225799A1 (en) Method and device for deleting user information using deep learning generative model
WO2016006901A1 (en) Method and apparatus for extracting depth information from image

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 13883141

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 13883141

Country of ref document: EP

Kind code of ref document: A1