WO2014175480A1 - Hardware apparatus and method for generating an integral image - Google Patents

Hardware apparatus and method for generating an integral image

Info

Publication number
WO2014175480A1
WO2014175480A1 (PCT/KR2013/003520)
Authority
WO
WIPO (PCT)
Prior art keywords
integrated image
image
offset
value
pixel
Prior art date
Application number
PCT/KR2013/003520
Other languages
English (en)
Korean (ko)
Inventor
최병호
김제우
이상설
황영배
장성준
김정호
Original Assignee
전자부품연구원
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 전자부품연구원 filed Critical 전자부품연구원
Priority to PCT/KR2013/003520 priority Critical patent/WO2014175480A1/fr
Publication of WO2014175480A1 publication Critical patent/WO2014175480A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • G06V10/507Summing image-intensity values; Histogram projection analysis

Definitions

  • The present invention relates to a hardware device and a method for generating an integrated image, and more particularly to the fields of object and scene recognition and hardware system-on-chip (SoC) technology.
  • SURF: Speeded-Up Robust Features
  • the SURF algorithm regenerates the integrated image internally based on the input black and white image to perform object and scene recognition.
  • As the resolution of the input image increases, the maximum cumulative value of the integrated image also increases, so the number of bits required to represent it grows, which increases the total memory usage.
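To make this memory pressure concrete, here is a rough back-of-the-envelope check (not taken from the patent; the 640×480 resolution and 8-bit pixels are assumptions matching the example discussed later) of how many bits one entry of a full integral image needs:

```python
import math

def integral_image_bits(width, height, max_pixel):
    # Worst case: every pixel holds its maximum value, so the bottom-right
    # entry of the integral image equals width * height * max_pixel.
    max_sum = width * height * max_pixel
    return math.ceil(math.log2(max_sum + 1))

# 640x480 image with 8-bit pixels: worst-case sum 640*480*255 = 78,336,000
print(integral_image_bits(640, 480, 255))  # 27
```

The 27-bit result matches the conventional bit width cited later in the description; each doubling of a dimension adds roughly one more bit per entry.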
  • According to an embodiment of the present invention, a hardware device for calculating the feature points and a descriptor of an image includes: an image fading unit that fades the entire input black-and-white image to reduce the maximum number of bits to a predefined bit count; an integrated image generator that sequentially calculates the pixel values of the black-and-white image output from the image fading unit, buffers the pixel value accumulated up to the previous line as an offset, and stores the pixel values accumulated from the first pixel of the next line as a difference value; and a first integrated value obtaining unit that calculates, using the offset and the difference value, the pixel values of a first integrated image required to extract the feature points.
  • The hardware device may further include: an offset buffer that stores an offset, which is the accumulated pixel value of the last row or last column of a previous line of the black-and-white image output from the image fading unit; and a first integrated image memory that stores a difference value, which is the accumulated pixel value of the last row or last column of the next line, calculated from the first pixel of the next line of the previous line.
  • The first integrated value obtaining unit may calculate the pixel values of the first integrated image by adding the offset and the difference value.
  • An offset, which is the integral value of the accumulated pixels of the last row or last column in a group, may be indexed and stored per group, where each group includes one or more lines.
  • The first integrated value obtaining unit may be provided in plural, one per scale unit, and the plurality of first integrated value obtaining units may share the offset buffer and the first integrated image memory.
  • The hardware device may further include: a second integrated image memory that stores a difference value, which is the accumulated pixel value of the last row or last column of the next line, calculated from the first pixel of the next line of the previous line; and a second integrated value obtaining unit that, to generate the descriptor, accesses the offset buffer to obtain an offset corresponding to a second integrated image, accesses the second integrated image memory to obtain the difference value of the line following the offset, and adds the obtained offset and the obtained difference value to calculate the pixel value of the second integrated image.
  • According to another embodiment of the present invention, an integrated image generating method, in which a hardware device generates an integrated image for calculating the feature points and a descriptor of an input image, includes: fading the entire input black-and-white image so that the maximum number of bits is reduced to a predefined bit count; calculating an offset, which is the accumulated pixel value of the last row or last column of a previous line of the faded monochrome image; calculating a difference value, which is the accumulated pixel value of the last row or last column of the next line, computed from the first pixel of the next line of the previous line; and calculating the pixel values of the integrated image using the offset and the difference value.
  • The pixel value of the first integrated image may be calculated by summing the offset and a first difference value, and the pixel value of the second integrated image may be calculated by summing the offset and a second difference value.
  • the number of bits of the input image is reduced through a fading technique.
  • In addition, the accumulated pixel value at the last column of each line is stored separately as an offset, and when the integrated image of a new line is generated, accumulation restarts from the first pixel of that line instead of continuing from the offset.
  • FIG. 1 is a block diagram of a SURF hardware device according to an embodiment of the present invention.
  • FIG. 2 is a configuration diagram of an offset buffer according to an embodiment of the present invention.
  • FIG. 3 is a block diagram of an integrated image memory according to an exemplary embodiment of the present invention.
  • FIG. 4 is a diagram illustrating an implementation of a scale processor and a feature point extractor according to an exemplary embodiment of the present invention.
  • FIG. 5 is a flowchart illustrating a method of generating and obtaining an integrated image according to an exemplary embodiment of the present invention.
  • The term "... unit" herein means a unit that processes at least one function or operation, and may be implemented in hardware, in software, or in a combination of hardware and software.
  • the SURF hardware device 100 generates a feature point and a descriptor.
  • image pyramids are generated to represent a scale space for extracting feature points, and feature points are extracted from the generated image pyramid.
  • the integral image used is the core of the SURF algorithm.
  • The SURF hardware apparatus 100 includes an image storage unit 101, an image fading unit 103, an integrated image generation unit 105, an offset buffer 107, a first integrated image memory 109, a second integrated image memory 111, a scale processing unit 113, a first integrated value obtaining unit 115, a Hessian calculating unit 117, a feature point extracting unit 119, a feature point storing unit 121, a second integrated value obtaining unit 123, a rotation calculation unit 125, a descriptor calculation unit 127, and a descriptor storage unit 129.
  • The apparatus is organized as two large blocks with the feature point storage unit 121 as the boundary: the left side forms the feature point extraction block and the right side forms the descriptor generation block.
  • the image storage unit 101 stores the input black and white image.
  • the image fading unit 103 fades the entire black-and-white image stored in the image storage unit 101 to reduce the maximum number of bits to a predetermined bit.
  • the analog input is quantized from 0 to 255 to use digital values.
  • The fading technique reduces the number of quantization levels below 256.
  • As a result, the contrast or color of the image may change, but the image itself does not change significantly. Since object recognition is not based on the contrast or color of the image, the number of quantization levels has little effect on the recognition rate.
  • In this case, the representation bits are reduced from 8 bits to 6 bits, and the usage of integrated image memory is reduced by about 8% for 640×480 resolution images.
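The description does not spell out the fading operation itself; a minimal sketch, assuming fading simply drops the low-order bits of each pixel (256 quantization levels down to 64):

```python
def fade(pixels, source_bits=8, target_bits=6):
    # Hypothetical fading: right-shift away the low-order bits so each
    # pixel uses target_bits quantization levels (2**6 = 64 instead of 256).
    shift = source_bits - target_bits
    return [p >> shift for p in pixels]

print(fade([0, 128, 255]))  # [0, 32, 63]
```

Contrast is compressed slightly, but the relative structure of the image, which feature extraction depends on, is preserved.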
  • the integrated image generator 105 sequentially calculates and accumulates pixel values of the pixels constituting the faded monochrome image received from the image fading unit 103 in the line direction.
  • the line may include a plurality of pixels arranged in units of rows of an image or a plurality of pixels arranged in units of columns of an image.
  • the integrated image generation unit 105 sequentially calculates and accumulates pixel values in group units including one or more lines. At this time, the accumulated pixel value of the last row or the last column of the group is calculated as an offset of the group and stored in the offset buffer 107.
  • The integrated image generator 105 calculates a difference value based on the offset and stores the difference value in the first integrated image memory 109 and the second integrated image memory 111, respectively.
  • The integrated image generating unit 105 computes and accumulates the difference value anew, starting from the pixel value of the first row or first column of the next line, regardless of the accumulated pixel value of the previous line.
  • the offset buffer 107 indexes and stores the offset calculated by the integrated image generator 105 for each group.
  • Since the offset buffer 107 is implemented as a register, no collision occurs even when the feature point extraction block and the descriptor generation block access it simultaneously.
  • The offset buffer 107 may be implemented as shown in FIG. 2.
  • As shown in FIG. 2, the integrated image generation unit 105 buffers an offset per group, each group including N lines (Row or Column #0 to Row or Column #N), and the offsets are stored in the offset buffer 107 for each group.
  • The first group RG#0 includes N lines (Row or Column #0 to Row or Column #N), and the offset of the first group RG#0 stores the accumulated pixel value of the last row or last column of the Nth line (Row or Column #N).
  • Similarly, the offset of the second group RG#1 stores the accumulated pixel value of the last row or last column of the 2Nth line (Row or Column #2N).
  • With this scheme, the representation bits of an integrated image pixel value are reduced from the conventional 27 bits to 19 bits, which reduces the usage of integrated image memory by about 30%.
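A simplified software model of the offset/difference decomposition may help. This sketch is an illustration, not the patented hardware; the per-group restart of the running sum is an assumption drawn from the description above. It treats the integrated value as a raster-order running sum split into per-group offsets and within-group difference values:

```python
def build_offsets_and_diffs(image, group_size=1):
    # offsets[g] = accumulated total before group g starts;
    # diffs[line][col] = running sum restarted at each group boundary.
    offsets = [0]
    diffs = []
    running = 0
    for i, line in enumerate(image):
        row = []
        for px in line:
            running += px
            row.append(running)
        diffs.append(row)
        if (i + 1) % group_size == 0:
            offsets.append(offsets[-1] + running)
            running = 0  # restart accumulation for the next group
    return offsets, diffs

def integrated_value(offsets, diffs, line, col, group_size=1):
    # Reconstruct the full running sum: group offset + stored difference.
    return offsets[line // group_size] + diffs[line][col]

offsets, diffs = build_offsets_and_diffs([[1, 2], [3, 4]])
print(integrated_value(offsets, diffs, 1, 1))  # 10 == 1 + 2 + 3 + 4
```

Because each stored difference spans only one group rather than the whole frame, it needs fewer bits, which is the source of the bit-width reduction described above.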
  • the first integrated image memory 109 stores the difference value of the integrated image required for feature point extraction.
  • the second integrated image memory 111 stores the difference value of the integrated image required for the descriptor extraction.
  • Since the first integrated value obtaining unit 115 and the second integrated value obtaining unit 123 are interfaced through a FIFO, they can operate independently of each other.
  • The first integrated image memory 109 is a dedicated memory for feature point extraction, and is shared by the plurality of scale processing units 113.
  • The first integrated image memory 109 holds the integrated image pixel values for a specific rectangular area within the faded input black-and-white image used for filtering, so its size does not need to be large.
  • For example, the specific rectangular area may be a rectangular box of size 51×51 or more.
  • the second integrated image memory 111 is a dedicated memory for descriptor extraction.
  • The second integrated image memory 111 stores all the integrated image pixel values of the faded input black-and-white image frame, since a considerably larger area of the integrated image around each feature point is needed.
  • The first integrated image memory 109 and the second integrated image memory 111 may be implemented as shown in FIG. 3.
  • Difference values are indexed and stored per line (Row or Column #0 to Row or Column #N) of the image; that is, each line's difference value is the accumulated pixel value up to the last row or last column, computed sequentially from the pixel value of the first row or first column (0,0) of that line.
  • the scale processor 113 performs parallel operations on a plurality of scales at the time of feature point extraction.
  • A scale corresponds to a filtering operation over a specific rectangular region of the first integrated image memory 109 described above; there may be, for example, six scales. The scale processor 113 may therefore be implemented in plural, one per scale, as a pipeline composed of six or more parallel operations.
  • each scale processing unit 113 filters by varying the size of a specific rectangle.
  • For example, the scale processors 113 may be implemented with filter sizes of 9×9, 15×15, 21×21, 27×27, 39×39, and 51×51.
  • Each scale processor 113 includes a first integrated value obtaining unit 115 and a Hessian calculator 117.
  • The first integrated value obtaining unit 115 adds the offset obtained by accessing the offset buffer 107 and the difference value obtained by accessing the first integrated image memory 109, thereby calculating the integrated image pixel values required by the Hessian calculation unit 117.
  • The first integrated value obtaining unit 115 obtains, from the offset buffer 107, the offset indexed to the corresponding group among the plurality of lines constituting the integrated image required by the Hessian calculator 117. Then, the difference value of the next line of the group is obtained from the first integrated image memory 109. The offset and difference value thus obtained are added to calculate the integrated image pixel value, which is output to the Hessian calculator 117.
  • For example, the first integrated value obtaining unit 115 may calculate the integrated image pixel value by adding the offset of the first group RG#0 and the difference value of the next line (Row or Column #N+1) of the first group RG#0.
  • the Hessian calculator 117 calculates a Hessian determinant by performing a box filter operation using the integrated image pixel value received from the first integral value obtainer 115.
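The box filter rests on the standard integral-image identity: the sum over any axis-aligned rectangle takes only four lookups, independent of filter size. A generic sketch of this standard technique (not the patent's exact hardware datapath):

```python
def build_integral(img):
    # Plain 2-D integral image: ii[r][c] = sum of img[0..r][0..c].
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r][c] = row_sum + (ii[r - 1][c] if r > 0 else 0)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    # Four-corner identity for the rectangle [r0..r1] x [c0..c1].
    s = ii[r1][c1]
    if r0 > 0:
        s -= ii[r0 - 1][c1]
    if c0 > 0:
        s -= ii[r1][c0 - 1]
    if r0 > 0 and c0 > 0:
        s += ii[r0 - 1][c0 - 1]
    return s

ii = build_integral([[1, 2], [3, 4]])
print(box_sum(ii, 1, 1, 1, 1))  # 4 (just the bottom-right pixel)
```

This constant-time rectangle sum is what lets the 9×9 through 51×51 filters run at the same cost per output.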
  • the feature point extractor 119 extracts the feature points by generating an image pyramid using the Hessian determinant calculated by the Hessian calculator 117.
  • The feature point extractor 119 may be formed in plural in correspondence with the number of scale processors 113, and may be implemented as shown in FIG. 4.
  • In this embodiment, a total of four feature point extracting units 119 are implemented. The indices of the six scale processing units 113 are S0, S1, S2, S3, S4, and S5, and the indices of the four feature point extracting units 119 are F0, F1, F2, and F3.
  • Since there are six scale processing units 113, the first integrated value obtaining units 115 and the Hessian calculating units 117 are likewise implemented six times each.
  • Each feature point extractor 119 extracts final feature points based on the feature points output by three of the scale processing units 113.
  • the feature point storage unit 121 stores the feature points extracted by the feature point extractor 119 and is implemented as a FIFO (First In First Out).
  • the second integral value obtaining unit 123 obtains, from the offset buffer 107, an offset indexed to a corresponding group among a plurality of lines constituting the integrated image required for the descriptor generation.
  • the difference value of the next line of the group is obtained from the second integrated image memory 111.
  • The offset and difference value thus obtained are added to calculate the integrated image pixel value, which is output to the rotation calculator 125 and the descriptor calculator 127.
  • The rotation calculation unit 125 calculates a rotation value based on the integrated image pixel values obtained by the second integrated value obtaining unit 123. Based on the coordinates and scale of a feature point read from the feature point storage unit 121, the main direction of the feature point is calculated from the integral values of a specific area. This indicates how much the current feature point is rotated, and is used as a criterion for determining the area of the integral values used when calculating the descriptor.
  • the descriptor calculating unit 127 calculates the descriptor based on the integrated image pixel value obtained by the second integral value obtaining unit 123.
  • the descriptor is assigned to the feature point stored in the feature point storage unit 121.
  • The descriptor uses the integral values of a specific area, sized according to the scale, around the feature point; after Haar-wavelet filtering, it is calculated and expressed as four components: Σdx, Σdy, Σ|dx|, and Σ|dy|.
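In standard SURF (assumed here to match this patent's four-value descriptor entry), each sub-region around the feature point contributes the sums of the Haar responses and of their absolute values:

```python
def subregion_vector(dx_responses, dy_responses):
    # One 4-component descriptor entry per sub-region:
    # (sum dx, sum dy, sum |dx|, sum |dy|), as in standard SURF.
    return (sum(dx_responses),
            sum(dy_responses),
            sum(abs(v) for v in dx_responses),
            sum(abs(v) for v in dy_responses))

print(subregion_vector([1, -2], [3, -1]))  # (-1, 2, 3, 4)
```

With the usual 4×4 grid of sub-regions, this yields the standard 64-dimensional SURF descriptor.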
  • the descriptor storage unit 129 stores descriptors calculated by the descriptor calculator 127.
  • FIG. 5 is a flowchart illustrating a method of generating and obtaining an integrated image according to an exemplary embodiment of the present invention, that is, the integrated image generation and acquisition operation of the SURF hardware device 100 described above.
  • the image fading unit 103 reduces the maximum number of bits to a predetermined bit by applying a fading technique to the entire black-and-white image received (S101).
  • The integrated image generator 105 sequentially accumulates pixel values per group, each group including one or more lines of the faded black-and-white image, calculates the accumulated pixel value of the last row or last column of the group as an offset (S103), and stores it in the offset buffer 107 (S105).
  • The integrated image generation unit 105 calculates the accumulated pixel value of the last row or last column of each line as the difference value of that line (S107), and stores it in the first integrated image memory 109 and the second integrated image memory 111, respectively (S109).
  • The first integrated value obtaining unit 115 obtains, from the offset buffer 107, the offset indexed to the corresponding group among the plurality of lines constituting the integrated image required by the Hessian calculating unit 117 (S111). The difference value of the next line of the group is then obtained from the first integrated image memory 109 (S113).
  • the first integrated value obtainer 115 calculates an integrated image pixel value by adding the offset acquired in step S111 and the difference value obtained in step S113 (S115) and outputs the integrated image pixel value to the Hessian calculator 117.
  • the second integrated value obtaining unit 123 obtains, from the offset buffer 107, an offset indexed to a corresponding group among a plurality of lines constituting the integrated image required for the descriptor generation (S117).
  • the difference value of the next line of the group is obtained from the second integrated image memory 111 (S119).
  • The second integrated value obtaining unit 123 calculates the integrated image pixel value by adding the offset obtained in step S117 and the difference value obtained in step S119 (S121), and outputs it to the rotation calculating unit 125 and the descriptor calculating unit 127.
  • The embodiments of the present invention described above may be implemented not only through the apparatus and method, but also through a program that realizes functions corresponding to the configurations of the embodiments, or through a recording medium on which such a program is recorded.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

A hardware apparatus and a method for generating an integral image are disclosed. A hardware apparatus for calculating the feature points and a descriptor of an image comprises: an image fading unit for fading the entire received black-and-white image to reduce the maximum number of bits to a predefined bit count; an integral image generation unit for sequentially calculating the pixel values of the black-and-white image output by the image fading unit, buffering the pixel values accumulated up to the previous line as an offset, and storing, as a difference value, the pixel values accumulated starting from the first pixel of the next line; and a first integral value acquisition unit for calculating, using the offset and the difference value, the pixel values of a first integral image needed to extract the feature points.
PCT/KR2013/003520 2013-04-24 2013-04-24 Hardware apparatus and method for generating an integral image WO2014175480A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/KR2013/003520 WO2014175480A1 (fr) 2013-04-24 2013-04-24 Hardware apparatus and method for generating an integral image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2013/003520 WO2014175480A1 (fr) 2013-04-24 2013-04-24 Hardware apparatus and method for generating an integral image

Publications (1)

Publication Number Publication Date
WO2014175480A1 true WO2014175480A1 (fr) 2014-10-30

Family

ID=51792037

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2013/003520 WO2014175480A1 (fr) 2013-04-24 2013-04-24 Hardware apparatus and method for generating an integral image

Country Status (1)

Country Link
WO (1) WO2014175480A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007164772A (ja) * 2005-12-14 2007-06-28 Mitsubishi Electric Research Laboratories Inc Computer-implemented method for creating a descriptor for a set of data samples
JP2008210063A (ja) * 2007-02-23 2008-09-11 Hiroshima Univ Image feature extraction device, image retrieval system, video feature extraction device, and query image retrieval system, as well as methods, programs, and computer-readable recording media therefor
JP2009535680A (ja) * 2006-04-28 2009-10-01 Toyota Motor Europe NV Robust interest point detector and descriptor
US20090310872A1 (en) * 2006-08-03 2009-12-17 Mitsubishi Denki Kabushiki Kaisha Sparse integral image descriptors with application to motion analysis
KR20110002043A (ko) * 2008-04-23 2011-01-06 Mitsubishi Denki Kabushiki Kaisha Scale-robust feature-based identifier for image identification


Similar Documents

Publication Publication Date Title
  • WO2018174623A1 (fr) Apparatus and method for image analysis using a virtual three-dimensional deep neural network
  • WO2019190139A1 (fr) Convolution operation device and method
  • WO2015182904A1 (fr) Region-of-interest learning apparatus and method for detecting an object of interest
  • WO2011126328A2 (fr) Apparatus and method for removing noise generated by an image sensor
  • WO2013118955A1 (fr) Depth map correction apparatus and method, and stereoscopic image conversion apparatus and method using the same
  • WO2015160052A1 (fr) Image correction method for a wide-angle lens and device therefor
  • WO2016186236A1 (fr) Color processing system and method for a three-dimensional object
  • EP3714425A1 (fr) Method and apparatus for reconstructing original images from modified images
  • WO2014010820A1 (fr) Method and apparatus for estimating image motion using disparity information of a multiview image
  • WO2010074386A1 (fr) Method for detecting and correcting defective pixels in an image sensor
  • CN108540689B (zh) Image signal processor, application processor, and mobile device
  • WO2016148516A1 (fr) Image compression method and image compression apparatus
  • WO2019112084A1 (fr) Method for removing compression distortion using a CNN
  • WO2014175480A1 (fr) Hardware apparatus and method for generating an integral image
  • WO2017003240A1 (fr) Image conversion device and image conversion method therefor
  • WO2017086522A1 (fr) Method for synthesizing a chroma-key image without a background screen
  • WO2013077521A1 (fr) Preprocessing apparatus in a stereo matching system
  • WO2023210884A1 (fr) Non-local-means-based noise removal device and method
  • WO2019009579A1 (fr) Stereo matching method and apparatus using support-point interpolation
  • WO2022080680A1 (fr) Artificial-intelligence-based image inpainting method and device
  • WO2016021829A1 (fr) Motion recognition method and motion recognition device
  • WO2019225799A1 (fr) Method and device for deleting user information using a deep-learning generative model
  • WO2015102352A1 (fr) Memory management method and apparatus
  • WO2016006901A1 (fr) Method and apparatus for extracting depth information from an image
  • WO2022045519A1 (fr) Optimization model selection method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13883141

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13883141

Country of ref document: EP

Kind code of ref document: A1