JP2011077958A - Dimensional compression device, dimensional compression method, and dimensional compression program - Google Patents

Dimensional compression device, dimensional compression method, and dimensional compression program

Info

Publication number
JP2011077958A
JP2011077958A
Authority
JP
Japan
Prior art keywords
vector
dimensional compression
white noise
pseudo white
matrix
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
JP2009229087A
Other languages
Japanese (ja)
Inventor
Tomoya Sakai
智弥 酒井
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chiba University NUC
Original Assignee
Chiba University NUC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chiba University NUC filed Critical Chiba University NUC
Priority to JP2009229087A priority Critical patent/JP2011077958A/en
Publication of JP2011077958A publication Critical patent/JP2011077958A/en
Withdrawn legal-status Critical Current

Landscapes

  • Compression, Expansion, Code Conversion, And Decoders (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a dimension compression apparatus, a dimension compression method, and a dimension compression program that reduce computational cost and suppress the increase in computation time while maintaining high accuracy of approximate calculation.

SOLUTION: The dimension compression apparatus, which employs a circulant matrix or an inverse circulant matrix of a random sequence as the random matrix, includes a pseudo white noise vector processing unit, a data sample vector processing unit, and a multiplication unit. Preferably, the pseudo white noise vector processing unit creates a pseudo white noise vector n, performs preprocessing on the pseudo white noise vector n to create a vector r, and applies a fast Fourier transform to the vector r to create a vector s.

COPYRIGHT: (C)2011,JPO&INPIT

Description

The present invention relates to a dimension compression apparatus, a dimension compression method, and a dimension compression program.

Many large-scale databases accumulate a large number of samples (for example, documents and images). To search for and classify similar samples, each sample is often represented by a set of numerical values called attributes (for example, the occurrence frequencies of words in a document, or the feature values, pixel values, and additional information of an image). Each sample is represented by a vector whose components are its attributes, and similarity calculations and statistical processing using vector inner products, norms, and the like are required.

In practice, however, the dimensionality of such vectors (for example, the number of words in a dictionary or the number of pixels in an image) is very high, and the degradation of search efficiency and the like caused by high dimensionality is known as the curse of dimensionality. Computation is therefore made more efficient by lowering the dimensionality with a technique called dimension compression or dimension reduction (referred to simply as "dimension compression" in this specification).

Random projection is known as one technique for dimension compression. Here, "random projection" means linearly projecting a high-dimensional vector to a low-dimensional vector using a matrix with random elements. Known random projection techniques are described, for example, in Non-Patent Documents 1 and 2 below.

Non-Patent Document 1: S. S. Vempala, The Random Projection Method, Volume 65 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, American Mathematical Society, 2004
Non-Patent Document 2: D. Achlioptas, Database-friendly random projections: Johnson-Lindenstrauss with binary coins, Journal of Computer and System Sciences, 66:671-687, 2003
Non-Patent Document 3: Tatsuya Watanabe, Eiji Takimoto, Akira Maruoka, Dimensional Compression by Random Projection, IEICE Technical Report COMP2001-92, pp. 73-79, IEICE, 2002
Non-Patent Document 4: Hirohito Ouchi, Takao Miura, Isamu Shioya, Searching News Streams Using Random Projection, Journal of the Database Society of Japan (DBSJ Letters), Vol. 3, No. 3, pp. 1-4, 2004

In the random projection exemplified by the above non-patent documents, however, a random matrix whose size is the product of the dimensionalities before and after projection must be generated and stored, so the computational cost required to project vectors with the random matrix becomes large. For example, with 1,000,000 input dimensions and 8,000 output dimensions, about 32 GB of storage is required even in single precision, which is impractical. In addition, the computation time required for random projection is proportional to the product of the dimensionalities before and after projection, so raising the output dimensionality to improve the accuracy of approximate calculations increases the computation time.
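The storage figure quoted above can be checked with a few lines; the dimensionalities are the ones from the example, and single precision is taken as 4 bytes per matrix element:

```python
D_in, D_out = 10**6, 8000        # dimensions before and after projection
bytes_needed = D_in * D_out * 4  # one single-precision value per matrix element
print(bytes_needed)              # 32,000,000,000 bytes, i.e. about 32 GB
```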

In view of the above problems, an object of the present invention is to provide a dimension compression apparatus, a dimension compression method, and a dimension compression program that maintain high accuracy of approximate calculation while lowering the computational cost and suppressing the increase in computation time.

As a result of intensive study of the above problems, the present inventors found that the problems can be solved by adopting a circulant matrix or an inverse circulant matrix of a random sequence as the random matrix, and thereby completed the present invention.

That is, a dimension compression apparatus according to one aspect of the present invention employs a circulant matrix or an inverse circulant matrix of a random sequence as the random matrix.

A dimension compression apparatus according to another aspect of the present invention includes a pseudo white noise vector processing unit, a data sample vector processing unit, and a multiplication unit.

As described above, the present invention provides a dimension compression apparatus, a dimension compression method, and a dimension compression program that maintain high accuracy of approximate calculation while lowering the computational cost and suppressing the increase in computation time.

FIG. 1 is a diagram showing the functional blocks according to the embodiment.
FIG. 2 is a diagram showing the processing flow in the pseudo white noise vector processing unit.
FIG. 3 is a diagram showing the preprocessing flow in the pseudo white noise vector processing unit.
FIG. 4 is a diagram showing the flow of data sample vector processing.
FIG. 5 is a diagram showing the processing of the multiplication unit 4.
FIG. 6 is a diagram showing the vector r, the circulant matrix R, the vector b, and the diagonal matrix B.

Embodiments of the present invention are described below with reference to the drawings. The present invention can, however, be implemented in many different modes and is not limited to the embodiments shown below.

(Embodiment 1)
FIG. 1 is a diagram showing the functional blocks of the dimension compression apparatus according to this embodiment. As shown in the figure, the dimension compression apparatus 1 according to this embodiment includes a pseudo white noise vector processing unit 2, a data sample vector processing unit 3, and a multiplication unit 4. The dimension compression apparatus 1 can be realized in various forms; for example, it can be realized by storing, on a recording medium such as the hard disk of a computer, a computer program that causes the computer to function as each of the above units, and executing that program. In particular, it can also be realized by storing the program on the recording medium of a server on the Internet and executing it there.

The pseudo white noise vector processing unit 2 creates and processes a pseudo white noise vector. Specifically, though without limitation, it creates a D-dimensional pseudo white noise vector n, where D is a positive number (S21), performs preprocessing on the pseudo white noise vector n to create a vector r (S22), and applies a fast Fourier transform to the vector r to create a vector s (S23). Here, a "pseudo white noise vector" is a vector whose components have zero expected value and zero cross-correlation and whose autocorrelation is a constant. The vector s obtained by the fast Fourier transform is recorded in a storage area such as a hard disk. Although the pseudo white noise vector processing unit 2 of this embodiment uses the widely deployed fast Fourier transform, it is also possible to use the fast Walsh-Hadamard transform, for which a dedicated arithmetic unit can be implemented simply because it requires only integer arithmetic. Moreover, even when a non-fast version of either transform is used, the computation time increases but the advantage remains that random projection can be performed without allocating a huge matrix. FIG. 2 shows the processing flow of the pseudo white noise vector processing unit.

Here, the preprocessing (S22) is processing applied to the pseudo white noise vector in advance in order to improve the accuracy of approximate calculations using vectors compressed by this embodiment. It can be omitted depending on the application of the present invention, but it generally preferably includes normalization; more preferably, as shown in FIG. 3 for example, it includes mean removal (S221), a first normalization (S222), biasing (S223), and a second normalization (S224).

The mean removal (S221) removes the mean from each component of the vector so that the mean of the components becomes 0. The first normalization (S222) normalizes the vector so that its squared norm becomes 1. The biasing (S223) adds the reciprocal of the square root of D to each component of the vector. The second normalization (S224) normalizes the vector so that its squared norm becomes D/K, where K denotes the dimensionality after projection.
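As a rough sketch, not the patented implementation itself, steps S21 through S23 might look as follows in numpy. The random ±1 noise source and the values of D and K are illustrative assumptions; the specification allows other admissible pseudo white noise sequences.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 1024, 64  # illustrative dimensionalities before and after projection

# S21: pseudo white noise vector n (here: random +/-1 signs, one admissible choice)
n = rng.choice([-1.0, 1.0], size=D)

# S22: preprocessing of n into r
r = n - n.mean()                         # S221: mean removal
r /= np.linalg.norm(r)                   # S222: first normalization (squared norm = 1)
r += 1.0 / np.sqrt(D)                    # S223: bias by 1/sqrt(D)
r *= np.sqrt(D / K) / np.linalg.norm(r)  # S224: second normalization (squared norm = D/K)

# S23: fast Fourier transform of r
s = np.fft.fft(r)
```

After S224 the squared norm of r is D/K by construction, which can be verified directly with `np.dot(r, r)`.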

In this embodiment, the data sample vector processing unit 3 processes data samples: it multiplies the input vector x representing a data sample by a spreading code b (S31), applies a fast Fourier transform (S32), and then applies complex conjugation (S33) to create a vector z. Here, a "spreading code" is a vector of the same length as the input vector x whose elements are random values of ±1. FIG. 4 shows the processing of the data sample vector processing unit 3.

In this embodiment, the multiplication unit 4 multiplies the vector s created by the pseudo white noise vector processing unit 2 and the vector z created by the data sample vector processing unit 3 component by component (S41) and applies a fast inverse Fourier transform (S42) to create an output vector y. By arbitrarily extracting K components from this vector y, a K-dimensional vector obtained by randomly projecting the vector x can be obtained. FIG. 5 shows the processing of the multiplication unit 4.
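Steps S31 through S33 and S41 through S42 might be sketched as below, again as an illustrative assumption rather than the patented implementation; the pseudo white noise preprocessing described earlier is repeated in condensed form so the sketch is self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)
D, K = 1024, 64

# pseudo white noise vector processing unit 2 (S21-S23, condensed)
n = rng.choice([-1.0, 1.0], size=D)
r = n - n.mean()
r /= np.linalg.norm(r)
r += 1.0 / np.sqrt(D)
r *= np.sqrt(D / K) / np.linalg.norm(r)
s = np.fft.fft(r)

# data sample vector processing unit 3
x = rng.standard_normal(D)           # input vector representing a data sample
b = rng.choice([-1.0, 1.0], size=D)  # spreading code: random +/-1, same length as x
z = np.conj(np.fft.fft(b * x))       # S31 multiply, S32 FFT, S33 complex conjugate

# multiplication unit 4
y = np.fft.ifft(s * z).real          # S41 componentwise product, S42 inverse FFT
y_K = y[:K]                          # extract any K components of y
```

Because r and b*x are both real, the inverse FFT of s*z is a (real-valued) circular cross-correlation, so taking `.real` only discards numerical round-off.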

As shown in FIG. 6, let R be the matrix (circulant matrix) constructed by cyclically shifting the D-dimensional vector r to the left, and let B be the diagonal matrix whose elements are the D-dimensional vector b. Then the output vector y is equal to the D-dimensional vector obtained by projecting the input vector x with the matrices B and R. Although this embodiment is described using a matrix constructed by left cyclic shifts, omitting the complex conjugation in the data sample vector processing unit 3 yields a random projection by a right-shifted circulant matrix; the direction of the shift may be either right or left.

Owing to the properties of white noise, in this dimension compression apparatus the preprocessed vector r and its cyclic shifts are uncorrelated. Therefore, arbitrarily extracting K row vectors from the matrix R yields a random matrix of size K×D. By this principle, the dimension compression apparatus according to this embodiment realizes dimension compression with properties similar to conventional random projection. In practice, pseudorandom numbers, uncorrelated codes {-1, +1}, sequences that can be regarded as independent and identically distributed, and the like can be used as the components of the vector n. Any single row of a random matrix used in the prior art can also be used as the vector n.
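The equivalence between the FFT-based output and projection by the matrices B and R can be checked numerically. The following sketch, under the same illustrative assumptions as before, builds the left-circulant matrix R explicitly (exactly what the fast method avoids storing) and compares the two paths:

```python
import numpy as np

rng = np.random.default_rng(2)
D, K = 64, 8

# preprocessed pseudo white noise vector r (S21-S22, condensed)
n = rng.choice([-1.0, 1.0], size=D)
r = n - n.mean()
r /= np.linalg.norm(r)
r += 1.0 / np.sqrt(D)
r *= np.sqrt(D / K) / np.linalg.norm(r)

x = rng.standard_normal(D)           # data sample vector
b = rng.choice([-1.0, 1.0], size=D)  # spreading code

# fast path: FFT, conjugation, componentwise product, inverse FFT
y_fast = np.fft.ifft(np.fft.fft(r) * np.conj(np.fft.fft(b * x))).real

# explicit path: left-circulant matrix R (row k is r cyclically shifted left by k)
# applied to B x, where B is the diagonal matrix with b on the diagonal
R = np.array([np.roll(r, -k) for k in range(D)])
y_direct = R @ (np.diag(b) @ x)

assert np.allclose(y_fast, y_direct)

# any K rows of R form a K x D random matrix, as described above
R_K = R[:K]
```

The `assert` passing reflects the circular cross-correlation theorem: the inverse FFT of F(r) times the conjugate of F(Bx) equals R(Bx) for the left-circulant R.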

As described above, the dimension compression apparatus according to this embodiment maintains high accuracy of approximate calculation without greatly sacrificing the approximation accuracy mathematically guaranteed for random projection, and greatly reduces the storage required for the random matrix. It thereby makes it possible to apply random projection to high-dimensional data at scales that were previously almost impossible, providing a dimension compression apparatus, a dimension compression method, and a dimension compression program with lower computational cost and a suppressed increase in computation time while maintaining high accuracy of approximate calculation.

The present dimension compression apparatus, dimension compression method, and dimension compression program can be used for searching for and classifying similar samples in databases that accumulate large numbers of samples, and thus have industrial applicability.

DESCRIPTION OF SYMBOLS: 1 — dimension compression apparatus; 2 — pseudo white noise vector processing unit; 3 — data sample vector processing unit; 4 — multiplication unit

Claims (3)

A dimension compression apparatus that employs a circulant matrix or an inverse circulant matrix of a random sequence as the random matrix.

A dimension compression apparatus comprising:
a pseudo white noise vector processing unit;
a data sample vector processing unit; and
a multiplication unit.

The dimension compression apparatus according to claim 2, wherein the pseudo white noise vector processing unit creates a pseudo white noise vector n, performs preprocessing on the pseudo white noise vector n to create a vector r, and applies a fast Fourier transform to the vector r to create a vector s.
JP2009229087A 2009-09-30 2009-09-30 Dimensional compression device, dimensional compression method, and dimensional compression program Withdrawn JP2011077958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009229087A JP2011077958A (en) 2009-09-30 2009-09-30 Dimensional compression device, dimensional compression method, and dimensional compression program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009229087A JP2011077958A (en) 2009-09-30 2009-09-30 Dimensional compression device, dimensional compression method, and dimensional compression program

Publications (1)

Publication Number Publication Date
JP2011077958A true JP2011077958A (en) 2011-04-14

Family

ID=44021412

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009229087A Withdrawn JP2011077958A (en) 2009-09-30 2009-09-30 Dimensional compression device, dimensional compression method, and dimensional compression program

Country Status (1)

Country Link
JP (1) JP2011077958A (en)


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015132914A1 (en) * 2014-03-05 2015-09-11 三菱電機株式会社 Data compression apparatus and data compression method
CN106063133A (en) * 2014-03-05 2016-10-26 三菱电机株式会社 Data compression apparatus and data compression method
JPWO2015132914A1 (en) * 2014-03-05 2017-03-30 三菱電機株式会社 Data compression apparatus and data compression method
US9735803B2 (en) 2014-03-05 2017-08-15 Mitsubishi Electric Corporation Data compression device and data compression method
CN106063133B (en) * 2014-03-05 2019-06-14 三菱电机株式会社 Data compression device and data compression method
JP2018522312A (en) * 2015-06-23 2018-08-09 ポリテクニコ ディ トリノ Method and device for image search
JP2018116693A (en) * 2016-12-19 2018-07-26 三菱電機株式会社 System and method to obtain model of predicted inference on operation and non-temporary computer readable storage medium for them
CN108833919A (en) * 2018-06-29 2018-11-16 东北大学 Colored single pixel imaging method and system based on random rotation matrix
CN108833919B (en) * 2018-06-29 2020-02-14 东北大学 Color single-pixel imaging method and system based on random circulant matrix

Similar Documents

Publication Publication Date Title
Lau et al. Large separable kernel attention: Rethinking the large kernel attention design in cnn
Pan et al. Fast vision transformers with hilo attention
Yao et al. Wave-vit: Unifying wavelet and transformers for visual representation learning
Mishra et al. Accelerating sparse deep neural networks
Bethge et al. Meliusnet: Can binary neural networks achieve mobilenet-level accuracy?
Dziedzic et al. Band-limited training and inference for convolutional neural networks
Rahtu et al. Affine invariant pattern recognition using multiscale autoconvolution
EP2638701B1 (en) Vector transformation for indexing, similarity search and classification
Shekhar et al. Analysis sparse coding models for image-based classification
Bui et al. Rosteals: Robust steganography using autoencoder latent space
US9349072B2 (en) Local feature based image compression
Wang et al. Beyond filters: Compact feature map for portable deep model
Han et al. Efficient Markov feature extraction method for image splicing detection using maximization and threshold expansion
Kim et al. Relational self-attention: What's missing in attention for video understanding
JP2011077958A (en) Dimensional compression device, dimensional compression method, and dimensional compression program
Su et al. Lightweight pixel difference networks for efficient visual representation learning
Zhang et al. High‐Order Total Bounded Variation Model and Its Fast Algorithm for Poissonian Image Restoration
Soltani et al. On the information of feature maps and pruning of deep neural networks
Patel et al. An improved image compression technique using huffman coding and FFT
Kwon et al. Lightweight structure-aware attention for visual understanding
Xu et al. A slimmer and deeper approach to network structures for image denoising and dehazing
CN114372169A (en) Method, device and storage medium for searching homologous videos
Valarmathi et al. Iteration-free fractal image compression using Pearson’s correlation coefficient-based classification
Passov et al. Gator: customizable channel pruning of neural networks with gating
Verde et al. Phylogenetic analysis of multimedia codec software

Legal Events

Date Code Title Description
A300 Withdrawal of application because of no request for examination

Free format text: JAPANESE INTERMEDIATE CODE: A300

Effective date: 20121204