JP7379524B2 - Method and apparatus for neural network model compression/decompression - Google Patents
Method and apparatus for neural network model compression/decompression
- Publication number
- JP7379524B2 (application JP2021555874A)
- Authority
- JP
- Japan
- Prior art keywords
- idx
- layer
- depth
- size
- neural network
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000000034 method Methods 0.000 title claims description 62
- 238000003062 neural network model Methods 0.000 title description 53
- 230000006835 compression Effects 0.000 title description 9
- 238000007906 compression Methods 0.000 title description 9
- 230000006837 decompression Effects 0.000 title description 9
- 238000013139 quantization Methods 0.000 claims description 73
- 238000013528 artificial neural network Methods 0.000 claims description 40
- 238000012545 processing Methods 0.000 claims description 39
- 238000005192 partition Methods 0.000 claims description 16
- 230000008859 change Effects 0.000 claims description 9
- 230000004044 response Effects 0.000 claims description 6
- 210000002569 neuron Anatomy 0.000 description 12
- 239000011159 matrix material Substances 0.000 description 11
- 230000008569 process Effects 0.000 description 11
- 238000000638 solvent extraction Methods 0.000 description 11
- 238000010586 diagram Methods 0.000 description 7
- 230000002093 peripheral effect Effects 0.000 description 6
- 230000000007 visual effect Effects 0.000 description 6
- 238000007667 floating Methods 0.000 description 5
- 101100150621 Arabidopsis thaliana GEBP gene Proteins 0.000 description 4
- 238000013459 approach Methods 0.000 description 4
- 238000004458 analytical method Methods 0.000 description 3
- 238000003491 array Methods 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 239000007787 solid Substances 0.000 description 3
- 230000003044 adaptive effect Effects 0.000 description 2
- 230000008901 benefit Effects 0.000 description 2
- 210000004556 brain Anatomy 0.000 description 2
- 230000001413 cellular effect Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 238000013507 mapping Methods 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 230000003287 optical effect Effects 0.000 description 2
- 238000005457 optimization Methods 0.000 description 2
- 230000011218 segmentation Effects 0.000 description 2
- 230000004913 activation Effects 0.000 description 1
- 230000004075 alteration Effects 0.000 description 1
- 230000006399 behavior Effects 0.000 description 1
- 230000005540 biological transmission Effects 0.000 description 1
- 238000007405 data analysis Methods 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 238000001514 detection method Methods 0.000 description 1
- 239000011521 glass Substances 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 239000004973 liquid crystal related substance Substances 0.000 description 1
- 230000007774 longterm Effects 0.000 description 1
- 230000007246 mechanism Effects 0.000 description 1
- 238000010295 mobile communication Methods 0.000 description 1
- 239000000779 smoke Substances 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 238000006467 substitution reaction Methods 0.000 description 1
- 210000000225 synapse Anatomy 0.000 description 1
- 210000003813 thumb Anatomy 0.000 description 1
- 238000012549 training Methods 0.000 description 1
- 230000001052 transient effect Effects 0.000 description 1
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/60—General implementation details not specific to a particular type of compression
- H03M7/6005—Decoder aspects
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/3059—Digital compression and data reduction techniques where the original information is represented by a subset or similar information, e.g. lossy compression
- H03M7/3064—Segmenting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03M—CODING; DECODING; CODE CONVERSION IN GENERAL
- H03M7/00—Conversion of a code where information is represented by a given sequence or number of digits to a code where the same, similar or subset of information is represented by a different sequence or number of digits
- H03M7/30—Compression; Expansion; Suppression of unnecessary data, e.g. redundancy reduction
- H03M7/40—Conversion to or from variable length codes, e.g. Shannon-Fano code, Huffman code, Morse code
- H03M7/4031—Fixed length to variable length coding
- H03M7/4037—Prefix coding
- H03M7/4043—Adaptive prefix coding
- H03M7/4068—Parameterized codes
- H03M7/4075—Golomb codes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/124—Quantisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/129—Scanning of coding units, e.g. zig-zag scan of transform coefficients or flexible macroblock ordering [FMO]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/184—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being bits, e.g. of the compressed video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/42—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
- H04N19/423—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation characterised by memory arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/44—Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/11—Selection of coding mode or of prediction mode among a plurality of spatial predictive coding modes
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/13—Adaptive entropy coding, e.g. adaptive variable length coding [AVLC] or context adaptive binary arithmetic coding [CABAC]
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- Computational Linguistics (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Neurology (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
Description
max_cu3d_height=max_ctu3d_height>>depth (Eq. 1)
max_cu3d_width=max_ctu3d_width>>depth (Eq. 2)
max_cu3d_size=max_ctu3d_size>>depth (Eq. 3)
max_ctu_3d_size=(max_ctu3d_idx==0)?64:(max_ctu3d_idx==1)?32:(max_ctu3d_idx==2)?16:8 (Eq. 4)
Accordingly, when max_ctu3d_idx is zero, max_ctu_3d_size is set to 64; when max_ctu3d_idx is 1, max_ctu_3d_size is set to 32; when max_ctu3d_idx is 2, max_ctu_3d_size is set to 16; and when max_ctu3d_idx is none of 0, 1, and 2, max_ctu_3d_size may be set to 8.
sublayer_bitdepth=((ndim==1)?max_1dim_bitdepth:max_ndim_bitdepth)-sublayer_delta_bitdepth (Eq. 5)
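As an informal illustration (not part of the syntax itself), the following Python sketch evaluates Eq. 1 through Eq. 5; the function names are illustrative only, and max_ctu3d_size_from_idx implements the index-to-size mapping of Eq. 4 described above.

def max_ctu3d_size_from_idx(max_ctu3d_idx):
    # Eq. 4: index 0 -> 64, 1 -> 32, 2 -> 16, any other value -> 8
    return {0: 64, 1: 32, 2: 16}.get(max_ctu3d_idx, 8)

def cu3d_dims(max_ctu3d_height, max_ctu3d_width, max_ctu3d_size, depth):
    # Eqs. 1-3: every additional partition depth halves height, width and size
    return (max_ctu3d_height >> depth,
            max_ctu3d_width >> depth,
            max_ctu3d_size >> depth)

def sublayer_bitdepth(ndim, max_1dim_bitdepth, max_ndim_bitdepth, sublayer_delta_bitdepth):
    # Eq. 5: the header-level bit depth minus the per-sublayer delta
    base = max_1dim_bitdepth if ndim == 1 else max_ndim_bitdepth
    return base - sublayer_delta_bitdepth

# Example: with max_ctu3d_idx == 0 the CTU3D is 64x64, so a CU3D at depth 2 is 16x16.
assert cu3d_dims(64, 64, max_ctu3d_size_from_idx(0), depth=2) == (16, 16, 16)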
nnr(){
……
nnr_header()
for(layer_idx=0;layer_idx<total_layer;++layer_idx){
layer_header()
if(enable_max_ctu3d_size)
max_ctu3d_height=max_ctu3d_width=2**(bitdepth(int(max_ctu3d_size*max_ctu3d_size/R/S))-1)
sublayer_idx=0
do{
if(sublayer_ndim[sublayer_idx]==1){
array1d(sublayer_idx)
++sublayer_idx
}else{
if(sublayer_type[sublayer_idx]=="conv/fc"&&sublayer_type[sublayer_idx+1]=="bias")
array1d(sublayer_idx+1)
if(layer_scan_order==SCAN_CK){
for(c=0;c<C;c+=max_ctu3d_height){
for(k=0;k<K;k+=max_ctu3d_width){
ctu3d_height=min(max_ctu3d_height,C-c);
ctu3d_width=min(max_ctu3d_width,K-k);
last_ctu3d_flag=(max_ctu3d_height>=C-c&&max_ctu3d_width>=K-k)?1:0
ctu3d(c,k,ctu3d_height,ctu3d_width)
end_of_layer(last_ctu3d_flag)
}
}
}else if(layer_scan_order==SCAN_KC){
for(k=0;k<K;k+=max_ctu3d_width){
for(c=0;c<C;c+=max_ctu3d_height){
ctu3d_height=min(max_ctu3d_height,C-c);
ctu3d_width=min(max_ctu3d_width,K-k);
last_ctu3d_flag=(max_ctu3d_height>=C-c&&max_ctu3d_width>=K-k)?1:0
ctu3d(c,k,ctu3d_height,ctu3d_width)
end_of_layer(last_ctu3d_flag)
}
}
}
sublayer_idx+=(sublayer_type[sublayer_idx]=="conv/fc"&&sublayer_type[sublayer_idx+1]=="bias")?2:1
}while(sublayer_idx<total_sublayer)
}
}
……
}
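For illustration only, the following Python sketch mirrors the CTU3D tiling loops of the nnr() syntax above: each [C, K] plane of a layer is partitioned into blocks of at most max_ctu3d_height x max_ctu3d_width and visited in one of the two scan orders. The callback process_ctu3d stands in for ctu3d()/end_of_layer(), and the kernel-size-dependent CTU3D resizing is omitted; the names are illustrative, not normative.

def scan_ctu3d(C, K, max_ctu3d_height, max_ctu3d_width, layer_scan_order, process_ctu3d):
    # Visit the CTU3D blocks of one [C, K] weight plane in the given scan order.
    # process_ctu3d(c, k, height, width, last_ctu3d_flag) stands in for ctu3d()
    # and end_of_layer() in the nnr() syntax.
    def visit(c, k):
        ctu3d_height = min(max_ctu3d_height, C - c)
        ctu3d_width = min(max_ctu3d_width, K - k)
        last_ctu3d_flag = (max_ctu3d_height >= C - c) and (max_ctu3d_width >= K - k)
        process_ctu3d(c, k, ctu3d_height, ctu3d_width, last_ctu3d_flag)

    if layer_scan_order == "SCAN_CK":      # C outer, K inner
        for c in range(0, C, max_ctu3d_height):
            for k in range(0, K, max_ctu3d_width):
                visit(c, k)
    else:                                  # "SCAN_KC": K outer, C inner
        for k in range(0, K, max_ctu3d_width):
            for c in range(0, C, max_ctu3d_height):
                visit(c, k)

For example, with C=130, K=70 and a 64x64 CTU3D, the plane is tiled into 3x2 blocks; the last row of blocks is 2 rows tall, the last column 6 columns wide, and only the final block visited has last_ctu3d_flag set.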
nnr_header(){
nnr_header_size optional
……
total_layer
enable_zdep_reorder
enable_max_ctu3d_size
max_ctu3d_idx
max_ndim_bitdepth
max_1dim_bitdepth
……
}
layer_header(){
layer_size
……
total_sublayer
for(sublayer_idx=0;sublayer_idx<total_sublayer;++sublayer_idx){
sublayer_size optional
sublayer_ndim optional
sublayer_shape[ndim] optional
sublayer_scan_order
sublayer_sat_maxw
if(sublayer_ndim!=1) optional
sublayer_delta_bitdepth
}
……
}
ctu3d(){
……
ctu3d_header()
cu3d(0,0,0)
……
}
ctu3d_header(){
……
ctu3d_map_mode_flag
if(!ctu3d_map_mode_flag)
map_mode
enable_start_depth
zdep_array()
……
}
zdep_array(){
……
reorder_flag=false
if(enable_zdep_reorder&&zdep_size>2)
reorder_flag
if(!reorder_flag){
for(n=0;n<zdep_size;++n)
zdep_array[n]=n;
return;
}
queue[0]=-1;
for(n=1;n<zdep_size-1;++n){
signalled_flag
queue[n]=(signalled_flag)?1:(queue[n-1]>0)?-2:-1;
}
queue[zdep_size-1]=(queue[zdep_size-2]>0)?-2:-1;
for(n=0;n<zdep_size;++n){
zdep_array[n]=-1;
if(queue[n]==1){
qval_minus_one
queue[n]=qval_minus_one+1;
}
}
qidx=0,zidx=-1;
do{
while(zdep_array[++zidx]!=-1);
if(queue[qidx]==-1){
zdep_array[zidx]=zidx;
}else{
first_zidx=zidx;
while(queue[qidx]!=-2){
zdep_array[zidx]=queue[qidx];
zidx=queue[qidx];
++qidx;
}
zdep_array[zidx]=first_zidx;
zidx=first_zidx;
}
++qidx;
}while(qidx<zdep_size);
……
}
cu3d(depth,y_idx,x_idx){
……
if(cu3d does not exist)
return
if(depth<ctu3d_depth-1){
split_flag
if(split_flag){
cu3d(depth+1,(y_idx<<1),(x_idx<<1))
cu3d(depth+1,(y_idx<<1)+1,(x_idx<<1))
cu3d(depth+1,(y_idx<<1),(x_idx<<1)+1)
cu3d(depth+1,(y_idx<<1)+1,(x_idx<<1)+1)
return
}
}
predicted_codebook()
signalled_codebook()
if(ctu3d_map_mode_flag)
map_mode
start_depth_delta=0
if(enable_start_depth)
start_depth_delta
start_depth=total_depth-1-start_depth_delta
if(map_mode==0){
uni_mode
if(uni_mode)
unitree3d(start_depth,0,0,0,0,false)
else
octree3d(start_depth,0,0,0,0,false)
}else if(map_mode==1)
tagtree3d(start_depth,0,0,0,0,false)
escape()
……
}
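As an informal sketch (names illustrative rather than normative), the recursive quadtree walk of cu3d() can be summarized in Python as follows. read_split_flag() stands in for the entropy-decoded split_flag, decode_leaf() for the codebook and map parsing done in a leaf CU3D, and the existence check for partial CU3Ds at the CTU3D boundary is omitted.

def decode_cu3d_tree(depth, y_idx, x_idx, ctu3d_depth, read_split_flag, decode_leaf):
    # Walk the CU3D quadtree of one CTU3D; stop splitting at ctu3d_depth-1 or
    # when split_flag is 0, then hand the leaf over to decode_leaf().
    if depth < ctu3d_depth - 1 and read_split_flag():
        # child order follows the cu3d() syntax: (y,x), (y+1,x), (y,x+1), (y+1,x+1)
        for dy, dx in ((0, 0), (1, 0), (0, 1), (1, 1)):
            decode_cu3d_tree(depth + 1, (y_idx << 1) + dy, (x_idx << 1) + dx,
                             ctu3d_depth, read_split_flag, decode_leaf)
        return
    decode_leaf(depth, y_idx, x_idx)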
predicted_codebook(){
……
abs_predicted_diff
if(abs_predicted_diff)
sign
predicted_size=(sign?-int(abs_predicted_diff):abs_predicted_diff)+prev_predicted_size
for(p=0,n=0;n<max_predictor_size;++n){
predicted_flag
if(predicted_flag){
predicted[p]=n
codebook[n]=predictor[predicted[p++]]
}
if(p==predicted_size)
break
}
……
}
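Informally, predicted_codebook() signals how many codebook entries are taken from a predictor list (predicted_size, coded as a signed difference against the previously used size) and then one flag per predictor entry selecting which entries are copied. A minimal Python sketch over already-parsed symbols, with illustrative names only:

def build_predicted_codebook(predictor, predicted_flags,
                             abs_predicted_diff, sign, prev_predicted_size):
    # predicted_flags[n] mirrors the parsed predicted_flag for predictor entry n;
    # abs_predicted_diff/sign give the signed change of predicted_size relative
    # to the previously decoded predicted_size.
    predicted_size = (-abs_predicted_diff if sign else abs_predicted_diff) + prev_predicted_size
    codebook = []
    for n, flag in enumerate(predicted_flags):
        if len(codebook) == predicted_size:
            break
        if flag:
            codebook.append(predictor[n])
    return codebook, predicted_size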
signalled_codebook(){
……
signalled_size=0
if(predicted_size<max_codebook_size)
signalled_size
codebook_size=predicted_size+signalled_size
prev=(predicted_size)?abs(codebook[predicted_size-1]):0
for(n=predicted_size;n<codebook_size;n++){
delta=exist=0
if(n>=2)
for(m=0;m<n-1;m++)
if(abs_codebook[m]==abs_codebook[n-1])
exist=1
if(exist)
nzflag_delta=1
else
nzflag_delta
if(nzflag_delta){
sign_delta
abs_delta
delta=(sign_delta?-int(abs_delta):abs_delta)
}
abs_codebook[n]=delta+prev
prev=abs_codebook[n]
}
for(n=predicted_size;n<codebook_size;n++){
sign
codebook[n]=(sign?-int(abs_codebook[n]):abs_codebook[n])
}
……
}
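signalled_codebook() then appends further entries whose absolute values are coded as deltas against the previous absolute value, with the signs sent in a second pass. A minimal reconstruction sketch over already-parsed deltas and sign bits (illustrative names; the inference of nzflag_delta for repeated absolute values is not modeled here):

def build_signalled_codebook(codebook, deltas, signs):
    # Extend `codebook` (the predicted prefix) with len(deltas) signalled entries.
    # deltas[i] is the decoded signed delta for entry predicted_size+i,
    # signs[i] is True when that entry is negative.
    prev = abs(codebook[-1]) if codebook else 0
    abs_vals = []
    for d in deltas:
        prev = d + prev
        abs_vals.append(prev)
    codebook.extend(-v if s else v for v, s in zip(abs_vals, signs))
    return codebook

For example, build_signalled_codebook([5], deltas=[2, -1], signs=[False, True]) yields [5, 7, -6]: absolute values 7 and 6 are chained from the previous absolute value 5, then the second entry is negated.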
unitree3d(start_depth,depth,z_idx,y_idx,x_idx,skip){
……
zs_idx=(depth==total_depth-1)?zdep_array[z_idx]:z_idx
if(depth<total_depth-1){
nzflag=utree[depth][zs_idx][y_idx][x_idx]=0
if(depth>=start_depth){
if(!skip){
nzflag
utree[depth][zs_idx][y_idx][x_idx]=nzflag
if(!nzflag){
map_nzflag
if(map_nzflag){
if(codebook_size){
cmap_val
map_val=cmap_val
}else{
qmap_val
map_val=qmap_val
}
}
}
}
}
next_z_idx=(z_idx<<1)
next_y_idx=(y_idx<<1)
next_x_idx=(x_idx<<1)
bskip=(depth>=start_depth)?!nzflag:false;
if(location[next_z_idx][next_y_idx][next_x_idx]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx,bskip)
if(location[next_z_idx][next_y_idx][next_x_idx+1]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx+1,bskip)
if(location[next_z_idx][next_y_idx+1][next_x_idx]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx,bskip)
if(location[next_z_idx][next_y_idx+1][next_x_idx+1]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx+1,bskip)
if(location [next_z_idx+1][next_y_idx][next_x_idx]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx,bskip)
if(location[next_z_idx+1][next_y_idx][next_x_idx+1]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx+1,bskip)
if(location[next_z_idx+1][next_y_idx+1][next_x_idx]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx,bskip)
if(location[next_z_idx+1][next_y_idx+1][next_x_idx+1]exist in next depth)
unitree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx+1,bskip)
return
}
if(start_depth==total_depth-1||utree[depth-1][z_idx>>1][y_idx>>1][x_idx>>1]){
map_nzflag
if(map_nzflag){
if(codebook_size){
index
map[zs_idx][y_idx][x_idx]=index
}else{
sign
abs_q
map[zs_idx][y_idx][x_idx]=(sign?-int(abs_q):abs_q)
}
}
}else{
sign=0
if(!codebook_size&&map_val)
map_sign
map[zs_idx][y_idx][x_idx]=(sign?-int(map_val):map_val)
}
……
}
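Reading the unitree3d() syntax, the flag utree[depth][...] appears to indicate whether the values under a node are non-uniform: a node whose flag is zero signals its common value once (map_val) and its descendants are skipped. As an encoder-side illustration of that interpretation (an assumption drawn from the syntax, not a normative definition), the sketch below derives such per-node flags bottom-up from a 3D array of absolute quantized values or codebook indices, using numpy; per-leaf signs are handled separately in the syntax and are ignored here.

import numpy as np

def pool2(a, fn):
    # 2x2x2 pooling with edge padding so odd-sized dims still form full nodes.
    a = np.pad(a, [(0, s % 2) for s in a.shape], mode="edge")
    z, y, x = a.shape
    return fn(a.reshape(z // 2, 2, y // 2, 2, x // 2, 2), axis=(1, 3, 5))

def unitree_nzflags(q):
    # Per-depth 3D arrays of flags for the internal tree nodes, root level first;
    # a flag is 1 where the node's subtree is NOT uniform (mirrors utree nzflag).
    mins, maxs = [q], [q]
    while mins[-1].shape != (1, 1, 1):
        mins.append(pool2(mins[-1], np.min))
        maxs.append(pool2(maxs[-1], np.max))
    # skip level 0 (the leaves themselves); the deepest pooled level is the root
    flags = [(mx != mn).astype(np.uint8) for mn, mx in zip(mins[1:], maxs[1:])]
    return flags[::-1]

Called as unitree_nzflags(np.abs(quantized_cu3d)), flags[0] is the single root-level flag and flags[-1] covers the 2x2x2 neighborhoods of the leaves, matching the utree[depth] indexing with depth 0 at the root.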
octree3d(start_depth,depth,z_idx,y_idx,x_idx,skip){
……
zs_idx=(depth==total_depth-1)?zdep_array[z_idx]:z_idx
proceed=nzflag=oct[depth][zs_idx][y_idx][x_idx]=1
if(depth>=start_depth){
if(!skip){
nzflag
oct[depth][zs_idx][y_idx][x_idx]=nzflag
}
proceed=nzflag
}
if(proceed){
if(depth<total_depth-1){
skip=false
next_z_idx=(z_idx<<1)
next_y_idx=(y_idx<<1)
next_x_idx=(x_idx<<1)
if(location[next_z_idx][next_y_idx][next_x_idx]exist in next depth){
octree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx,skip)
}
if(location[next_z_idx][next_y_idx][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx+1,skip)
}
if(location[next_z_idx][next_y_idx+1][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx,skip)
}
if(location[next_z_idx][next_y_idx+1][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx+1,skip)
}
if(location[next_z_idx+1][next_y_idx][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx,skip)
}
if(location[next_z_idx+1][next_y_idx][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx+1,skip)
}
if(location[next_z_idx+1][next_y_idx+1][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx,skip)
}
if(location[next_z_idx+1][next_y_idx+1][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and bit value of all other child nodes are zero
octree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx+1,skip)
}
return
}
if(codebook_size){
index
map[zs_idx][y_idx][x_idx]=index
}else{
sign
abs_q
map[zs_idx][y_idx][x_idx]=(sign?-int(abs_q):abs_q)
}
}
……
}
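In contrast, octree3d() codes a significance map: oct[depth][...] marks whether any quantized value under a node is non-zero, only significant nodes are descended, and the flag of the last child can be skipped (and inferred to be 1) when all of its siblings were zero. The sketch below derives those per-depth significance flags on the encoder side, reusing the pool2 helper and numpy import from the unitree3d sketch above; again this reflects an interpretation of the syntax, not normative text.

def octree_nzflags(q):
    # Per-depth significance flags, root level first; a flag is 1 where any value
    # in the node's 2x2x2-aligned subtree is non-zero (mirrors oct[depth]).
    levels = [(q != 0).astype(np.uint8)]
    while levels[-1].shape != (1, 1, 1):
        levels.append(pool2(levels[-1], np.max))
    return levels[::-1]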
tagtree3d(start_depth,depth,z_idx,y_idx,x_idx,skip){
……
zs_idx=(depth==total_depth-1)?zdep_array[z_idx]:z_idx
proceed=nzflag=1
if(depth)
tgt[depth][zs_idx][y_idx][x_idx]=tgt[depth-1][z_idx>>1][y_idx>>1][x_idx>>1]
if(depth>=start_depth){
if(!skip){
if(codebook_size){
if(depth==start_depth){
index=0
nzflag_index
if(nzflag_index)
index
tgt[depth][zs_idx][y_idx][x_idx]=index
}else{
delta_index
tgt[depth][zs_idx][y_idx][x_idx]=
tgt[depth-1][z_idx>>1][y_idx>>1][x_idx>>1]-delta_index
}
}else{
if(depth==start_depth){
abs_q=0
nzflag_q
if(nzflag_q)
abs_q
tgt[depth][zs_idx][y_idx][x_idx]=abs_q
}else{
delta_abs_q
tgt[depth][zs_idx][y_idx][x_idx]=
tgt[depth-1][z_idx>>1][y_idx>>1][x_idx>>1]-delta_abs_q
}
}
nzflag=(tgt[depth][zs_idx][y_idx][x_idx]!=0)
}
if(depth==total_depth-1&&nzflag&&codebook_size==0){
sign_q
tgt[depth][zs_idx][y_idx][x_idx]=
(sign_q?-int(tgt[depth][zs_idx][y_idx][x_idx]):tgt[depth][zs_idx][y_idx][x_idx])
}
proceed=nzflag
}
if(proceed){
if(depth<total_depth-1){
skip=false
next_z_idx=(z_idx<<1)
next_y_idx=(y_idx<<1)
next_x_idx=(x_idx<<1)
if(location[next_z_idx][next_y_idx][next_x_idx]exist in next depth){
tagtree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx,skip)
}
if(location[next_z_idx][next_y_idx][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx,next_y_idx,next_x_idx+1,skip)
}
if(location[next_z_idx][next_y_idx+1][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx,skip)
}
if(location[next_z_idx][next_y_idx+1][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx,next_y_idx+1,next_x_idx+1,skip)
}
if(location[next_z_idx+1][next_y_idx][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx,skip)
}
if(location[next_z_idx+1][next_y_idx][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx+1,next_y_idx,next_x_idx+1,skip)
}
if(location[next_z_idx+1][next_y_idx+1][next_x_idx]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx,skip)
}
if(location[next_z_idx+1][next_y_idx+1][next_x_idx+1]exist in next depth){
if(depth>=start_depth)
skip=this is the last child node and value of all other child nodes are smaller than value of parent node
tagtree3d(start_depth,depth+1,next_z_idx+1,next_y_idx+1,next_x_idx+1,skip)
}
return
}
map[zs_idx][y_idx][x_idx]=tgt[depth][zs_idx][y_idx][x_idx]
}
……
}
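tagtree3d() reads like a tag tree (a maximum pyramid): each internal node holds the maximum magnitude (or codebook index) of its children, the coarsest coded level (start_depth) is sent as absolute values, every deeper node is coded as a non-negative delta below its parent, and a zero-valued node prunes its subtree. The encoder-side sketch below builds the pyramid and the parent-relative deltas, again reusing pool2 and numpy from the unitree3d sketch above; this interpretation is an assumption drawn from the syntax.

def tagtree_levels(q):
    # Max pyramid over the leaf magnitudes, root level first (mirrors tgt[depth]).
    levels = [np.abs(q).astype(np.int64)]
    while levels[-1].shape != (1, 1, 1):
        levels.append(pool2(levels[-1], np.max))
    return levels[::-1]

def tagtree_deltas(levels):
    # deltas[0] holds the coarsest level (sent directly at start_depth);
    # deltas[d] for d > 0 holds parent - child, which is always >= 0 because
    # each parent is the maximum over its children (delta_abs_q / delta_index).
    deltas = [levels[0]]
    for parent, child in zip(levels[:-1], levels[1:]):
        up = parent.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)
        deltas.append(up[:child.shape[0], :child.shape[1], :child.shape[2]] - child)
    return deltas

Typical use: levels = tagtree_levels(q) followed by deltas = tagtree_deltas(levels); only the levels at start_depth and below would actually be coded.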
escape(){
……
if(codebook_size)
for(z=0;z<cu_cdepth;++z)
for(y=0;y<cu_height;++y)
for(x=0;x<cu_width;++x)
if(map[z][y][x]==codebook_size){
q=0
nzflag
if(nzflag){
sign
abs_q
q=(sign?-int(abs_q):abs_q)
}
}
……
}
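Finally, escape() covers codebook escapes: a decoded map value equal to codebook_size (one past the last codebook index) means the quantized value is sent explicitly instead of being looked up. A minimal sketch of how a decoder could resolve such a map, where read_escape_value() stands in for the nzflag/sign/abs_q parse and all names are illustrative:

def resolve_escapes(index_map, codebook, read_escape_value):
    # index_map is a 3D nested list of decoded indices for one CU3D; entries equal
    # to len(codebook) are escapes whose quantized value is parsed directly.
    esc = len(codebook)
    return [[[read_escape_value() if v == esc else codebook[v]
              for v in row] for row in plane] for plane in index_map]

For example, with codebook [3, 7] any map entry equal to 2 triggers an explicit value parse instead of a codebook lookup.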
Claims (11)
- A method for decoding of a neural network, comprising: decoding, by a processor, from a bitstream corresponding to a representation of the neural network, at least a syntax element that is applied to a plurality of blocks in the neural network; and reconstructing, by the processor, from the bitstream, weight coefficients of the plurality of blocks based on the syntax element; the method further comprising: decoding, from a neural network representation (NNR) header, a flag indicating whether to change a coding tree unit (CTU) size based on a kernel size; in response to enabling the change of the CTU size based on the kernel size indicated by the flag, updating the CTU size based on the kernel size; partitioning a weight tensor, which is the plurality of blocks, into CTUs based on the updated CTU size; and reconstructing, from a bitstream, the weight coefficients of the CTUs.
- The method of claim 1, further comprising: decoding, from a neural network representation (NNR) header in the bitstream, an index indicating a coding tree unit (CTU) size; partitioning a weight tensor, which is the plurality of blocks, into CTUs based on the CTU size indicated by the index; and reconstructing, from the bitstream, the weight coefficients of the CTUs.
- The method of claim 1, further comprising: decoding, from the bitstream, one or more split flags indicating partitions within a CTU; and partitioning the CTU into coding units (CUs) based on the one or more split flags, wherein the plurality of blocks has been partitioned into the CTUs.
- The method of claim 1, further comprising: determining, based on at least the syntax element, a bit depth for quantized weight coefficients in a layer; allocating memory space for the quantized weight coefficients based on the bit depth; and decoding, from the bitstream, the quantized weight coefficients in the layer using the allocated memory space.
- The method of claim 4, further comprising: decoding a global bit depth from a neural network representation (NNR) header; decoding, from a layer header for a layer, a bit depth difference from the global bit depth for quantized weight coefficients in a sublayer of the layer; and determining the bit depth for the quantized weight coefficients in the sublayer based on a combination of the global bit depth and the bit depth difference from the global bit depth.
- The method of claim 1, further comprising: decoding, from a layer header, a flag indicating a scan order of the plurality of blocks in a layer; and decoding the plurality of blocks from the bitstream according to the scan order.
- The method of claim 1, further comprising: decoding, from a layer header, at least one of a number of dimensions in a layer, a shape of the layer, a scan order of coding units in the layer, a saturation maximum value in the layer, and a quantization step size in the layer.
- The method of claim 1, further comprising: in response to a layer including a bias sublayer and another sublayer, decoding, from the bitstream, the bias sublayer of the layer before decoding the other sublayer of the layer.
- An apparatus for decoding of a neural network, comprising processing circuitry, wherein the processing circuitry is configured to perform the method of any one of claims 1 to 8.
- A computer-readable storage medium storing computer-executable instructions which, when executed, cause a computer to perform the method of any one of claims 1 to 8.
- A method for encoding of a neural network at an encoder, comprising: encoding, by a processor, into a bitstream corresponding to a representation of the neural network, at least a syntax element to be applied to a plurality of blocks in the neural network; and composing, by the processor, for the bitstream, weight coefficients of the plurality of blocks based on the syntax element; the method further comprising: encoding, into a neural network representation (NNR) header, a flag indicating whether to change a coding tree unit (CTU) size based on a kernel size; in response to enabling the change of the CTU size based on the kernel size indicated by the flag, updating the CTU size based on the kernel size; partitioning a weight tensor, which is the plurality of blocks, into CTUs based on the updated CTU size; and composing, for the bitstream, the weight coefficients of the CTUs.
Applications Claiming Priority (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201962939057P | 2019-11-22 | 2019-11-22 | |
US201962939054P | 2019-11-22 | 2019-11-22 | |
US62/939,054 | 2019-11-22 | ||
US62/939,057 | 2019-11-22 | ||
US202062958697P | 2020-01-08 | 2020-01-08 | |
US62/958,697 | 2020-01-08 | ||
US17/081,642 US11671110B2 (en) | 2019-11-22 | 2020-10-27 | Method and apparatus for neural network model compression/decompression |
US17/081,642 | 2020-10-27 | ||
PCT/US2020/060253 WO2021101790A1 (en) | 2019-11-22 | 2020-11-12 | Method and apparatus for neural network model compression/decompression |
Publications (2)
Publication Number | Publication Date |
---|---|
JP2022525897A JP2022525897A (ja) | 2022-05-20 |
JP7379524B2 true JP7379524B2 (ja) | 2023-11-14 |
Family
ID=75974383
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- JP2021555874A Active JP7379524B2 (ja) | 2019-11-22 | 2020-11-12 | Method and apparatus for neural network model compression/decompression |
Country Status (6)
Country | Link |
---|---|
US (2) | US11671110B2 (ja) |
EP (1) | EP4062320A4 (ja) |
JP (1) | JP7379524B2 (ja) |
KR (1) | KR20210126102A (ja) |
CN (1) | CN113853613B (ja) |
WO (1) | WO2021101790A1 (ja) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN108805261B (zh) | 2017-04-28 | 2021-11-12 | Microsoft Technology Licensing, LLC | Octree-based convolutional neural network |
- KR20210136123A (ko) * | 2019-11-22 | 2021-11-16 | Tencent America LLC | Method and apparatus for quantization, adaptive block partitioning and codebook coding for neural network model compression |
US11496151B1 (en) * | 2020-04-24 | 2022-11-08 | Tencent America LLC | Neural network model compression with block partitioning |
US20220201295A1 (en) * | 2020-12-21 | 2022-06-23 | Electronics And Telecommunications Research Institute | Method, apparatus and storage medium for image encoding/decoding using prediction |
US11876988B2 (en) * | 2021-01-19 | 2024-01-16 | Tencent America LLC | Method and apparatus for task-adaptive pre-processing for neural image compression |
US20220318604A1 (en) * | 2021-03-30 | 2022-10-06 | Amazon Technologies, Inc. | Sparse machine learning acceleration |
US20230013421A1 (en) * | 2021-07-14 | 2023-01-19 | Sony Group Corporation | Point cloud compression using occupancy networks |
- CN114847914A (zh) * | 2022-05-18 | 2022-08-05 | Shanghai Jiao Tong University | Electrical impedance tomography method and system based on a mixed-precision neural network |
- WO2024128885A1 (ko) * | 2022-12-12 | 2024-06-20 | Hyundai Motor Company | Method and apparatus for end-to-end video compression using block partition information as side information |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2016199330A1 (ja) | 2015-06-12 | 2016-12-15 | Panasonic Intellectual Property Management Co., Ltd. | Image encoding method, image decoding method, image encoding device, and image decoding device |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6608924B2 (en) * | 2001-12-05 | 2003-08-19 | New Mexico Technical Research Foundation | Neural network model for compressing/decompressing image/acoustic data files |
US9565437B2 (en) | 2013-04-08 | 2017-02-07 | Qualcomm Incorporated | Parameter set designs for video coding extensions |
- KR101817589B1 (ko) * | 2013-07-08 | 2018-01-11 | MediaTek Singapore Pte. Ltd. | Method of simplified CABAC coding in 3D video coding |
US10404988B2 (en) * | 2014-03-16 | 2019-09-03 | Vid Scale, Inc. | Method and apparatus for the signaling of lossless video coding |
EP3273692A4 (en) * | 2015-06-10 | 2018-04-04 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding or decoding image using syntax signaling for adaptive weight prediction |
- KR20170082356A (ko) | 2016-01-06 | 2017-07-14 | Electronics and Telecommunications Research Institute | Image compression apparatus and image compression method using the same |
US11032550B2 (en) * | 2016-02-25 | 2021-06-08 | Mediatek Inc. | Method and apparatus of video coding |
- CN108271026B (zh) | 2016-12-30 | 2020-03-31 | Shanghai Cambricon Information Technology Co., Ltd. | Compression/decompression apparatus and system, chip, electronic apparatus, and method |
US10841577B2 (en) | 2018-02-08 | 2020-11-17 | Electronics And Telecommunications Research Institute | Method and apparatus for video encoding and video decoding based on neural network |
EP3562162A1 (en) * | 2018-04-27 | 2019-10-30 | InterDigital VC Holdings, Inc. | Method and apparatus for video encoding and decoding based on neural network implementation of cabac |
US11948074B2 (en) | 2018-05-14 | 2024-04-02 | Samsung Electronics Co., Ltd. | Method and apparatus with neural network parameter quantization |
-
2020
- 2020-10-27 US US17/081,642 patent/US11671110B2/en active Active
- 2020-11-12 KR KR1020217029473A patent/KR20210126102A/ko active IP Right Grant
- 2020-11-12 JP JP2021555874A patent/JP7379524B2/ja active Active
- 2020-11-12 EP EP20889655.5A patent/EP4062320A4/en active Pending
- 2020-11-12 CN CN202080035810.4A patent/CN113853613B/zh active Active
- 2020-11-12 WO PCT/US2020/060253 patent/WO2021101790A1/en unknown
-
2023
- 2023-03-09 US US18/181,347 patent/US11791837B2/en active Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2016199330A1 (ja) | 2015-06-12 | 2016-12-15 | Panasonic Intellectual Property Management Co., Ltd. | Image encoding method, image decoding method, image encoding device, and image decoding device |
Non-Patent Citations (5)
Title |
---|
"Use cases and requirements for neural network compression for multimedia content description and analysis",No. N18731,[online], ISO/IEC JTC 1/SC 29/WG 11,2019年07月12日,Pages 1-38,[令和5年3月23日検索], インターネット,<URL: https://mpeg.chiariglione.org/standards/mpeg-7/compression-neural-networks-multimedia-content-description-and-analysis/use-cases> and <URL: https://mpeg.chiariglione.org/sites/default/files/files/standards/parts/docs/w18731.zip> and <URL: https://mpeg.chiariglione.org/meetings/127>. |
"Working Draft 4 of Compression of neural networks for multimedia content description and analysis",No. N19225,[online], ISO/IEC JTC 1/SC 29/WG 11,2020年05月15日,Pages 1-48,[令和5年3月23日検索], インターネット,<URL: https://mpeg.chiariglione.org/standards/mpeg-7/compression-neural-networks-multimedia-content-description-and-analysis/working> and <URL: https://mpeg.chiariglione.org/sites/default/files/files/standards/parts/docs/w19225_NN_compression_WD4.zip> and <URL: https://mpeg.chiariglione.org/meetings/130>. |
Brandon Reagen, et al.,"Weightless: Lossy weight encoding for deep neural network compression", [online],Proceedings of the 35th InternationalConference on Machine Learning,PMLR 80:4324-4333,2018年,全10頁,[令和5年3月23日検索], インターネット, <URL: https://proceedings.mlr.press/v80/reagan18a.html> and <URL: http://proceedings.mlr.press/v80/reagan18a/reagan18a.pdf>. |
今井 健男(外1名),「機械学習応用システムの開発・運用環境」,情報処理,日本,一般社団法人 情報処理学会,2018年12月15日,Vol.60, No.1,第17~24頁,ISSN: 0447-8053. |
峯澤 彰,「ニューラルネットワークモデル圧縮の国際標準NNR」,映像情報メディア学会誌,日本,一般社団法人 映像情報メディア学会,2021年03月01日,Vol.75, No.2,第246~250頁,ISSN: 1342-6907. |
Also Published As
Publication number | Publication date |
---|---|
US20210159912A1 (en) | 2021-05-27 |
US20230216521A1 (en) | 2023-07-06 |
EP4062320A1 (en) | 2022-09-28 |
US11791837B2 (en) | 2023-10-17 |
US11671110B2 (en) | 2023-06-06 |
CN113853613B (zh) | 2024-09-06 |
JP2022525897A (ja) | 2022-05-20 |
CN113853613A (zh) | 2021-12-28 |
KR20210126102A (ko) | 2021-10-19 |
EP4062320A4 (en) | 2023-04-05 |
WO2021101790A1 (en) | 2021-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- JP7379524B2 (ja) | Method and apparatus for neural network model compression/decompression | |
- CN112771583A (zh) | Method and apparatus for video coding | |
- JP7408799B2 (ja) | Neural network model compression | |
- JP7337950B2 (ja) | Method and apparatus for quantization, adaptive block partitioning, and codebook coding for neural network model compression, and computer program | |
US20220116610A1 (en) | Method and apparatus for quantization, adaptive block partitioning and codebook coding for neural network model compression | |
- JP7285950B2 (ja) | Three-dimensional (3D) tree coding method and apparatus for neural network model compression | |
US12101107B2 (en) | Signaling of coding tree unit block partitioning in neural network model compression | |
- CN113255878A (zh) | Method and apparatus for determining an escape reordering mode for neural network model compression | |
- CN112188216B (zh) | Video data encoding method and apparatus, computer device, and storage medium | |
- JP2023506035A (ja) | Method, apparatus, and computer program for unification-based coding for neural network model compression | |
- KR20230158597A (ko) | Online-training-based encoder tuning in neural image compression |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A621 | Written request for application examination |
Free format text: JAPANESE INTERMEDIATE CODE: A621 Effective date: 20210915 |
|
A131 | Notification of reasons for refusal |
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 20221011 |
|
A521 | Request for written amendment filed |
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 20230110 |
|
A131 | Notification of reasons for refusal |
Free format text: JAPANESE INTERMEDIATE CODE: A131 Effective date: 20230418 |
|
A521 | Request for written amendment filed |
Free format text: JAPANESE INTERMEDIATE CODE: A523 Effective date: 20230713 |
|
TRDD | Decision of grant or rejection written | ||
A01 | Written decision to grant a patent or to grant a registration (utility model) |
Free format text: JAPANESE INTERMEDIATE CODE: A01 Effective date: 20231017 |
|
A61 | First payment of annual fees (during grant procedure) |
Free format text: JAPANESE INTERMEDIATE CODE: A61 Effective date: 20231101 |
|
R150 | Certificate of patent or registration of utility model |
Ref document number: 7379524 Country of ref document: JP Free format text: JAPANESE INTERMEDIATE CODE: R150 |