AU669983B2 - Video signal transmitting system - Google Patents


Info

Publication number
AU669983B2
Authority
AU
Australia
Prior art keywords
picture
data
macro unit
coding
encoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
AU63362/94A
Other versions
AU6336294A (en)
Inventor
Katsuji Igarashi
Mark Veltman
Yoichi Yagasaki
Jun Yonemitsu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Publication of AU6336294A
Application granted
Publication of AU669983B2
Legal status: Expired


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)

Description

VIDEO SIGNAL TRANSMITTING SYSTEM
BACKGROUND OF THE INVENTION

The present invention relates to a video signal transmitting system, and is suitably applied to a case where moving picture signals are transmitted.
In so-called video transmitting systems, such as television conference systems and television telephone systems, video signals representing moving pictures are conventionally sent to a remote destination. The transmission efficiency of significant information is enhanced by efficiently using the transmission capacity of the transmission channel.
For this purpose, the transmitting unit does not send all the sequential frame pictures but performs so-called frame dropping processing of the frame pictures, such as to remove predetermined frames, and then transmits the video signals. In the receiving unit, motion vectors are received from the transmitting unit in place of the video signals of the frames removed, and the original video signals are reconstructed by interpolating the frame pictures, which have undergone the frame dropping processing, by using the motion vectors with reference to information of frame pictures before and after them (Patent Laid-open Publication No. 1985-2892).
According to this technique, it is theoretically sufficient to transmit information of motion vectors in place of information of frame pictures which have been dropped, the former being smaller in amount than the latter. Thus, it is considered that the technique efficiently sends significant information of the video signals.
Accordingly, the more frames are dropped, the more efficiently video signals are transmitted.
However, when video signals actually undergo high-efficiency coding processing and are then recorded on a recording medium, such as a compact disc, errors cannot be prevented from taking place. Moreover, video signals are reversely reproduced and randomly accessed, and hence, when a large number of frames are dropped, it is difficult to reproduce video signals with high quality.
It is an object of the present invention to provide a method and apparatus for encoding a digital video signal and/or a method and apparatus for decoding coded data produced by encoding the digital video signal.
According to a first aspect of the present invention, there is disclosed an encoding method for encoding a digital video signal including a plurality of pictures, said method comprising the steps of: encoding a first picture of one of said plurality of pictures as an intra coding picture; encoding a second picture of one of said plurality of pictures as a first inter coding picture with the use of a reconstructed picture of a picture temporally situated before the second picture; and encoding a third picture of one of said plurality of pictures as a second inter coding picture with the use of a reconstructed picture of a picture temporally situated before the third picture and a reconstructed picture of a picture temporally situated after the third picture, where at least one of said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture.
According to a second aspect of the present invention, there is disclosed a decoding method for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, said method comprising the steps of: decoding a coded data of a first picture encoded as an intra coding picture to produce a first decoded picture; decoding a coded data of a second picture encoded as a first inter coding picture with the use of a decoded picture of a picture temporally situated before the second picture to produce a second decoded picture; and decoding a coded data of a third picture encoded as a second inter coding picture with the use of a decoded picture of a picture temporally situated before the third picture and a decoded picture of a picture temporally situated after the third picture to produce a third decoded picture, where at least one of said decoded picture of a picture temporally situated before the third picture and said decoded picture of a picture temporally situated after the third picture is said second decoded picture.
According to a third aspect of the present invention, there is disclosed an encoding method for encoding a picture being divided into a plurality of macro unit blocks, wherein each macro unit block has a macro unit block header, and being temporally situated between a first picture and a second picture, said method comprising the steps of: selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block; encoding said each macro unit block using the selected one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding; and transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
According to a fourth aspect of the present invention, there is disclosed a decoding method for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, said method comprising the steps of: separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding; and decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
According to a fifth aspect of the present invention, there is disclosed an encoding method for encoding a digital video signal including a plurality of pictures, said method comprising the steps of: reordering said plurality of pictures, wherein each picture has a picture header; encoding said reordered plurality of pictures as an intra coding picture or an inter coding picture respectively to produce a coded data; and appending temporal information to each picture header of said coded data identifying an input order of said plurality of pictures.
According to a sixth aspect of the present invention, there is disclosed a decoding method for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, wherein each picture has a picture header, said method comprising the steps of: separating temporal information from each picture header of said coded data identifying an input order of said plurality of pictures in an encoder; decoding said coded data encoded as an intra coding picture or an inter coding picture to produce a plurality of decoded pictures; and reordering said plurality of decoded pictures according to said temporal information.
According to a seventh aspect of the present invention, there is disclosed an encoding method for encoding a motion vector, said method comprising the steps of: detecting a motion vector between a first picture and a second picture; dividing a value based on said motion vector by a predetermined value to produce a quotient value and a remainder value; variable length encoding said quotient value with the use of a table of variable length code; and encoding said remainder value.
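The quotient/remainder split described in this aspect can be sketched as follows. This is an illustration only: the function names and the modulus of 16 are assumptions, not values taken from the patent, and a real coder would then look up the quotient in a Huffman-style variable length code table.

```python
def encode_mv_component(value, modulus=16):
    """Split a motion-vector component into a quotient (destined for
    variable length coding from a table) and a fixed-length remainder.
    The modulus of 16 is an assumed illustrative value."""
    quotient, remainder = divmod(value, modulus)  # floor division keeps the
    return quotient, remainder                    # roundtrip exact for negatives

def decode_mv_component(quotient, remainder, modulus=16):
    """Inverse of encode_mv_component."""
    return quotient * modulus + remainder
```

Because Python's `divmod` uses floor division, the remainder is always non-negative, so negative vector components survive the roundtrip without a separate sign flag.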
According to an eighth aspect of the present invention, there is disclosed a decoding method for decoding a coded data to produce a motion vector, said method comprising the steps of: separating a motion vector data including a quotient data and a remainder data from said coded data; and decoding said motion vector data with the use of a table of variable length code.
According to a ninth aspect of the present invention, there is disclosed a motion vector detection method for detecting motion vectors between a plurality of pictures, said method comprising the steps of: detecting a first motion vector between a first picture and a third picture temporally situated between the first picture and a second picture; determining a search area of a second motion vector between said first picture and said second picture based on said first motion vector and an interval between said first picture and said second picture; and detecting said second motion vector according to said search area.
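One way to read this ninth aspect is as a telescopic search: the vector already found over the shorter interval is scaled up to centre a small window for the longer one. The sketch below is an assumption about how the search area might be derived; the ±2-sample margin is invented for illustration.

```python
def search_area(first_mv, interval_13, interval_12, margin=2):
    """Derive a search window for the motion vector between the first and
    second pictures from the vector already detected between the first and
    third pictures, scaled by the ratio of frame intervals.
    The margin of +/-2 samples is an assumed slack value."""
    dy, dx = first_mv
    scale = interval_12 / interval_13
    cy, cx = round(dy * scale), round(dx * scale)  # projected vector
    return (cy - margin, cy + margin), (cx - margin, cx + margin)
```

Searching only this small window, instead of the full range implied by the longer interval, is what keeps detection between distant frames tractable.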
According to a tenth aspect of the present invention, there is disclosed an encoding apparatus for encoding a digital video signal including a plurality of pictures, comprising: means for encoding a first picture of one of said plurality of pictures as an intra coding picture; means for encoding a second picture of one of said plurality of pictures as a first inter coding picture with the use of a reconstructed picture of a picture temporally situated before the second picture; and means for encoding a third picture of one of said plurality of pictures as a second inter coding picture with the use of a reconstructed picture of a picture temporally situated before the third picture and a reconstructed picture of a picture temporally situated after the third picture, where at least one of said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture.
According to an eleventh aspect of the present invention, there is disclosed a decoding apparatus for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, comprising: means for decoding a coded data of a first picture encoded as an intra coding picture to produce a first decoded picture; means for decoding a coded data of a second picture encoded as a first inter coding picture with the use of a decoded picture of a picture temporally situated before the second picture to produce a second decoded picture; and means for decoding a coded data of a third picture encoded as a second inter coding picture with the use of a decoded picture of a picture temporally situated before the third picture and a decoded picture of a picture temporally situated after the third picture to produce a third decoded picture, where at least one of said decoded picture of a picture temporally situated before the third picture and said decoded picture of a picture temporally situated after the third picture is said second decoded picture.
According to a twelfth aspect of the present invention, there is disclosed an encoding apparatus for encoding a picture temporally situated between a first picture and a second picture, said first and second pictures each being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprising: means for selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block; means for encoding said each macro unit block using the selected one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding; and means for transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
According to a thirteenth aspect of the present invention, there is disclosed a decoding apparatus for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, said first and second pictures being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprising: means for separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding; and means for decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
According to a fourteenth aspect of the present invention, there is disclosed an encoding apparatus for encoding a digital video signal including a plurality of pictures, comprising: means for reordering said plurality of pictures, wherein each picture has a picture header; means for encoding said reordered plurality of pictures as an intra coding picture or an inter coding picture respectively to produce a coded data; and means for appending temporal information to each picture header of said coded data identifying an input order of said plurality of pictures.
According to a fifteenth aspect of the present invention, there is disclosed a decoding apparatus for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, wherein each picture has a picture header, said apparatus comprising: means for separating temporal information from each picture header of said coded data identifying an input order of said plurality of pictures in an encoder; means for decoding said coded data encoded as an intra coding picture or an inter coding picture to produce a plurality of decoded pictures; and means for reordering said plurality of decoded pictures according to said temporal information.
According to a sixteenth aspect of the present invention, there is disclosed an encoding apparatus for encoding a motion vector, comprising: means for detecting a motion vector between a first picture and a second picture; means for dividing a value based on said motion vector by a predetermined value to produce a quotient value and a remainder value; means for variable length encoding said quotient value with the use of a table of variable length code; and means for encoding said remainder value.
According to a seventeenth aspect of the present invention, there is disclosed a decoding apparatus for decoding a coded data to produce a motion vector, comprising: means for separating a motion vector data including a quotient data and a remainder data from said coded data; and means for decoding said motion vector data with the use of a table of variable length code.
According to an eighteenth aspect of the present invention, there is disclosed a motion vector detection apparatus for detecting motion vectors between a plurality of pictures, comprising: means for detecting a first motion vector between a first picture and a third picture temporally situated between the first picture and a second picture; means for determining a search area of a second motion vector between said first picture and said second picture based on said first motion vector and an interval between said first picture and said second picture; and means for detecting said second motion vector according to said search area.
The nature, principle and utility of the invention will become more apparent from the following detailed description when read in conjunction with the accompanying drawings, in which like parts are designated by like reference numerals or characters.
BRIEF DESCRIPTION OF THE DRAWINGS

In the accompanying drawings:
FIG. 1 is a diagrammatic view illustrating a video signal transmitting system according to one embodiment of the present invention;
FIG. 2 is a diagrammatic view showing the operation of the video signal transmitting system;
FIG. 3 is a block diagram showing the overall construction of the transmitting unit;
FIG. 4 is a block diagram showing a reordering circuit;
FIG. 5 is a diagrammatic view showing the operation of the reordering circuit;
FIGS. 6(1) and 6(2) are block diagrams of the motion vector detecting circuit;
FIGS. 7(1) and 7(2) are diagrammatic views showing the operation of the motion vector detecting circuit;
FIG. 8 is a diagrammatic view illustrating frame data;
FIG. 9 is a characteristic graph showing priority detection of the motion vector;
FIG. 10 is a block diagram showing the adaptive prediction circuit;
FIG. 11 is a diagrammatic view showing the operation of the adaptive prediction circuit;
FIG. 12 is a graph of a characteristic curve illustrating the priority selection of the intraframe coding processing and the interframe coding processing;
FIG. 13 is a diagrammatic view showing transmission frame data;
FIGS. 14-16 are diagrammatic views illustrating headers of the frame data;
FIG. 17 is a block diagram showing the receiving unit;
FIG. 18 is a diagrammatic view showing the normal mode operation;
FIG. 19 is a diagrammatic view showing the reverse mode operation;
FIG. 20 is a block diagram showing the adaptive prediction circuit;
FIG. 21 is a diagrammatic view showing another embodiment;
FIG. 22 is a block diagram illustrating the adaptive prediction circuit of the other embodiment;
FIG. 23 is a diagrammatic view showing the operation of the adaptive prediction circuit;
FIG. 24 is a diagrammatic view showing the modified form of the adaptive prediction circuit;
FIGS. 25 and 26 are diagrammatic views showing the principle of detection of the motion vector;
FIG. 27 is a block diagram showing the run-length-Huffman encoding circuit;
FIGS. 28 and 29 are diagrammatic views showing the encoding processing of motion vectors;
FIGS. 30 and 31 are diagrammatic views illustrating the read only memory circuit;
FIGS. 32 and 33 are diagrammatic views showing data of the encoded motion vectors; and
FIGS. 34 and 35 are diagrammatic views illustrating the problem.
DETAILED DESCRIPTION OF THE INVENTION

Preferred embodiments of this invention will be described with reference to the accompanying drawings.
Principle of Video Signal Transmission

When a video signal coding method according to the present invention is applied to a video signal transmitting system, video signals are transmitted according to the technique shown in FIG. 1.
More specifically, the transmitting unit divides video signals DV of frame data F0, F1, F2, F3 ... into predetermined groups of frames and sequentially processes them (FIG. 1). In this embodiment, the transmitting unit divides the frame data F0, F1, F2, F3 ... into groups of frames each including a unit of six frames, and the leading frame data F0, F6 ... of each group of frames are intraframe coded and then transmitted.
The intraframe coding processing refers to a processing in which compression is performed on pictures in such a manner that the difference between pixel data is obtained, the pixel data being one- or two-dimensionally adjacent to each other along the scanning direction, for example. With this processing, transmission frame data having each picture compressed in the amount of data are constructed.
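This one-dimensional adjacent-pixel differencing can be sketched as follows. It is a simplified illustration assuming 8-bit samples; the function names are invented here, and the entropy coding that would follow the differencing is omitted.

```python
import numpy as np

def intraframe_encode(frame):
    """Replace each pixel (except the first of each line) by its difference
    from the horizontally adjacent pixel along the scanning direction."""
    frame = frame.astype(np.int16)  # differences need a signed type
    diff = frame.copy()
    diff[:, 1:] = frame[:, 1:] - frame[:, :-1]
    return diff

def intraframe_decode(diff):
    """Undo the differencing by accumulating along each scan line."""
    return np.cumsum(diff, axis=1).astype(np.uint8)
```

Since each picture is coded from its own pixels only, a frame can be rebuilt from its transmission data alone, which is what makes these frames usable as entry points for the receiver.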
Thus, in the receiving unit, frame data for one frame can be reconstructed by sequentially adding the intraframe coded transmission frame data for that frame.
In the transmitting unit, frame data F1, F2, F3 ... except the leading frame data F0, F6 ... of each group of frames are interframe coded and then transmitted.
The interframe coding refers to a processing in which, after a motion vector is detected between frame data of a predicted frame, which serves as a reference, and frame data to be coded, frame data (hereinafter referred to as predicted result frame data) are produced by shifting the frame data of the predicted frame by the amount of the motion vector. The difference in data between the predicted result frame data and the frame data to be coded is coded together with the motion vector to produce transmission frame data.
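A minimal sketch of this motion-compensated differencing follows. It assumes a single whole-frame motion vector and uses wrap-around shifting for simplicity; a real coder works block by block with proper edge handling.

```python
import numpy as np

def shift_by_mv(frame, mv):
    """Shift the predicted frame by the motion vector (dy, dx);
    np.roll's wrap-around stands in for proper edge handling."""
    dy, dx = mv
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)

def interframe_encode(frame, predicted_frame, mv):
    """Code the difference between the frame to be coded and the shifted
    predicted frame; the residual is transmitted together with mv."""
    prediction = shift_by_mv(predicted_frame, mv).astype(np.int16)
    return frame.astype(np.int16) - prediction

def interframe_decode(residual, predicted_frame, mv):
    """Reconstruct the frame from the residual, the predicted frame and mv."""
    prediction = shift_by_mv(predicted_frame, mv).astype(np.int16)
    return (prediction + residual).astype(np.uint8)
```

When the motion vector captures the displacement well, the residual is near zero almost everywhere, which is why transmitting it with the vector costs far less than transmitting the frame itself.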
Thus, in the transmitting unit, with respect to each of frame data F1, F2, F3 ... except the leading frame data F0, F6 ... of each group of frames, a motion vector in connection with a predetermined predicted frame is detected and the interframe coding processing is carried out.
In addition, in the transmitting unit, two predicted frames are assigned to each of frame data F1, F2, F3 ..., and a motion vector is detected for each predicted frame.
Furthermore, in the transmitting unit, predicted result frame data are produced from the frame data of the respective predicted frames with reference to the two detected motion vectors, and then the resulting two kinds of predicted result frame data are interpolated to generate interpolative predicted result frame data.
The interframe coding is performed by selecting the frame data of which the difference data is the smallest from the predicted result frame data and the interpolative predicted result frame data; that is, selective prediction processing is carried out. Hereinafter, prediction in which frame data inputted before the frame data to be coded are used as a predicted frame is called forward prediction; prediction in which frame data inputted after the frame data to be coded are used as a predicted frame is called backward prediction; and prediction in which the interpolative predicted result frame data are used is called interpolative prediction.
Thus, the transmitting unit selectively performs the interframe coding processing so that the transmission frame data become a minimum in the amount of data, and thereby video signals are transmitted with improved transmission efficiency.
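This selective prediction can be sketched as picking whichever of the forward (FP), backward (FN) and interpolative (FPN) predictions leaves the smallest difference. Using the sum of absolute differences as the selection measure is an assumption of this sketch, not something the text specifies.

```python
import numpy as np

def select_prediction(frame, fp, fn):
    """Return the chosen prediction mode and its residual, trying forward
    (FP), backward (FN) and their average (FPN, interpolative) in turn."""
    f = frame.astype(np.int16)
    candidates = {
        "forward": fp.astype(np.int16),
        "backward": fn.astype(np.int16),
        "interpolative": (fp.astype(np.int16) + fn.astype(np.int16)) // 2,
    }
    # pick the candidate whose absolute difference from the frame is smallest
    mode = min(candidates, key=lambda m: np.abs(f - candidates[m]).sum())
    return mode, f - candidates[mode]
```

The chosen mode must of course be signalled to the receiver, which is the role the prediction index PINDEX plays later in the text.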
When the interframe coding processing is carried out in the transmitting unit, firstly the fourth frame data F3, F9 ... of each group of frames are interframe coded with the previous frame data F0, F6 ... and the next frame data F6, F12 ... set as predicted frames (hereinafter referred to as processing level 1). Subsequently, the remaining frame data F1, F2, F4, F5 ... are interframe coded with the previous frame data F0, F3 ... and the next frame data F3, F6 ... set as predicted frames (hereinafter referred to as processing level 2).
The interframe coding processing is small in the amount of data to be transmitted as compared with the intraframe coding processing. Thus, the more frame data are interframe coded, the smaller the whole video signals to be transmitted become in data amount.
However, when the frame data to be interframe coded increase, frame data which are far away from the predicted frames referred to by the frame data must be interframe coded. Thus, a motion vector must be detected between frame data which are far away from each other, resulting in complicated detection of the motion vector. Particularly, in the selective prediction processing, the transmitting unit becomes complicated since the motion vectors to be detected increase.
In the embodiment, frame data F3 are interframe coded with frame data F0 and F6 set as predicted frames. Then, the frame data F3, F0 and F6 are set as predicted frames, and the frame data F1, F2, F4, F5 between them are interframe coded. With these procedures, motion vectors can be detected between relatively close frame data, so that it is possible to efficiently transmit video signals with a simple construction.
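The two-level reference structure of the six-frame group can be summarized as a small lookup. The frame numbering follows the text (F0 intra; F3 predicted from F0 and F6; F1, F2 from F0 and F3; F4, F5 from F3 and F6); the function itself is an illustrative sketch.

```python
def predicted_frames(n):
    """Return the (forward, backward) predicted-frame indices used to code
    frame n, or None if frame n is intraframe coded."""
    g = (n // 6) * 6          # leading frame of the group
    r = n % 6
    if r == 0:
        return None           # leading frame: intraframe coded
    if r == 3:
        return (g, g + 6)     # processing level 1
    if r < 3:
        return (g, g + 3)     # processing level 2, first half
    return (g + 3, g + 6)     # processing level 2, second half
```

Note that every reference is at most three frames away, which is exactly the property the paragraph above credits with keeping motion vector detection simple.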
Thus, in the interframe coding processing at the level 1, the transmitting unit sets the leading frame data F0 of a group of frames and the leading frame data F6 of the subsequent group of frames as reference predicted frames for detecting motion vectors to perform forward and backward predictions.
More specifically, the transmitting unit detects the motion vector MV3P between the frame data F0 and the fourth frame data F3 for the forward prediction and the motion vector MV3N between frame data F6 and F3 for the backward prediction (FIG. 1). Then, the frame data F0 and F6 of the predicted frames are shifted by the amount of the motion vectors MV3P and MV3N to construct predicted result frame data FP and FN for forward and backward predictions, respectively. Subsequently, the transmitting unit linearly interpolates the predicted result frame data FP and FN to generate predicted result frame data FPN for interpolative prediction.
After difference data ΔFP, ΔFN, ΔFPN, which are the differences in data between frame data F3 and the predicted result frame data FP, FN, FPN, are obtained, the transmitting unit selects the smallest difference data among the difference data ΔFP, ΔFN and ΔFPN, and converts it to transmission frame data F3X together with the motion vectors MV3P and MV3N (FIG. 2). Thus, in the receiving unit, the original frame data F0 and F6 are reconstructed from the transmission frame data F0X and F6X, and then the original frame data F3 can be reconstructed on the basis of the reconstructed frame data F0 and F6 and the transmission frame data F3X.
On the other hand, in the processing at the level 2, the transmitting unit sets the leading frame data F0 and F6 and the fourth frame data F3 and F9 as predicted frames for the first and the second frame data F1 and F2, F7 and F8, and then forward and backward predictions are performed.
Thus, in the transmitting unit, motion vectors MV1P and MV1N; MV2P and MV2N are detected with reference to frame data F0 and F3 (FIG. 1). Then, predicted result frame data FP and FN are constructed with reference to the motion vectors MV1P and MV1N; MV2P and MV2N, respectively, and interpolative predicted result frame data FPN are also constructed.
After difference data ΔFP, ΔFN and ΔFPN are obtained with reference to the predicted result frame data FP, FN and FPN, respectively, the smallest difference data among the difference data ΔFP, ΔFN and ΔFPN are selected and converted to transmission frame data F1X and F2X together with the motion vectors MV1P and MV1N; MV2P and MV2N.
Similarly, the fourth frame data F3 and the leading frame data F6 of the subsequent group of frames are set as predicted frames for the fifth and the sixth frame data F4 and F5; F10 and F11. When motion vectors MV4P and MV4N; MV5P and MV5N are detected, the transmitting unit constructs predicted result frame data FP, FN and FPN with reference to the motion vectors MV4P and MV4N; MV5P and MV5N, and then compares them with the frame data F4 and F5 to produce difference data ΔFP, ΔFN and ΔFPN. Then, the smallest difference data among the difference data ΔFP, ΔFN and ΔFPN is selected and converted to transmission frame data F4X and F5X together with the motion vectors MV4P and MV4N; MV5P and MV5N.
Thus, the frame data are separated into units of six frames and are processed in a combination of the intraframe coding processing and the interframe coding processing and then transmitted. Frame data F0, F6, which have been intraframe coded and then sent, are reconstructed first, and then the remaining frame data are subsequently reconstructed. If an error occurs, the error is thus prevented from being transmitted to the other groups of frames, and hence, when the invention is applied to compact discs or the like, high picture quality video signals can be transmitted with high efficiency.
Moreover, when reversely reproduced or randomly accessed, frame data can be positively reconstructed. Thus, degradation of the picture quality is effectively prevented and video signals can be highly efficiently transmitted.
In this embodiment, the transmission frame data are reordered in each group of frames in the order of the intraframe coding processing and the interframe coding processing and then transmitted (FIG. 2). When transmitted, identification data representing the predicted frame data and the transmission frame data intraframe coded are added to each of the picture data F0X-F6X.
That is, frame data F1 and F2; F4 and F5 require frame data F0 and F3; F3 and F6, which are their predicted frames, for encoding and decoding, respectively. For the frame data F3, the frame data F0 and F6, which are its predicted frames, are needed for encoding and decoding.
When the frame data to be intraframe coded are represented by a character A and the frame data to be processed at the levels 1 and 2 by characters B and C, as shown in FIG. 2, the transmitting unit outputs transmission frame data DATA (FIG. 2) in the order of reference frame data A0, B3, C1, C2, C4, C5, A6, B9 .... In this operation, the transmitting unit transmits a prediction index PINDEX, a forward prediction reference index PID and a backward prediction reference index NID together with the transmission frame data; the prediction index PINDEX is for identifying forward prediction, backward prediction or interpolative prediction, and the forward prediction reference index PID and the backward prediction reference index NID represent the predicted frames of the forward prediction and backward prediction, respectively. With these indexes the receiving unit decodes the transmission frame data with ease.
In practice, such transmission of the prediction index PINDEX for identifying forward prediction, backward prediction or interpolative prediction, and of the forward prediction reference index PID and backward prediction reference index NID representing the predicted frames, together with the transmission frame data, not only facilitates decoding in the receiving unit but also enables decoding of the original data with ease even if transmission frame data are transmitted in a format different from the format of this embodiment in the length of the group of frames, the frames processed at levels 1 and 2, etc.
More specifically, original frame data can be decoded by shifting the frame data of the predicted frame, which is identified by the forward prediction reference index PID and backward prediction reference index NID according to the prediction index PINDEX, by the amount of the motion vector thereof and then by adding the difference data transmitted.
Thus, the operability of the whole video signal transmission system is enhanced since video signals which are encoded in a different format can be easily decoded.
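Decoding driven by PINDEX, PID and NID can be sketched as follows, with the already-decoded frames held in a dictionary keyed by frame number. The wrap-around shift and the simple averaging used for interpolative prediction are simplifying assumptions of this sketch.

```python
import numpy as np

def reconstruct(diff, pindex, pid, nid, decoded, mv_p=(0, 0), mv_n=(0, 0)):
    """Rebuild one frame: shift the predicted frame(s) named by PID/NID by
    their motion vectors, combine them as PINDEX directs, add the difference."""
    def shift(frame, mv):
        return np.roll(np.roll(frame, mv[0], axis=0), mv[1], axis=1)
    if pindex == "forward":
        prediction = shift(decoded[pid], mv_p).astype(np.int16)
    elif pindex == "backward":
        prediction = shift(decoded[nid], mv_n).astype(np.int16)
    else:  # interpolative prediction uses both references
        prediction = (shift(decoded[pid], mv_p).astype(np.int16)
                      + shift(decoded[nid], mv_n).astype(np.int16)) // 2
    return (prediction + diff).astype(np.uint8)
```

Because the references are named explicitly rather than implied by position, the same routine works unchanged if the encoder varies the group length or the level structure, which is the point the text makes about format flexibility.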
Moreover, the format may be selectively changed within a video signal or within a single recording medium, and hence high picture quality moving video signals can be easily transmitted.
Construction of the Embodiment

Structure of the Transmitting Unit

In FIG. 3, numeral 1 indicates a transmitting unit of the video signal transmission system to which is applied the video signal transmission method above described. The transmitting unit 1 highly efficiently encodes and converts input video signals VDIN to transmission frame data DATA and then records them on a compact disc.
The transmission unit 1 provides an input video signal VDIN to a picture data inputting unit 2, where a luminance signal and a chrominance signal which constitute the input video signal VDIN are converted to a digital signal and then the amount of data is reduced to 1/4.
More specifically, the picture data inputting unit 2 provides the luminance signal which has been converted to the digital signal to a one-field dropping circuit (not shown) to delete one field, and then the remaining one field of the luminance signal is removed every other line.
The picture data inputting unit 2 deletes one field from each of the two chrominance signals which have been converted to digital signals and then selectively outputs every other line of the chrominance signals.
The picture data inputting unit 2 converts the luminance signals thinned, and the chrominance signals, selectively outputted, to data of a predetermined transmission rate through a time axis conversion circuit.

With these operations, the input video signal VDIN is preliminarily processed through the picture data inputting unit 2, so that picture data DV which continuously contain the sequential frame data described above are constructed.
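The field-and-line thinning described above can be modeled in a few lines. This is a minimal sketch, not the actual circuit: a frame is treated here as a plain list of scan lines, and the function name and representation are assumptions for illustration (it models only the line-count reduction, not the time axis conversion).

```python
def thin_lines(frame_lines):
    """Model of the preprocessing in the picture data inputting unit 2:
    delete one field (every other scan line), then keep only every other
    line of the remaining field, leaving 1/4 of the original lines."""
    one_field = frame_lines[0::2]   # delete one field
    return one_field[0::2]          # thin the remaining field every other line
```

Applied to the luminance and chrominance line sets alike, this yields the stated reduction to a quarter of the lines.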
When a start pulse signal ST is inputted, the reordering circuit 4 separates picture data DV, which are sequentially inputted in the order of frame data AO, C1, C2, B3, C4, C5, A6, C7, ..., into groups of frames of six frames each, and then the reordering circuit 4 reorders them in the order to be encoded, AO, A6, B3, C1, C2, C4, C5, A12, B9, C7, ..., and outputs them.
The subsequent intraframe coding and interframe coding are simplified by reordering the frame data in the order to encode in such a manner.
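The display-to-encoding reordering can be sketched as a list operation. This is an illustrative sketch under stated assumptions (whole six-frame groups, the next group's A frame available as the backward reference), not the counter-and-memory implementation of the reordering circuit itself:

```python
def to_encoding_order(frames):
    """Reorder display-order frames (A0 C1 C2 B3 C4 C5 A6 ...) into the
    encoding order A0 A6 B3 C1 C2 C4 C5 A12 B9 C7 ...: each group's next
    A frame (the backward reference) is sent first, then B, then the Cs."""
    out = []
    for g in range(0, len(frames) - 1, 6):      # six-frame groups
        a0, c1, c2, b3, c4, c5 = frames[g:g + 6]
        a_next = frames[g + 6] if g + 6 < len(frames) else None
        group = [a0] if g == 0 else []          # A0 is sent only for the first group
        if a_next is not None:
            group.append(a_next)                # backward reference frame
        group += [b3, c1, c2, c4, c5]
        out += group
    return out
```

Feeding the references before the frames that are predicted from them is what makes the subsequent intraframe and interframe coding straightforward.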
When an end pulse signal END rises, the reordering circuit 4 stops outputting of the frame data after the frame data, inputted immediately before the rise of the end pulse signal, is reordered.
The reordering circuit 4 outputs frame group index GOF, forward prediction reference index PID, backward prediction reference index NID and temporary index TR.
The frame group index GOF raises its signal level at the head of each group of frames, and the temporary index TR represents the order of the frame data in each group of frames.
A motion vector detecting circuit 6 receives the reordered picture data DVN and processes each frame data by separating it into predetermined macro unit blocks.
In this processing, the motion vector detecting circuit 6 delays frame data AO, A6, ..., which are to be intraframe coded, for a predetermined time and outputs them for every macro unit block thereof to a subtracting circuit 8, whereas with respect to frame data B3, C1, C2, C4, ... to be interframe coded, the motion vector detecting circuit 6 detects motion vectors MVP and MVN for each macro unit block with reference to predetermined predicted frames.
Furthermore, in this processing, the motion vector detecting circuit 6 obtains the difference data between frame data to be interframe coded and corresponding predicted result frame data in an absolute value summing circuit to thereby obtain error data ER which is the sum of absolute values of the difference data.
Thus, in this embodiment, the quantized stepsize or the like is switched by using the error data ER, so that degradation in picture quality is effectively avoided and video signals are efficiently transmitted.
In addition, the motion vector detecting circuit 6 delays the frame group index GOF, forward prediction reference index PID, backward prediction reference index NID and temporary index TR together with the reordered picture data DVN for a motion vector detection processing time and then outputs them for each macro unit block to the succeeding circuit.
The subtracting circuit 8 generates difference data DZ by obtaining the difference in data between predicted data DPRI outputted from an adaptive prediction circuit 10 and picture data DVN, and outputs the difference data DZ to a discrete cosine conversion circuit 12.

In the intraframe coding processing, the adaptive prediction circuit 10 outputs a mean value of picture data of each pixel as predicted data DPRI for each macro unit block.
On the other hand, in the interframe coding processing the adaptive prediction circuit 10 selects one of forward prediction, backward prediction and interpolative prediction by carrying out a selective prediction processing, and then the adaptive prediction circuit 10 outputs selected predicted result frame data as predicted data DPRI for each macro unit block.
This enables difference data DZ (which corresponds to the smallest amount of data among the difference data AFP, AFNP and AFN) to be obtained about frame data to be interframe coded, whereas about frame data to be intraframe coded, difference data DZ from the mean value can be obtained.
The discrete cosine transformation circuit 12 converts difference data DZ for each macro unit block by means of the DCT (discrete cosine transform) technique.
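The transform applied to each 8 x 8 block of difference data is the standard two-dimensional DCT-II. A naive reference sketch (for illustration only — the circuit is a hardware implementation, and faster factorized forms exist):

```python
import math

N = 8  # blocks are 8 x 8 pixels

def dct_2d(block):
    """Naive 8x8 two-dimensional DCT-II of one difference-data block."""
    def c(k):
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * N))
                    for x in range(N) for y in range(N))
            out[u][v] = c(u) * c(v) * s
    return out
```

A flat block concentrates all its energy in the DC coefficient, which is exactly why the transform makes the subsequent weighting and requantizing effective.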
A multiplication circuit 14 performs a weighting processing of output data from the discrete cosine transformation circuit 12 according to control data outputted from a weighting control circuit 16.

Sight of a human being does not recognize degradation in picture quality of a display picture of which brightness changes in a short period, for example, even if video signals are roughly quantized and transmitted.
e On the contrary, degradation of the picture qual i t is sharply recognized about a region where O brightness gradually changes when video signals of that region roughly quantized are sent.
Accordingly, video signals are roughly quantized about the region where brightness changes in a short period and the quantized stepsize is reduced for the brightness gradually changing region. In this manner, deterioration of the picture quality is effectively avoided and video signals are efficiently transmitted.
In this case, the quantized stepsize is enlarged for a high portion of the spatial frequency whereas the quantized stepsize is reduced for a low portion of the spatial frequency.
Thus, in this embodiment, a component which is hard to be recognized by a human being is equivalently enlarged in quantized stepsize by weighting processing coefficients of which data are outputted from the discrete cosine transformation circuit 12 according to error data ER outputted from the motion vector detecting circuit 6, and thereby the degradation of picture quality is effectively avoided and video signals are efficiently transmitted.
A requantizing circuit 18 requantizes output data of the multiplication circuit 14, in which event the quantized stepsize is switched according to control data outputted from the data amount control circuit 20.

Sight of the human being recognizes a display picture having a clear outline or boundary of an object to be good in the picture quality, and hence degradation in picture quality is effectively avoided and video signals are efficiently transmitted by reducing the quantized stepsize at the outline and the boundary of the object.
Thus, in this embodiment, the quantized stepsize is switched according to the amount of output data from the discrete cosine transformation circuit 12, the amount of input data from the buffer circuit 21 and error data ER, and thereby the output data of the discrete cosine transformation circuit 12 is requantized to reflect the quality of the picture. In this manner, deterioration of picture quality is effectively avoided and each frame data are transmitted at a fixed amount.
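The stepsize switching can be illustrated with a toy rule that coarsens quantization as the output buffer fills and as the error data ER grows. The thresholds, factors and the base stepsize below are invented purely for illustration — the actual control data of the data amount control circuit 20 is not specified at this level of detail:

```python
def quantizer_stepsize(buffer_fullness, er, base=16):
    """Illustrative stepsize control: buffer_fullness in [0, 1] models the
    buffer circuit feedback; er is the error data of the macro unit block.
    All constants are hypothetical."""
    step = base
    if buffer_fullness > 0.75:    # buffer nearly full: quantize coarsely
        step *= 2
    elif buffer_fullness < 0.25:  # buffer nearly empty: spend bits on quality
        step //= 2
    if er > 1000:                 # rapidly changing block: coarse steps go unnoticed
        step *= 2
    return max(step, 1)
```

The point of the feedback is the fixed per-frame transmission amount: the stepsize rises before the buffer can overflow.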
An inverse requantizing circuit 22 receives output data of the requantizing circuit 18 and carries out inverse quantizing processing which is an inverse processing of the requantizing circuit 18 to thereby reconstruct the input data of the requantizing circuit 18.

An inverse multiplication circuit 24 performs a multiplication operation on the output data of the inverse requantizing circuit 22 inversely to the multiplication circuit 14 to thereby reconstruct input data of the multiplication circuit 14.
Inversely to the discrete cosine transformation circuit 12, an inverse discrete cosine transformation circuit 26 converts output data of the inverse multiplication circuit 24, so that the input data of the discrete cosine transformation circuit 12 are reconstructed.
An adding circuit 28 adds the predicted data DPRI, outputted from the adaptive prediction circuit 10, to the output data of the inverse discrete cosine transformation circuit 26 and then outputs the resulting data to the adaptive prediction circuit 10.

Thus, in the adaptive prediction circuit 10, frame data DF which reconstructs the input data of the subtracting circuit 8 can be obtained through the adding circuit 28, and thereby the frame data DF are selectively inputted to set a predicted frame. Thus, a selective prediction result is obtained about frame data subsequently inputted to the subtracting circuit 8.
Accordingly, the inputting of frame data reordered in the processing sequence enables a selective prediction result to be detected by sequentially inputting frame data DF in a selective manner in the adaptive prediction circuit 10, and hence video signals can be transmitted with a simple construction.
In a run-length Huffman encoding circuit 30, output data of the requantizing circuit 18 are subjected to Huffman coding processing, which is a variable length coding processing, and are then outputted to a transmission data composition circuit 32.
Similarly, a run-length Huffman encoding circuit 34 performs Huffman encoding on motion vectors MVN and MVP and then outputs them to the transmission data composition circuit 32.
Synchronously with a frame pulse signal SFP, the transmission data composition circuit 32 outputs output data of the run-length Huffman encoding circuits 30 and 34, prediction index PINDEX, forward prediction reference index PID, backward prediction reference index NID and temporary index TR together with control information or the like information of the weighting control circuit 16 and the data amount control circuit 20 in a predetermined sequence.
A reordering circuit 33 reorders output data of the transmission data composition circuit 32 in the encoding order for each group of frames and then outputs the data reordered to the buffer circuit 21, through which transmission frame data DATA are outputted.
Thus, transmission frame data DATA which are constructed by high efficiency coding of the input video signal VDIN are obtained, and the recording of the transmission frame data DATA on a compact disc together with a synchronizing signal or the like signal enables deterioration of picture quality to be avoided and provides high density recording of video signals.
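The run-length stage ahead of a Huffman coder conventionally emits (zero run, level) pairs for the quantized coefficients. A minimal sketch of that conversion — the pairing scheme shown is the conventional one, assumed rather than quoted from this embodiment:

```python
def run_level_pairs(coeffs):
    """Convert a sequence of quantized coefficients into (zero_run, level)
    pairs, the symbols a run-length/Huffman stage would then code.
    Trailing zeros are simply dropped (an end-of-block code would follow)."""
    pairs, run = [], 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            pairs.append((run, c))
            run = 0
    return pairs
```

Long zero runs collapse into single symbols, which is what makes the variable length coding of requantized DCT output so compact.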
Reordering Circuit

As illustrated in FIGS. 4 and 5, the reordering circuit 4 operates synchronously with the frame pulse signal SFP (FIG. 5(A)) and reorders and outputs picture data DV (FIG. 5) in the order of the intraframe coding processing and the interframe coding processing, the picture data DV being inputted after the start pulse signal ST rises and before the end pulse signal END rises (FIG. 5).

More specifically, the reordering circuit 4 provides the start pulse signal ST to a clear terminal C of the counter circuit 40, which increments its count value, through an OR circuit 42 and thereby generates count data COUNT (FIG. 5) which increments its value synchronously with the frame pulse signal SFP.

When the count data COUNT reaches a predetermined value, the decoder circuit 44 activates the clear terminal C through OR circuits 46 and 42.

Thus, the count data COUNT sequentially and cyclically changes within a range from 0 to 6 synchronously with the frame pulse signal SFP.
A delay circuit 48 delays the start pulse signal ST for five frame cycles and then outputs it to the clear terminal C of the counter circuit 40 through the OR circuits 46 and 42.

Thus, when the start pulse signal ST rises, the clear terminal C of the counter circuit 40 continuously rises for two frame cycles with a delay of five frame cycles, so that count data COUNT having continuous zero values are obtained.
When the end pulse signal END rises, the counter circuit 40 loads data DL of a value 1, and thereby the count data COUNT sequentially changes from a value 1 to a value 5 by jumping over a value 0 after the end pulse signal END rises.
An OR circuit 50 receives the end pulse signal END and an output signal from the OR circuit 42 and provides an output signal to a flip-flop circuit (F/F) 52.
In response to this output signal, the flip-flop circuit 52 rises in signal level for the leading two frame cycles of the first group of frames and for the leading one frame cycle of each subsequent group of frames. In this embodiment, the output signal of the flip-flop circuit 52 is used as group of frames index GOF (FIG. 5).

According to the count data COUNT, read only memory circuits (ROM) 54, 56 and 58 construct forward prediction reference index PID, backward prediction reference index NID and temporary index TR (FIG. 5), respectively.
More specifically, the read only memory circuit 54 outputs a forward prediction reference index PID having a value 0 when the count data COUNT has a value 1, 2 or 3, a forward prediction reference index PID with a value of 3 when the count data COUNT has a value 4 or 5, and stops outputting the forward prediction reference index PID when the count data COUNT has a value 0.

The read only memory circuit 56 outputs a backward prediction reference index NID having a value 0 when the count data COUNT has a value 4 or 5, a backward prediction reference index NID with a value of 3 when the count data COUNT has a value 2 or 3, and stops outputting the backward prediction reference index NID when the count data COUNT has a value 0.
The read only memory circuit 58 outputs a temporary index TR having a value 0, 3, 1, 2, 4 or 5 when the count data COUNT has a value 0, 1, 2, 3, 4 or 5, respectively.

Thus, in response to each of the frame data, there are provided forward prediction reference index PID and backward prediction reference index NID, which are referred to in the intraframe coding processing and the interframe coding processing, and temporary index TR representing the order of the frame data in the group of frames.

A counter circuit 60 controls the timing of writing to memory circuits 61-65 according to an output signal of the OR circuit 42 and thereby frame data are sequentially loaded in the memory circuits 61 to 65.

More specifically, the memory circuit 61 is held in a writing mode during a period of time that the fourth frame data B3, B9, ... of each group of frames are inputted whereas the memory circuit 62 is held in a writing mode while the second frame data C1, C7, ... are inputted.

Similarly, the memory circuits 63, 64 and 65 are held in a writing mode while the third, the fifth and the sixth frame data C2, C8, ..., C4, C10, ... and C5, C11, ... are inputted, respectively.
The memory circuit 66 is placed in a writing mode at the timing of rising of the start pulse signal ST and hence stores the frame data AO immediately after the rising of the start pulse signal ST.
A selecting circuit 68 is actuated on the basis of a delayed start pulse signal DST outputted from the delay circuit 48. When the delayed start pulse signal DST rises, the selecting circuit 68 outputs the frame data AO stored in the memory circuit 66 to an input terminal of the following selecting circuit 70, whereas when the delayed start pulse signal DST falls, the selecting circuit 68 directly outputs picture data DV inputted to the reordering circuit 4.

The selecting circuit 70 receives the frame data outputted from the selecting circuit 68, and the frame data stored in the memory circuits 61 to 65, and selectively and sequentially outputs them according to the count data COUNT, so that the frame data inputted to the reordering circuit 4 are reordered in the order of the intraframe coding processing and the interframe coding processing and are then outputted.
Motion Vector Detecting Circuit

As illustrated in FIGS. 6 and 7, the motion vector detecting circuit 6 processes the picture data DVN, outputted from the reordering circuit 4, with reference to the forward prediction reference index PID, backward prediction reference index NID and temporary index TR (FIG. 7).

More specifically, in the motion vector detecting circuit 6, read only memory circuits 72 and 73 receive the forward prediction reference index PID and the backward prediction reference index NID and generate switching control data SW1 and SW2 (FIGS. 7(D) and 7(E)) of which logic levels fall when the forward prediction reference index PID and the backward prediction reference index NID have a value of 3, respectively.
A read only memory circuit 74 receives the temporary index TR and generates intraframe coding processing control data PINTRA (FIG. 7) of which logic level rises when the temporary index TR has a value of 0 (corresponding to frame data to be intraframe coded).

Similarly, read only memory circuits 75, 76, 77, 78 and 79 generate interframe coding processing control data WB3, WC1, WC2, WC4 and WC5 of which logic levels rise when the temporary index TR has a value of 3, 1, 2, 4 and 5 (corresponding to frame data B3, C1, C2, C4 and C5 to be interframe coded), respectively.

A delay circuit 80 delays the interframe coding processing control data WC5 and generates switching control data BON (FIG. 7(G)) of which logic level rises at a leading frame of each group of frames except the first group of frames.

An OR circuit 82 receives the interframe coding processing control data WC5 and the intraframe coding processing control data PINTRA to generate a frame memory control data WAP (FIG. 7).

Thus, the motion vector detecting circuit 6 operates on the basis of these control data generated in the read only memory circuits 73-79, the delay circuit 80 and the OR circuit 82.

A blocking circuit 84 receives the picture data DVN (IN) (FIG. 7(J)) which are sequentially inputted synchronously with the frame pulse signal SFP (FIG. 7(I)) to separate each frame data into predetermined macro unit blocks.

As shown in FIG. 8, each frame data (FIG. 8(A)) is divided in 5 x 2 vertically and horizontally on a display screen to produce 10 groups of block units (FIG. 8(B)).
Furthermore, each group of block units is divided in 3 x 11 vertically and horizontally to produce 33 groups of macro units (FIG. 8).

The transmission unit 1 sequentially processes frame data in a group of macro units.
In one group of macro units, picture data of pixels in a column of eight and a row of eight are assigned to one block, and picture data of six blocks in total are assigned to each macro unit.
Luminance signals Y1, Y2, Y3 and Y4 of 2 x 2 blocks are allocated to four blocks of the six blocks whereas chrominance signals CR and CB which correspond to the luminance signals Y1, Y2, Y3 and Y4 are allocated to the remaining two blocks.

Thus, frame data which is divided into 15 x 22 macro unit blocks through the blocking circuit 84 is obtained.
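The layout arithmetic is easy to verify: 5 x 2 groups of block units, each divided 3 x 11, gives a 15 x 22 grid of macro unit blocks, and with four 8 x 8 luminance blocks per macro unit this corresponds to a 352 x 240 luminance frame (the frame dimensions are an inference consistent with, but not stated in, the text):

```python
# Sanity check of the macro unit layout described above.
ROWS = 5 * 3                      # 5 vertical block-unit groups, 3 macro units each
COLS = 2 * 11                     # 2 horizontal block-unit groups, 11 macro units each
MACRO_UNITS = ROWS * COLS         # 15 x 22 = 330 macro unit blocks per frame
BLOCKS_PER_MACRO_UNIT = 4 + 2     # Y1..Y4 plus CR and CB
PIXELS_PER_BLOCK = 8 * 8
LUMA_PIXELS = MACRO_UNITS * 4 * PIXELS_PER_BLOCK  # luminance pixels per frame
```

Two chrominance blocks covering the area of four luminance blocks is the familiar 2:1 chrominance subsampling in both directions.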
A delay circuit 85 outputs frame data, which are outputted from the blocking circuit 84, with a delay of five frame cycles necessary for the motion vector detection processing.
Thus, in the motion vector detecting circuit 6, picture data DV (OUT) (FIG. 7) are divided in macro unit blocks and outputted synchronously with detection of motion vectors.
A delay circuit 86 delays frame group index GOF (IN) (FIG. 7(L)) by five frame cycles and thereby outputs a frame group index GOF (OUT) (FIG. 7) which coincides in timing with the picture data DV (OUT) outputted from the motion vector detecting circuit 6.

A backward prediction frame memory circuit 88, a forward prediction frame memory circuit 89 and an interframe memory circuit 90 store respective frame data which are referred to for detecting motion vectors.

More specifically, the backward prediction frame memory circuit 88 is controlled to enter picture data DVN into it when intraframe coding processing control data PINTRA rises, and thereby picture data DNV is obtained through the backward prediction frame memory circuit 88. In the picture data DNV, frame data AO is outputted for one frame cycle, then frame data A6 continues for the subsequent 6 frame cycles, and frame data A12 continues for the subsequent 6 frame cycles (FIG. 7).

The forward prediction frame memory circuit 89 is controlled to enter frame data into it, the frame data being outputted from the backward prediction frame memory circuit 88, when the frame memory control data WAP rises.
By this operation, picture data DPV is obtained through the forward prediction frame memory circuit 89, the picture data DPV containing frame data AO continuing for the first five frame cycles of the six frame cycles in which frame data A6 is outputted from the backward prediction frame memory circuit 88, frame data A6 continuing for the subsequent 6 frame cycles, and frame data A12 for the subsequent 6 frame cycles (FIG. 7).

The interframe memory circuit 90 is controlled to receive picture data DVN when the interframe coding processing control data WB3 rises.
By this operation, picture data DINT is obtained through the interframe memory circuit 90, the picture data DINT having the fourth frame data B3, B9, ..., each continuing for six frame cycles (FIG. 7).

Selection circuits 92 and 93 receive picture data DNV and DINT, and DPV and DINT, and switch their contacts according to switching control data SW1 and SW2, respectively.
By this operation, the selection circuits 92 and 93 output frame data AO, A6, B3, ..., which are referred to for detecting motion vectors, to the following variable reading memory circuits 94 and 95 by sequentially switching.
More specifically, in detecting motion vectors MV3N and MV3P of frame data B3, frame data A6 and AO are outputted to variable reading memory circuits 94 and 95, respectively.
In the processing at the level 2, frame data B3 and AO are outputted to variable reading memory circuits 94 and 95 when motion vectors MV1N, MV1P and MV2N, MV2P of frame data C1 and C2 are detected, respectively; and frame data A6 and B3 are outputted to variable reading memory circuits 94 and 95 when motion vectors MV4N, MV4P and MV5N, MV5P of frame data C4 and C5 are detected, respectively.
When the motion vector of frame data C1 is detected within a range of pixels in 8 columns and 8 rows, for example, with reference to reference frame AO, to detect the motion vector of the frame data C2 it is necessary to detect it within a range of pixels in 16 columns and 16 rows with reference to the frame data AO. Similarly, to detect motion vectors of the frame data C4 and C5 with reference to frame data A6, it is necessary to detect them within a range of pixels in 16 columns and 16 rows and pixels in 8 columns and 8 rows, respectively.
Thus, for the processing at the level 2, it is necessary to detect a motion vector within a maximum range of pixels in 16 columns and 16 rows.
On the other hand, to detect the motion vector of the frame data B3 with reference to frame data AO and A6, it is necessary to detect it within a range of pixels in 24 columns and 24 rows.

Thus, when frame data is divided in groups of predetermined frames, and when frame data in each group of frames is interframe coded and then transmitted, the motion vector detecting range becomes too large in the motion vector detecting circuit 6, and hence it is likely that the motion vector detecting circuit 6 becomes complicated.
To avoid this, in this embodiment the motion vectors at the level 2 are firstly detected, and then the motion vector detecting range of the frame data B3 is set with reference to the result of the detection.
Thus, the overall structure of the motion vector detecting circuit 6 is simplified.

More specifically, as shown in FIGS. 25 and 26, with respect to each frame data C1, C2, ... from the frame data AO to the frame data B3, motion vectors V1, V2 and V3 are sequentially detected, and the sum V1 + V2 + V3 of the motion vectors V1, V2 and V3 is detected.
Then, a motion vector detection range of the frame data B3 is set with a position, shifted by the sum vector V1 + V2 + V3, placed centrally, and a motion vector MV3P is detected within the motion vector detection range.
In this manner, the motion vector MV3P can be detected within a small motion vector detection range.
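This telescopic narrowing of the B-frame search can be sketched as follows. The function names and the +/-8 radius of the narrowed window are illustrative assumptions:

```python
def b_frame_search_center(level2_vectors):
    """Sum the motion vectors V1, V2, V3 found for the intermediate frames
    and use the sum as the centre of the B frame's search window."""
    cx = sum(v[0] for v in level2_vectors)
    cy = sum(v[1] for v in level2_vectors)
    return cx, cy

def search_window(center, radius=8):
    """A small +/-radius window around the telescoped centre, instead of
    the full +/-24 pixel range that a direct search would need."""
    cx, cy = center
    return (range(cx - radius, cx + radius + 1),
            range(cy - radius, cy + radius + 1))
```

Because the centre already accounts for the accumulated motion across the group, only the residual displacement has to be searched for MV3P.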
In this embodiment, the forward prediction and backward prediction motion vectors are detected to detect the motion vectors at the level 2, and motion vectors MV1P and MV1N of the frame data C1 are detected. Thus, the motion vector MV3P can be detected within a small motion vector detection range by setting the motion vector detection range with the position, offset by the motion vectors MV1P and MV1N, placed centrally.
A selection circuit 96 provides frame data C1, C2, C4 and C5, which are to be processed at the level 2, to subtraction circuits KN0-KN255 and KP0-KP255.

On the other hand, in the processing at the level 1, the selection circuit 96 switches the contact to provide frame data B3, which is once stored in the interframe memory circuit 90, to the subtraction circuits KN0-KN255 and KP0-KP255 through a blocking circuit 97.

The blocking circuit 97 divides the frame data B3 into macro unit blocks and outputs them as in the blocking circuit 84, and thereby the blocking circuit 97 provides frame data B3 for every macro unit block to the subtraction circuits KN0-KN255 and KP0-KP255.

Thus, motion vectors are sequentially detected about frame data C1, C2, C4 and C5, and then a motion vector is detected about the frame data B3.

The selection circuits 92 and 93 switch their contacts according to the motion vector detection and sequentially output frame data B3 and AO, and A6 and B3, to variable reading memory circuits 94 and 95 at the timing of inputting of frame data C1, C2, C4 and C5 to the motion vector detecting circuit 6. Then, frame data A6 and AO are outputted during the subsequent one frame cycle.
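A bank of 256 parallel subtraction circuits feeding an absolute value adder computes, in effect, one sum of absolute differences (SAD) per candidate displacement, and the comparison circuits then pick the minimum. A software sketch of that matching step — the function names and the linear test pattern are illustrative only:

```python
def sad(block_a, block_b):
    """Sum of absolute differences over one 16x16 macro unit block: the
    software equivalent of 256 subtraction circuits plus an absolute
    value adding circuit."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_displacement(block, ref, candidates):
    """Pick the displacement minimising the SAD, mirroring the comparison
    circuits that select the minimum difference data and its position.
    ref must be large enough that every candidate window fits."""
    def sad_at(d):
        dy, dx = d
        window = [row[8 + dx: 24 + dx] for row in ref[8 + dy: 24 + dy]]
        return sad(block, window)
    return min(candidates, key=sad_at)
```

The position of the minimum is the macro unit block's motion vector; the minimum value itself is what the text calls the error data ER.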
circuits 92 and 93 switch their to the motion vector detection tially output frame data B3 and AO, 3, A6 and B3 to variable reading and 95 at the timing of inpu tting of contacts sequen B3 and I9 memory according and sequen A6 and B rcuits 94 frame data C1, C2, C4 and C5 to the motion vector detect ing circuit 6. Then, frame data A6 and AO are outpu tled during the subsequent one frame cycle.
The subtraction circuits KN 0
-KN
2 5 5 and KP 0
-KP
2 5 QO each include 256 x 2 subtraction circuits connected in parallel and sequentially input picture data of the luminance signal which constitutes each macro unit block.
The variable reading memory circuits 94 and 95 output frame data, which are inputted through the selection circuits 92 and 93, in a parallel manner to the subtraction circuits KN0-KN255 and KP0-KP255 according to control data DM outputted from the vector generating circuit 98.
When, in the processing at the level 2, the first picture data of the first macro unit block is inputted to the subtraction circuits KN0-KN255 and KP0-KP255, the variable reading memory circuits 94 and 95 output picture data to the subtraction circuits KN0-KN255 and KP0-KP255, the picture data being within a range of pixels in 16 columns and 16 rows about the picture data (that is, picture data within the motion vector detecting range).
Similarly, when the second picture data of the first macro unit block is inputted to the subtraction circuits KN0-KN255 and KP0-KP255, the variable reading memory circuits 94 and 95 output picture data, within a range of pixels in 16 columns and 16 rows about the second picture data from the frame data of the predictive frame, to the subtraction circuits KN0-KN255 and KP0-KP255.

In the processing at the level 2, the variable reading memory circuits 94 and 95 sequentially output picture data within the motion vector detecting range with respect to picture data inputted to the subtraction circuits KN0-KN255 and KP0-KP255.

Thus, in the level 2 processing, difference data, which is given in displacement of the prediction vector in the motion vector detecting range, can be obtained for each picture data of frame data to detect the motion vector through the subtraction circuits KN0-KN255 and KP0-KP255.
-KN
2 5 5 and KP 0
-KP
2 5 5 the picture data being within a range of pixels in 16 columns and 16 rows about c picture data displaced a predetermined amount from the picture data, which has been inputted to the Ssubtraction circuits KN -KN255 and KP -KP255 with reference to the results of the detect ion of frame data C1 and C2, C4 and Thus, in the processing at the level 1, di f ference datr which are given in displacement of the predicted frame can be obtained within the mot ion vector detect ing range for each picture data of frame data B3 through the subtract ion circuits KN 0
-KN
2 5 5 and SKP 0 -KP255, the mot ion vector detect ing range being shi fted a predetermined amount.
Absolute value adding circuits 100 and 101 receive subtraction data of each of the subtraction circuits KN0-KN255 and KP0-KP255 and detect a sum of the absolute values of the subtraction data for each of the subtraction circuits KN0-KN255 and KP0-KP255, and then the absolute value adding circuits 100 and 101 output the sum of the absolute values for each macro unit block.
Thus, in the level 2 processing, 256 (16 x 16) difference data is obtained for each macro unit block, the difference data being produced through the absolute value adding circuits 100 and 101 when the predicted frames are subsequently displaced within the motion vector detecting range about the macro unit block, shifted a predetermined amount.
On the other hand, in the processing at level 1, 256 difference data is obtained for each macro unit block, the difference data being produced with reference to the macro unit block when the predicted frame is subsequently displaced within the motion vector detecting range shifted a predetermined amount.
The comparison circuits 102 and 103 receive 256 difference data outputted from the absolute value adding circuits 100 and 101 and output difference data DOON and DOOP of the difference data to comparison circuits 105 and 106, the difference data DOON and DOOP being produced when picture data of the predicted frame is vertically and horizontally displaced 0 pixel (that is, when the predicted frame is not moved).
Moreover, the comparison circuits 102 and 103 detect and output minimum values among the remaining difference data as error data ER (ERN and ERP) as well as detect position information of the minimum difference data.
Thus, the position information to displace the predicted frame so as to minimize the difference data can be detected through the comparison circuits 102 and 103, and thereby a motion vector can be sequentially detected about each macro unit block.
It can be judged from the error data ER (ERN and ERP) that the larger its value is, the more largely the picture changes in each macro unit block.
SThus, it is possible to judge according to the error data ER whether or not the region moved.
The error data ER becomes larger in value at out lines and boundary port ions.
Thus, the nature of the picture can be reflected in the requantizing processing by switching the quantizing stepsize at the data amount control circuit with the error data ER as a reference, and thereby video signals can be transmitted with deterioration of picture quality effectively avoided.

It is considered that the higher the spatial frequency is, the larger the error data ER becomes. Accordingly, video signals can be highly efficiently transmitted with deterioration of picture quality effectively avoided by weighting, in the multiplication circuit 14 and according to the error data ER, the result of the transformation outputted from the discrete cosine transformation circuit 12.

Thus, the nature of the picture can be reflected in the requantizing processing by switching the quantizing stepsize of the requantizing circuit 18 on the basis of the error data ER and by controlling the weighting processing of the multiplying circuit 14, and thereby video signals can be transmitted with deterioration of picture quality effectively avoided.
It is possible to detect position information to move the predicted frames so as to minimize the difference data with reference to a minimum difference data, and hence motion vectors can be sequentially detected with respect to each macro unit block.
The comparison circuits 105 and 106 provide outcomes of the comparison between the error data ERN, ERP and the difference data DOON, DOOP, respectively.

During these operations, the comparison circuits 105 and 106 convert the error data ERN, ERP and the difference data DOON, DOOP to an amount of error and difference per one pixel as represented by the following equations:

    DOON (DOOP) / 256     (1)

    ERN (ERP) / 256       (2)

and in a range where the amount of error and difference is small, the 0 vector is preferentially selected as the motion vector.
When, in a range having a small amount of error and difference, difference data ΔEN and ΔEP (FIG. 1) are generated with reference to the motion vectors detected in the comparison circuits 102 and 103, the amount of data of the difference data ΔEN and ΔEP does not become considerably small as compared to the case where the difference data ΔEN and ΔEP are generated on the basis of the 0 vector, and the total amount of data increases because of transmission of the motion vector as significant information.
Thus, in this embodiment, video signals are efficiently transmitted as a whole by preferentially selecting the 0 vector as the motion vector in the comparison circuits 105 and 106.
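The preference for the 0 vector can be sketched as follows. This is a simplified model only: the per-pixel normalization divides by the 256 pixels of a macro unit block, and the threshold value is an assumed stand-in for the "small amount" range the comparison circuits 105 and 106 use:

```python
def select_motion_vector(d00, er, best_mv, threshold=4.0):
    """Prefer the 0 vector when both the zero-displacement difference
    (d00) and the minimum error (er) are small per pixel; otherwise
    keep the detected motion vector."""
    d00_per_pixel = d00 / 256   # equation (1)
    er_per_pixel = er / 256     # equation (2)
    if d00_per_pixel < threshold and er_per_pixel < threshold:
        return (0, 0)           # avoid spending bits on the vector
    return best_mv

print(select_motion_vector(512, 384, (3, -2)))    # small errors -> (0, 0)
print(select_motion_vector(5120, 3840, (3, -2)))  # large errors -> (3, -2)
```

The design intuition is that when the residual is nearly as small without motion compensation, transmitting a nonzero vector would only add significant information to the bitstream.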
The comparison circuits 105 and 106 switch the contacts of the selection circuits 107 and 108 by outputting switching signals to selectively output the 0 vector data MV0 and the detected motion vectors outputted from the comparison circuits 102 and 103 according to the precedence of FIG. 9, and thereby motion vectors MViN and MViP (FIGS. 7(Q) and (R)) can be obtained through the selection circuits 107 and 108.

Motion vector memory circuits 110-113 and 114-117 enter the motion vectors MViN and MViP in response to interframe coding processing control data WC1, WC2, WC4, WC5 and thereby input motion vectors MV1N, MV2N, MV4N, MV5N and MV1P, MV2P, MV4P, MV5P for forward prediction or backward prediction with respect to frame data C1, C2, C4, C5 which are processed at the level 2.

On the other hand, adding circuits 120-122 and 123-125 receive the motion vectors MV1N, MV2N, MV4N, MV5N and MV1P, MV2P, MV4P, MV5P, which have been stored in the motion vector memory circuits 110-113 and 114-117, and output the result of adding of the motion vectors MV1N, MV1P, MV2N and MV2P and the result of adding of the motion vectors MV4N, MV4P, MV5N and MV5P to halving circuits 127 and 128, respectively.
In this embodiment, firstly motion vectors at the level 2 are detected, and then with reference to the result of the detection, motion vectors are detected within a maximum range of 16 columns and 16 rows by previously setting the motion vector detecting range of the frame data B3. Thus, the overall structure of the motion vector detecting circuit 6 is simplified.

To do so, the adding circuits 120-125 and the halving circuits 127 and 128 obtain 1/2 of the results of the adding about the motion vectors MV1N-MV5P, and thereby predicted motion vectors MV3NY and MV3PY which are represented by the following equations are produced:

    MV3NY = 1/2 ((MV1N + MV1P) + (MV2N + MV2P))     (3)

    MV3PY = 1/2 ((-MV1N + MV4P) + (-MV5N + MV5P))   (4)

Then the predicted motion vectors MV3NY and MV3PY are outputted to adding circuits 132 and 133 through selection circuits 130 and 131.

The selection circuits 130 and 131 switch their contacts in response to switching control data BON and thereby selectively output data DON and DOP, having a value of 0, about frame data C1, C2, C4 and C5 to be processed in the level 2, and the predictive motion vectors MV3NY and MV3PY about frame data B3 to be processed in the level 1.

On the other hand, the adding circuits 132 and 133 add the output data MV3NY, DON and MV3PY, DOP of the selection circuits 130 and 131 to control data DM outputted from the vector generating circuit 98.

Therefore, the motion vector is detected in the motion vector detecting region about each macro unit block with reference to the frame data C1, C2, C4, C5, and in the motion vector detecting region displaced by the predicted motion vectors MV3NY and MV3PY with reference to the frame data B3.
Accordingly, motion vectors between frame data AO and B3, and between B3 and A6, which are frames away from each other, can be positively detected within a small motion vector detection range, and motion vectors can be detected with a simple construction.
The motion vector detection range of the forward prediction motion vector MV3P is set by averaging the sum of the forward prediction and backward prediction motion vectors of the frame data C4, C5, and the motion vector detection range of the backward prediction motion vector MV3N is set by averaging the sum of the forward prediction and backward prediction motion vectors of the frame data C1, C2. Thus, motion vectors can be positively detected.

Adding circuits 135 and 136 add the predicted motion vectors MV3NY and MV3PY to motion vectors outputted from the selection circuits 107 and 108 at the level 1 processing, so that motion vectors MV3N and MV3P are obtained. Thus, motion vectors MV3N and MV3P between frame data far away from each other can be detected with a simple construction as a whole.

A counter circuit 138 is composed of a quinary counter which sequentially counts frame pulse signals SFP after it is cleared by the interframe coding processing control data WC5, and the counter circuit 138 outputs motion vector selection data MVSEL which sequentially circulates from value 0 to value 4.

Selection circuits 139 and 140 sequentially switch their contacts in response to the motion vector selection data MVSEL and thereby selectively output the motion vectors MV3N and MV3P, outputted from the adding circuits 135 and 136, and the motion vectors MV1N to MV5P stored in the motion vector memory circuits 110 to 117.
Thus, motion vectors MVN and MVP (FIGS. 7(T) and (U)) can be sequentially obtained through the motion vector detecting circuit 6.
Run-length-Huffman Encoding Circuit

As shown in FIG. 27, the run-length-Huffman encoding circuit 34 provides the forward prediction motion vectors MV1P, MV4P of frame data C1, C4 and the backward prediction motion vectors MV2N, MV5N of frame data C2, C5 (that is, motion vectors which are detected by using frame data AO, B3, A6 as reference frames, hereinafter referred to as single vectors) to a selection circuit 150.
An adding circuit 151 receives the backward prediction motion vectors MV1N, MV4N of frame data C1, C4 and the forward prediction motion vectors MV2P, MV5P of the frame data C2, C5 (that is, motion vectors of frame data which are two frames away from frame data AO, B3, A6, hereinafter referred to as double vectors), and the adding circuit 151 adds a value of 1 to the motion vectors to output when the values of the latter are positive, whereas it subtracts a value of 1 from the motion vectors to output when the values of the latter are negative.
A halving circuit 152 receives an output of the adding circuit 151, and the result of the halving, from which a remainder is removed, is outputted to the selection circuit 150.
That is, the adding circuit 151 and the halving circuit 152 convert the motion vectors MV1N, MV4N, MV2P and MV5P to motion vectors per one frame and output them.
On the other hand, an adding circuit 153 receives the motion vectors MV3P and MV3N of frame data B3 (that is, motion vectors of frame data which are three frames away from frame data AO, A6, hereinafter referred to as triple vectors), and the adding circuit 153 adds a value of 2 to the motion vectors to output when the values of the latter are positive, whereas it subtracts a value of 2 from them to output when the values are negative.
A 1/3 division circuit 154 receives an output from the adding circuit 153 and outputs the result of the 1/3 division, from which a remainder is removed, to the selection circuit 150. That is, the adding circuit 153 and the 1/3 division circuit 154 convert the motion vectors MV3P and MV3N to motion vectors per one frame and output them.
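The adding and dividing stages (circuits 151/152 for double vectors and 153/154/160 for triple vectors) amount to biasing the value by ±1 or ±2, dividing by the frame distance, and discarding the remainder (which, for triple vectors, is kept for the separate remainder coding). A sketch under that reading of the text; the sign handling for negative vectors is an assumption:

```python
def per_frame_double(v):
    """Circuits 151/152: bias a two-frame ('double') vector by +/-1,
    halve, and drop the remainder."""
    biased = v + 1 if v > 0 else (v - 1 if v < 0 else 0)
    q = abs(biased) // 2
    return q if biased >= 0 else -q

def per_frame_triple(v):
    """Circuits 153/154/160: bias a three-frame ('triple') vector by
    +/-2, divide by 3, drop the remainder but keep it for the
    remainder coding of FIG. 31."""
    biased = v + 2 if v > 0 else (v - 2 if v < 0 else 0)
    q = abs(biased) // 3
    remainder = abs(biased) % 3
    return (q if biased >= 0 else -q), remainder

print(per_frame_double(3))   # -> 2
print(per_frame_triple(4))   # -> (2, 0)  a value 3n+1 gives remainder 0
print(per_frame_triple(5))   # -> (2, 1)  a value 3n+2 gives remainder 1
```

With this biasing, the remainder classes line up with the three cases described later for the serialized remainder bits.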
In this manner, the motion vectors which are inputted to the selection circuit 150 are set to values whose appearance probabilities are equal, and thereby each motion vector is optimized with ease.
More specifically, as shown in FIG. 28, in sequentially continuous frames FM, F1, F2 and F3, motion vectors V1, V2 and V3 which are referred to the frame FM have the relationship of the following equations when the frames FM, F1, F2 and F3 are strongly correlated:

    V2 = 2V1    (5)

    V3 = 3V1    (6)

Accordingly, a motion vector VX of frames which are x frames away from each other is generally represented by the following equation:

    VX = xV1    (7)

This will be understood from the fact that the appearance probability VX(a) of the motion vector VX is expressed by multiplying the appearance probability V1(a) of the motion vector V1 by x along the horizontal axis, when the appearance probability is statistically expressed with the motion vector represented by a.
Thus, when the motion vector VX is divided by x with the remainder removed and is then expressed using the value a, it is understood that the appearance probability VX(a) of the motion vector VX is equal to the appearance probability V1(a) of the motion vector V1, and the motion vectors can be optimized by using the same table.
According to this principle, the run-length-Huffman encoding circuit 34 provides a selected output of the selection circuit 150 to a read only memory 156 and outputs a data DV1 stored in the read only memory 156 by using the selected output as an address.
As illustrated in FIG. 30, the read only memory 156 is designed to output, in response to input data, variable length codes such that the length of the codes becomes sequentially longer with an input data having a value of 0 placed centrally, and thereby the motion vectors converted per one frame are coded in the optimized manner.
That is, when values of motion vectors are statistically detected, a motion vector having a value of 0 has the highest appearance probability, and the appearance probability becomes smaller as the values of the motion vectors become larger.
Thus, in this embodiment, coding has been carried out so that the motion vectors having a value of 0 have the shortest code, and thereby the amount of data which is necessary to send motion vectors is, as a whole, reduced, so that motion video signals are efficiently transmitted.
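The table of FIG. 30 can be modelled as a variable length code whose code length is shortest at 0 and grows with the magnitude of the input. The toy table below is illustrative only; the actual code words are those stored in the read only memory 156:

```python
# Illustrative center-peaked variable length code: value 0 gets the
# shortest code word; code length grows as |value| grows, mirroring
# the statistics of per-frame motion vectors.
VLC_TABLE = {
    0: "1",
    1: "010", -1: "011",
    2: "00100", -2: "00101",
    3: "00110", -3: "00111",
}

def encode_vector(v):
    """Return the code word (DV1) and its code length (DL1)."""
    code = VLC_TABLE[v]
    return code, len(code)

print(encode_vector(0))   # -> ('1', 1)
print(encode_vector(-2))  # -> ('00101', 5)
```

Because the table is symmetric about 0, the same table can serve single, double and triple vectors once they have been normalized per one frame.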
Moreover, the read only memory 156 outputs a code length data DL1, representing a code length of the output data DV1, together with the data DV1.

A remainder output circuit 160 performs division of the output of the adding circuit 153 by a value 3, and the data of the remainder is outputted to a read only memory circuit 162.

As shown in FIG. 31, the read only memory circuit 162 outputs a remainder data DV2 having a value 0 with a code length 1 in response to an input data having a value 0, whereas it outputs remainder data DV2 having values 10 and 11 with a code length 2 for input data of values 1 and 2.

The input data are remainders of the triple vectors which have been converted per one frame and have undergone the adding-and-subtracting operation in the adding circuit 153. Thus, the value 0 has the largest appearance probability, and the appearance probability becomes smaller as the value grows.

Accordingly, in this embodiment, the amount of data necessary for sending motion vectors is reduced as a whole by outputting remainder data DV2 having the shortest code length, and thereby moving picture video signals are efficiently transmitted.
The read only memory circuit 162 outputs a code length data DLL2, representing the code length of the remainder data DV2, synchronously with the remainder data DV2.
A selection circuit 164 switches its contacts synchronously with the selection circuit 150 to selectively output the least significant bit outputted from the adding circuit 151 and the remainder data DV2.

That is, the selection circuit 164 stops outputting with respect to a single vector. The selection circuit 164 selectively outputs the data of the least significant bit, inputted in response to a double vector, to a parallel-serial conversion circuit 166. Thus, the selection circuit 164 outputs to the parallel-serial conversion circuit 166 a selected output having a value of 1 when the double vector has an even value, whereas it outputs a selected output of a value 0 to the parallel-serial conversion circuit 166 when the double vector has an odd value or a value 0.

The selection circuit 164 outputs a remainder data DV2 in response to a triple vector.
A selection circuit 168 receives input data DLL0 having a value 0 and DLL1 having a value 1 and the code length data DLL2, and the selection circuit 168 outputs code length data DL2 representing a code length of the selected output data DJ outputted from the selection circuit 164.
An adding circuit 170 outputs the result of adding of the code length data DL1 and DL2 to the parallel-serial conversion circuit 166.
As illustrated in FIG. 32, the parallel-serial conversion circuit 166 adds the output data DJ of the selection circuit 164 and the addition data of the adding circuit 170 to the output data DV1 of the read only memory 156 and then converts the resulting data to serial data.
Thus, in response to the single vector, the output data DV1, outputted from the read only memory 156, and the code length data DL1 of the output data DV1 are converted to serial data and then outputted through the parallel-serial conversion circuit 166.
In response to the double vector which has an even value, a remainder bit b1 having a value 0 is added to the output data DV1 outputted from the read only memory 156; an addition data, having a value 1 added to the code length data DL1, is further added; and the resultant data is then converted to serial data.
When the double vector has an odd value or a value 0, the remainder bit b1 of a value 1 is added to the output data DV1; an addition data, having a value 1 added to the code length data DL1, is further added; and the resultant data is then converted to serial data.
In response to the triple vector which has a value of 0 or a value ±(3n + 1) (n = 0, 1, 2, ...), the remainder bit b1 of a value 0 is added to the output data DV1; an addition data, having a value 1 added to the code length data DL1, is further added; and the resultant data is then converted to serial data.
When the triple vector has a value ±(3n + 2) (n = 0, 1, 2, ...), the remainder bits b1 and b2, having values 1 and 0, respectively, are added to the output data DV1; an addition data, having a value 2 added to the code length data DL1, is further added; and the resultant data is then converted to serial data. When the triple vector has a value ±(3n + 3) (n = 0, 1, 2, ...), the remainder bits b1 and b2, having values 1 and 1, respectively, are added to the output data DV1; an addition data, having a value 2 added to the code length data DL1, is further added; and the resultant data is then converted to serial data.
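The serialization of FIG. 32 can be summarized as the code word DV1 followed by 0, 1 or 2 remainder bits depending on the vector kind. The sketch below is an assumed reading of that description (bit values follow the text as reconstructed here, and the function names are illustrative):

```python
def serialize(kind, dv1, dl1, double_even=None, triple_remainder=None):
    """Compose the serial data for one motion vector: the variable
    length code DV1 (length DL1) plus remainder bits.  A single
    vector carries no remainder bit; a double vector carries one;
    a triple vector carries one or two depending on its remainder."""
    bits = dv1
    length = dl1
    if kind == "double":
        bits += "0" if double_even else "1"
        length += 1
    elif kind == "triple":
        bits += {0: "0", 1: "10", 2: "11"}[triple_remainder]
        length += 1 if triple_remainder == 0 else 2
    return bits, length

print(serialize("single", "010", 3))                      # ('010', 3)
print(serialize("triple", "010", 3, triple_remainder=1))  # ('01010', 5)
```

The returned length corresponds to the output of the adding circuit 170 (DL1 plus DL2), which tells the parallel-serial conversion circuit 166 how many bits to shift out.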
Thus, on the receiving side of the transmission, it is possible to judge whether the data of the motion vector thus variable length coded is a single, a double or a triple vector with reference to the forward prediction reference index PID, backward prediction reference index NID and temporary index TR, and the motion vector can be decoded on the basis of the result of the judgement.

Thus, single, double and triple vectors can be variable length coded, with a preference to a vector having the highest appearance probability, by using one kind of table stored in the read only memory 156, and thereby motion vectors can be optimized with a simple construction.

The motion vectors can be transmitted by such coding processing with a detected accuracy maintained, and video signals can be efficiently sent with degradation in picture quality effectively avoided.

Adaptive Prediction Circuit

As illustrated in FIG. 10, an adaptive prediction circuit 10 selectively predicts frame data B3, C1, C2, C4 and C5 with reference to the forward prediction reference index PID, backward prediction reference index NID and temporary index TR.

More specifically, in the adaptive prediction circuit 10, read only memory circuits 142, 143 and 144 receive the temporary index TR as shown in FIG. 11 to generate intraframe coding processing control data PINTRA (FIG. 11(A)) and interframe coding processing control data WB3 and WC5, respectively.
Read only memory circuits 146 and 147 receive the forward prediction reference index PID and the backward prediction reference index NID to generate switching control data SW3 and SW4 (FIGS. 11(B) and (C)), of which the logic levels fall when the values of the forward prediction reference index PID and the backward prediction reference index NID are 0.
An OR circuit 148 receives the intraframe coding processing control data PINTRA and the interframe coding processing control data WC5 to produce frame memory control data WAP.
Thus, the adaptive prediction circuit 10 is designed to operate on the basis of the control data generated in the read only memory circuits 142 to 147 and the OR circuit 148.

A mean value memory circuit 150 receives picture data DVN, which is outputted from the motion vector detecting circuit 6 synchronously with the frame pulse signal SFP, to obtain a mean value of the picture data of the luminance signals and chrominance signals for each macro unit block, and then the mean value data are outputted to the transmission data composition circuit 32 (FIG. 3) as direct current data DC.
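The mean value formed for each macro unit block is simply the average of its 256 pixel values. A minimal array-based sketch (the function name is a hypothetical stand-in for the mean value memory circuit 150):

```python
def macro_block_dc(frame, bx, by, size=16):
    """Mean (direct current, DC) value of one macro unit block,
    as formed per luminance or chrominance signal."""
    total = sum(frame[by + y][bx + x]
                for y in range(size) for x in range(size))
    return total // (size * size)

frame = [[100] * 16 for _ in range(16)]
print(macro_block_dc(frame, 0, 0))  # -> 100
```

This DC value is what later serves as the intraframe predicted data, so that only the deviation of each pixel from the block mean needs to be transmitted.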
In addition, the mean value memory circuit 150 outputs the direct current data DC of frame data AO, A6, ... as predicted data DPRI to the subtracting circuit 8 (FIG. 3) through the selecting circuit 152 at the timing of inputting the frame data AO, A6, ... to be intraframe processed to the subtracting circuit 8.
Thus, difference data DZ from the mean value of the picture data DVN can be obtained about the frame data AO, A6, ... through the subtracting circuit 8, and after subsequently being data compressed through the discrete cosine transformation circuit 12, the multiplication circuit 14, the requantizing circuit 18 and the run-length Huffman encoding circuit 30, the difference data DZ is outputted to the transmission data composition circuit 32.
On the other hand, a backward prediction frame memory circuit 154, a forward prediction frame memory circuit 155 and an interframe memory circuit 156 receive picture data DF reconstructed in the adding circuit 28, and store frame data of predicted frames which serve as references of the backward and forward prediction.
That is, the backward prediction frame memory circuit 154 enters picture data DF into it when the intraframe coding processing control data PINTRA rises.
Thus, through the backward prediction frame memory circuit 154, there can be provided picture data DNVF (FIG. 11(G)) in which, after frame data SAO which is reconstructed is outputted for one frame cycle, frame data SA6 similarly reconstructed continues for the subsequent 6 frame cycles, and then frame data SA12 reconstructed lasts for the subsequent 12 frame cycles.
On the other hand, the forward prediction frame memory circuit 155 enters frame data which is outputted from the backward prediction frame memory circuit 154 when the frame memory control data WAP rises.
Thus, through the forward prediction frame memory circuit 155, there can be provided picture data DPVF (FIG. 11(H)) in which the reconstructed frame data SAO lasts for the first 5 frame cycles among the 6 frame cycles during which the reconstructed frame data SA6 is outputted from the backward prediction frame memory circuit 154. The reconstructed frame data SA6 continues for the subsequent 6 frame cycles, and then the frame data SA12 reconstructed lasts for the subsequent 12 frame cycles.
The interframe memory circuit 156 enters picture data DF into it when the interframe coding control data WB3 rises.
In this manner, picture data DINTF (FIG. 11(I)) is obtained through the interframe memory circuit 156, the picture data DINTF having the reconstructed fourth frame data SB3, SB9 and SB15 each lasting for 6 frame cycles.
Selection circuits 158 and 159 receive the picture data DNVF, DINTF and DPVF, DINTF and switch their contacts according to the switching control data SW4 and SW3, so that the frame data SAO, SA6, SB3, ... which are referred to for forward and backward prediction are sequentially outputted to the following variable reading memory circuits 160 and 161.
That is, the selection circuits 158 and 159 output reconstructed frame data SA6 and SAO to the variable reading memory circuits 160 and 161 at the timing of inputting the fourth frame data B3 of the group of frames to the adaptive prediction circuit 10.

Then, the selection circuits 158 and 159 output reconstructed frame data SB3 and SAO to the variable reading memory circuits 160 and 161 at the timing of inputting the second and the third frame data C1 and C2 of the group of frames to the adaptive prediction circuit 10, whereas the selection circuits 158 and 159 output reconstructed frame data SAO and SB3 at the timing of inputting the fifth and the sixth frame data C4 and C5.

Variable reading memory circuits 160 and 161 shift the inputted frame data by the amount of the motion vectors MVN and MVP detected in the motion vector detecting circuit 6 and then output them to a selection circuit 163.

Thus, backward prediction frame data FN and forward prediction frame data FP (FIG. 1) can be obtained through the variable reading memory circuits 160 and 161 by displacing the reconstructed frame data by a distance of the motion vectors MVN and MVP, respectively, and by outputting them.

On the other hand, an adding circuit 164 adds the frame data outputted from the variable reading memory circuits 160 and 161 and then outputs them to the selection circuit 163 through the halving circuit 165. Thus, interpolative predicted result frame data FNP (FIG. 1), which linearly interpolates backward predicted result frame data FN and forward predicted result frame data FP, can be obtained through the halving circuit 165.

Subtracting circuits 165, 166 and 167 subtract the frame data, outputted from the variable reading memory 160, the variable reading memory 161 and the halving circuit 165, from frame data which are the picture data DVN, respectively.

Thus, the backward prediction difference data ΔFN, the forward prediction difference data ΔFP and the interpolative prediction difference data ΔFNP (FIG. 1) can be obtained for each macro unit block through the subtracting circuits 165, 166 and 167, respectively.

Absolute value adding circuits 168, 169 and 170 change the difference data, outputted from the subtracting circuits 165, 166 and 167, to absolute values, which are accumulated for each macro unit block and then outputted.
The backward prediction difference data ΔFN, the forward prediction difference data ΔFP and the interpolative prediction difference data ΔFNP (FIG. 1) can be detected in amount of data through the absolute value adding circuits 168, 169 and 170, respectively.

A comparison circuit 171 receives the sum of the absolute values of each of the difference data ΔFN, ΔFP and ΔFNP to detect a minimum value thereof.
In addition, the comparison circuit 171 outputs a control signal to the selection circuit 163, and thereby backward predicted result frame data FN, forward predicted result frame data FP or interpolative predicted result frame data FNP, of which difference data ΔFN, ΔFP or ΔFNP is a minimum in amount of data, is selected and outputted to the selecting circuit 152.

Thus, in the intraframe coding processing, mean values of frame data AO and A6 are outputted as predicted data DPRI to the subtracting circuit 8 through the selecting circuit 152, whereas in the interframe coding processing, frame data FN, FP or FNP, of which difference data ΔFN, ΔFP or ΔFNP is a minimum in the amount of data, is selected for each macro unit block as predicted data DPRI and is outputted to the subtracting circuit 8.
Thus, the difference data DZ between the selectively predicted backward predicted result frame data FN, forward predicted result frame data FP or interpolative predicted result frame data FNP and the frame data B3, C1, C2, C4 and C5 to be encoded can be obtained through the subtracting circuit 8. The difference data DZ is sequentially data compressed through the discrete cosine transformation circuit 12, the multiplication circuit 14, the requantizing circuit 18 and the run-length Huffman encoding circuit 30 and is then outputted to the transmission data composition circuit 32.
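The selection among backward, forward and interpolative prediction reduces to choosing the candidate whose accumulated absolute difference against the block to be encoded is smallest. A sketch with illustrative names, using flat pixel lists for brevity:

```python
def choose_prediction(block, fn, fp):
    """Pick the prediction (backward FN, forward FP, or interpolative
    FNP formed by averaging FN and FP as the halving circuit does)
    with the minimum sum of absolute differences against the macro
    unit block to be encoded."""
    fnp = [(a + b) // 2 for a, b in zip(fn, fp)]
    candidates = {"FN": fn, "FP": fp, "FNP": fnp}
    costs = {name: sum(abs(x - p) for x, p in zip(block, pred))
             for name, pred in candidates.items()}
    best = min(costs, key=costs.get)
    return best, candidates[best]

block = [10, 12, 14, 16]
fn = [9, 11, 13, 15]     # backward prediction: small residual
fp = [20, 20, 20, 20]    # forward prediction: larger residual
best, pred = choose_prediction(block, fn, fp)
print(best)  # -> 'FN'
```

The chosen candidate becomes the predicted data DPRI; only its residual (and the identification of the chosen mode) needs to be transmitted.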
A selection circuit 172 is controlled by the comparison circuit 171 to switch its contact, and thereby difference data ΔINTRA which is the smallest in the amount of data is selected from the difference data ΔFN, ΔFP and ΔFNP and outputted to a comparison circuit 174.

A subtracting circuit 176 receives the picture data DVN and the direct current data DC and outputs the difference between them to an absolute value adding circuit 177.
Similarly to the absolute value adding circuits 168 to 170, the absolute value adding circuit 177 accumulates absolute values of the inputted data for each macro unit block and then outputs the accumulated sum ΔINTER to the comparison circuit 174.
The comparison circuit 174 outputs a switching signal for each macro unit block on the basis of the result of comparison between the accumulated sum ΔINTER and the difference data ΔINTRA.
An OR circuit 178 receives the switching signal outputted from the comparison circuit 174 and the intraframe coding processing control data PINTRA to control the contacts of the selecting circuit 152 to switch.
When, in frame data B3, C1, C2, C4 and C5 which are assigned to be interframe coded, there is a macro unit block which may be sent in a smaller amount of data as a whole by intraframe coding processing, the comparison circuit 174 outputs a switching signal to the selecting circuit 152 through the OR circuit 178 according to the result of the comparison between the accumulated sum ΔINTER and the difference data ΔINTRA, so that the intraframe coding processing is selected for the macro unit block.
That is, the accumulated sum ΔINTER is accumulated for each macro unit block after the difference data between the picture data DVN and the direct current data DC is changed to an absolute value, and hence the accumulated sum ΔINTER represents an amount of data produced when frame data B3, C1, C2, C4 and C5 assigned to be interframe coded are intraframe coded.

Thus, it is possible to judge, by obtaining the result of comparison between the accumulated sum ΔINTER and the difference data ΔINTRA, whether or not the intraframe coding processing of each macro unit block provides a smaller amount of data to be transmitted.

Even frame data B3, C1, C2, C4 and C5 which are assigned to be interframe coded may be sent in a smaller amount of data as a whole, on the basis of that result of comparison, by intraframe coding processing macro unit blocks thereof.
As shown in FIG. 12, the selecting circuit 152 selects and outputs the direct current data DC when, in frame data B3, C1, C2, C4 and C5 which are assigned to be interframe coded, there is a macro unit block which may be sent in a smaller amount of data as a whole by intraframe coding processing. Thus, transmission frame picture data of the macro unit block which has been intraframe coded is transmitted to a destination of the transmission.
In this operation, the comparison circuit 174 preferentially selects the intraframe coding processing within a range where the amount of data of each of the accumulated sum ΔINTER and the difference data ΔINTRA is small, and thereby error transmission can be effectively avoided and a high quality video signal can be transmitted.
The transmission of video signals interframe coded has a problem in that error transmission cannot be avoided when a transmission error is generated in frame data which are referred to for the interframe coding processing.

Accordingly, even frame data B3, C1, C2, C4 and C5 which are assigned to be interframe coded are preferentially intraframe coded for transmission not only when a smaller amount of data as a whole is transmitted by the intraframe coding processing as described above, but also when a small amount of data is provided by both the intraframe coding processing and the interframe coding processing; thereby an increase of the amount of data and error transmission can be effectively avoided and a high quality video signal can be transmitted.
A selection circuit 180 receives and selectively outputs, according to the output signal of the OR circuit 178, the output data of the comparison circuit 171 (which is an identification data having one of the values 1, 2 and 3 representing the backward prediction, the forward prediction or the interpolative prediction, respectively) and an identification index PINDEX (which in this case is an identification data of a value 0) indicating a macro unit block intraframe coded, and thereby identification data PINDEX representing the selectively predicted prediction result can be obtained through the selection circuit 180.
Transmission Data Composition Circuit

Synchronously with the frame pulse signal SFP, the transmission data composition circuit 32 outputs the output data of the run-length Huffman encoding circuits 30 and 34, the prediction index PINDEX, the forward prediction reference index PID, the backward prediction reference index NID, the temporary index TR, the frame group index GOF, and the control information of the weighting control circuit 16 and the data amount control circuit 20 to the reordering circuit 33 in a predetermined format, and thereby transmission frame data DATA is constructed.
That is, as shown in FIGS. 13 and 14, the transmission data composition circuit 32 adds a macro unit header HM to picture data which is outputted in units of a macro unit block from the run-length Huffman encoding circuit 30 (FIG. 13(C)).
With respect to the frame data intraframe coded, a predictive index PI (which is produced with reference to the identification data PINDEX) representing the intraframe coding processing, the backward prediction processing, the forward prediction processing or the interpolative prediction processing is added to the macro unit header HM to follow a header TYPE for identifying each macro unit block (FIG. 14(A)).
In addition, according to the control information of the data amount control circuit 20, data QUANT which represents the quantized stepsize of each macro unit block is added, and then motion vector data MVD-P and MVD-N, which represent the forward prediction motion vector and the backward prediction motion vector, respectively, are added.
With respect to luminance signals Y1, Y2, Y3, Y4 and chrominance signals CR, CB assigned to a macro unit block, additional data CBP, representing whether or not those signals have data to be transmitted, is added.
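The macro unit header fields listed above (TYPE, PI, QUANT, MVD-P, MVD-N, CBP) can be collected into a small sketch. The field names follow the text, but the representation, a dictionary with CBP packed as a 6-bit flag word for Y1 to Y4, CR and CB, is hypothetical and is not the patent's bitstream layout.

```python
# Hypothetical sketch of assembling a macro unit header HM. The PI values
# follow the identification data PINDEX described in the text:
# 0 = intraframe, 1 = backward, 2 = forward, 3 = interpolative prediction.

def build_macro_unit_header(block_type, pi, quant, mvd_p, mvd_n, coded_blocks):
    """coded_blocks: six booleans for Y1, Y2, Y3, Y4, CR, CB."""
    cbp = 0
    for present in coded_blocks:          # pack CBP as a 6-bit flag word
        cbp = (cbp << 1) | int(present)
    return {
        'TYPE': block_type,               # identifies the macro unit block
        'PI': pi,                         # prediction index
        'QUANT': quant,                   # quantized stepsize of this block
        'MVD_P': mvd_p,                   # forward prediction motion vector
        'MVD_N': mvd_n,                   # backward prediction motion vector
        'CBP': cbp,                       # which of Y1..Y4, CR, CB carry data
    }

hm = build_macro_unit_header('MB', pi=2, quant=8, mvd_p=(1, -2), mvd_n=(0, 0),
                             coded_blocks=[True, True, False, False, True, False])
print(hm['CBP'])  # 0b110010 -> 50
```

A decoder can then skip the transform data of any block whose CBP bit is zero, which is the point of transmitting CBP at all.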
On the other hand, in the macro unit blocks of frame data to be interframe coded (FIG. 14(B)), the header TYPE for identifying each macro unit block is followed by the DC level data DCM-Y of the luminance signal detected in the adaptive prediction circuit 10 and DCM-U and DCM-V of the chrominance signals (DC), and then data QUANT representing a quantized stepsize is added.
Thus, each macro unit block can be decoded on the basis of the macro unit header HM by adding a macro unit header HM for each macro unit block.

On the other hand, a group of block units (FIG. 13(B)) is constructed by placing macro unit blocks in 3 columns and 11 rows, and as illustrated in FIG. 15, a group-of-block-unit header HGOB is added at the head of each group of block units.
The group-of-block-unit header HGOB includes an identification header GBSC representing the start of each group of block units, followed by an identification header GN for identifying the group of block units.
Then, a frame of transmission frame data is constructed by assembling groups of block units in columns and 2 rows (FIG. 13(A)), and a picture header PH is added at the head of each transmission frame data.
As shown in FIG. 16, in the picture header PH a start index PSC representing the head of each group of frames is added with reference to a frame group index GOF outputted from the motion vector detecting circuit 6, and subsequently a current index CID representing the sequence of frame data in each group of frames is added with reference to the temporary index TR.
Moreover, a mode index PM for identifying the intraframe coding processing, the interframe coding processing at the level 1 or the interframe coding processing at the level 2 is added, and then the forward prediction reference index PID and the backward prediction reference index NID are added.
Thus, for each transmission frame data a mode index PM, identifying the intraframe coding processing, the interframe coding processing at the level 1 or the interframe coding processing at the level 2, is added, as well as the forward prediction reference index PID representing the frame data for forward prediction, and the backward prediction reference index NID representing the frame data for backward prediction. Thus, the transmission frame data are easily decoded with reference to the forward prediction reference index PID, the backward prediction reference index NID and the mode index PM.
In this manner, the receiving unit not only decodes the transmission frame data with ease but also easily decodes it even when it is transmitted in a format different from the format of this embodiment in the length of the group of frames, the frames processed at the levels 1 and 2, etc. Thus, the moving picture transmission system is as a whole enhanced in operability, and high quality video signals can be transmitted with ease.
Structure of the Receiving Unit
In FIG. 17, 200 generally designates a receiving unit, and reconstructed data DPB which are obtained by reproducing a compact disc are received by a receiving circuit 201.
The receiving circuit 201 detects the head of each group of frames with reference to the start index PSC and then outputs the result of the detection together with the picture data DVPB.
As shown in FIG. 18, a reordering circuit 203 is provided by this operation with a picture data DVPB (FIG. 18(A)) having continuous frame data PAO, PB3, PC1, PC2, ... sequentially intraframe coded or interframe coded.
The reordering circuit 203 outputs transmission frame data PB3, PC1, PC2, ... interframe coded with a delay of 7 frame cycles. Thus, the reordering circuit 203 reorders the frame data PAO, PB3, PC1, PC2, ... in the sequence of the intraframe coding processing and interframe coding processing performed in the transmitting unit 1 (that is, the same sequence as the sequence of decoding) and outputs them (FIG. 18(B)).
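A rough model of this reordering can be written as follows, assuming frame labels like those in the text ('A' for intraframe coded frames, 'B' and 'C' for interframe coded frames) and treating the 7-frame-cycle delay as a slot offset. The scheduling code itself is an assumption, not the circuit:

```python
# Sketch (not the patent's circuit): intraframe frames pass straight
# through, interframe frames are held back by `delay` slots, and the
# output is read off in slot order.

def reorder_for_decoding(received, delay=7):
    timeline = {}
    for slot, frame in enumerate(received):
        target = slot if frame.startswith('A') else slot + delay
        timeline.setdefault(target, []).append(frame)
    order = []
    for t in sorted(timeline):
        order.extend(timeline[t])
    return order

sent = ['A0', 'B3', 'C1', 'C2', 'C4', 'C5', 'A6',
        'B9', 'C7', 'C8', 'C10', 'C11', 'A12']
print(reorder_for_decoding(sent))
```

Delaying only the interframe coded frames lets the next group's intraframe frame (A6, A12, ...) overtake them, which is the decoding order the text describes.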
A buffer circuit 204 stores the picture data DVPBN outputted from the reordering circuit 203 and then outputs it to a subsequent separation circuit 206 at a predetermined transmission rate.
The separation circuit 206 reconstructs the frame group index GOF, the forward prediction reference index PID, the backward prediction reference index NID, the temporary index TR, the prediction index PINDEX, the data DC (DCM-Y, DCM-U, DCM-V), QUANT, and the motion vector data MVD-P and MVD-N with reference to the picture header PH, the group-of-block-unit header HGOB and the macro unit header HM, and then the separation circuit 206 outputs them to predetermined circuits.
The separation circuit 206 outputs the picture header PH, the group-of-block-unit header HGOB and the macro unit header HM to a control circuit 207 at this time, so that the control circuit 207 obtains reconstructed data, having continuous frame data for each group of frames, by controlling a compact disc drive reproducing system.

That is, in normal reproduction mode, data which are sequentially recorded on the compact disc are reproduced, as described in connection with FIG. 18, to obtain the picture data DVPBN (FIG. 19(B)).
In reverse reproduction mode, an optical pickup is moved in a direction reverse to the direction in the normal reproduction mode while the compact disc is rotated in the same direction as in the normal reproduction. Thus, a picture data DVPBN, which arranges groups of frames in an order reverse to the order in the normal reproduction, is obtained (FIG. 19(A)).

In recording, the first group of frames is inputted to the receiving unit 200 and then the second group of frames (PA6-PC11) and the third group of frames (PA12-PC17) are continuously inputted, whereas in the reverse reproduction, the third group of frames (PA12-PC17) is inputted and is followed by the second group of frames (PA6-PC11) and the first group of frames.

Since the reordering circuit 203 delays frame data interframe coded 7 frame cycles, the frame data PA6 is delayed by 6 frame cycles from the frame data PA12, and then subsequent frame data (PB15-PC17) follow the frame data PA12, and subsequent frame data (PB9-PC11) follow the frame data PAO and the frame data PA6 (FIG. 19(B)).
Also in the reverse reproduction mode, as in the normal reproduction mode, frame data are arranged through the reordering circuit 203 so that frame data intraframe coded are continuously followed by frame data processed at the levels 1 and 2, and then by frame data intraframe coded.
Thus, in this embodiment, the frame group index GOF, the forward prediction reference index PID, the backward prediction reference index NID, the temporary index TR, etc. are added to each of the frame data and transmitted, and hence transmission frame data can be easily decoded in the reverse reproduction as in the normal reproduction by subsequently decoding them in the subsequent run-length Huffman inverse coding circuit 210, inverse requantization circuit 211, inverse multiplying circuit 212, discrete cosine inverse transformation circuit 213 and prediction circuit 214 with reference to these indexes.
The separation circuit 206 removes the picture header PH, the group-of-block-unit header HGOB and the macro unit header HM from the picture data DVPBN and then outputs them to the run-length Huffman inverse coding circuit 210.

The run-length Huffman inverse coding circuit 210 performs a processing inverse to the processing of the run-length Huffman coding circuit 30 (FIG. 3), so that the inputted data of the run-length Huffman coding circuit 30 are reproduced in the receiving unit 200.
An inverse requantization circuit 211 receives output data of the run-length Huffman inverse coding circuit 210 and the data QUANT, representing the quantized stepsize, added to each macro unit header HM, and performs an inverse requantizing processing, inverse to the processing of the requantizing circuit 18, as in the inverse requantizing circuit 22 (FIG. 3), to thereby reproduce the input data of the requantizing circuit 18 in the receiving unit 200.
On the other hand, the inverse multiplying circuit 212 receives output data of the inverse requantization circuit 211 and performs a multiplication operation inverse to the operation of the multiplying circuit 14 (FIG. 3) with reference to data added to each macro unit header HM, to thereby reconstruct the input data of the multiplying circuit 14 in the receiving unit 200.
The discrete cosine inverse transformation circuit 213 performs on output data of the inverse multiplying circuit 212 an inverse transformation, inverse to the transformation of the discrete cosine transformation circuit 12 (FIG. 3). Thus, the input data of the discrete cosine transformation circuit 12 is reconstructed.
The adding circuit 218 adds the predicted data DPRI, outputted from the adaptive prediction circuit 214, to output data of the discrete cosine inverse transformation circuit 213 and outputs the result to the adaptive prediction circuit 214.
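Numerically, the chain around the adding circuit 218, that is inverse requantization (211), inverse discrete cosine transformation (213), then addition of the predicted data DPRI, can be sketched as follows. A 1-D orthonormal inverse DCT over a two-sample block stands in for the real 2-D transform, and all names and values are illustrative assumptions:

```python
# Minimal numeric sketch (not the patent's circuits) of the decoding chain:
# dequantize the transmitted levels, inverse-transform the coefficients,
# then add the prediction to obtain the reconstructed samples.

import math

def idct_1d(coeffs):
    """Inverse DCT-II (orthonormal) for a short 1-D block."""
    n = len(coeffs)
    out = []
    for x in range(n):
        s = coeffs[0] / math.sqrt(n)
        for u in range(1, n):
            s += math.sqrt(2 / n) * coeffs[u] * math.cos(
                math.pi * (2 * x + 1) * u / (2 * n))
        out.append(s)
    return out

def decode_block(levels, quant_step, predicted):
    coeffs = [lv * quant_step for lv in levels]          # inverse requantization
    residual = idct_1d(coeffs)                           # inverse transform
    return [p + r for p, r in zip(predicted, residual)]  # adding circuit

recon = decode_block([4, 0], quant_step=2, predicted=[10.0, 10.0])
print([round(v, 3) for v in recon])  # → [15.657, 15.657]
```

Only the DC coefficient is non-zero here, so the residual is flat and the reconstruction is the prediction plus a constant offset.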
The run-length Huffman inverse coding circuit 220 decodes the forward prediction motion vector MVP and the backward prediction motion vector MVN, which have been variable length coded in the run-length Huffman coding circuit 34 of the transmitting unit 1, and outputs them to the adaptive prediction circuit 214.
The adaptive prediction circuit 214 reconstructs the predicted data DPRI outputted from the adaptive prediction circuit 10 of the transmitting unit 1, with reference to the output data DTIN of the adding circuit 218, the motion vectors MVP and MVN, etc.

In this manner, the original frame data transmitted can be reconstructed, and hence video data DV can be reconstructed, through the adaptive prediction circuit 214. That is, the adaptive prediction circuit 214 outputs a direct current level data DC to the adding circuit 218 as a prediction data DPRI with respect to the frame data AO, A6, ... intraframe coded.
Thus, the frame data AO, A6 intraframe coded are reconstructed through the adding circuit 218.
Similarly to the adaptive prediction circuit 10 of the transmitting unit, the adaptive prediction circuit 214 includes a forward prediction frame memory circuit, a backward prediction frame memory circuit and an interframe memory circuit, and stores the frame data AO, A6 reconstructed in the forward prediction frame memory circuit and the backward prediction frame memory circuit (FIGS. 18(C) and (D)) to generate a predicted data of the frame data B3.
Thus, the frame data B3 interframe coded at the level 1 can be reconstructed through the adding circuit 218.
Moreover, the adaptive prediction circuit 214 stores the frame data B3 reconstructed in the interframe memory circuit (FIG. 18(E)) to produce predicted data DPRI of the frame data C1, C2, C4 and C5, and thereby the frame data C1, C2, C4 and C5 interframe coded at the level 2 can be reconstructed through the adding circuit 218.
In addition, the adaptive prediction circuit 214 rearranges and outputs the reconstructed frame data AO, A6, B3, ... in the original order (FIG. 18(F)).

The receiving unit 200 includes an interpolating circuit (not shown) and reconstructs the original video signal VDIN by a technique of interpolative operation with reference to the frame data. Thus, video signals which are highly efficiently coded and recorded on a compact disc are reconstructed.

(2-8) Adaptive Prediction Circuit
As shown in FIG. 20, the adaptive prediction circuit 214 reconstructs the predicted data DPRI with reference to the forward prediction reference index PID, the backward prediction reference index NID, the temporary index TR and the direct current level data DC which have been separated in the separation circuit 206.

More specifically, the adaptive prediction circuit 214 provides the direct current level data DC to a selection circuit 230, which switches its contacts with reference to the decoded identification data PINDEX (identification data of a macro unit block which has undergone the backward prediction processing, the forward prediction processing, the interpolative prediction processing or the intraframe coding processing). The adaptive prediction circuit 214 outputs the direct current level data DC to the adding circuit 218 at the timing of inputting frame data of an intraframe coded macro unit block to the adding circuit 218.
That is, the direct current level data DC is subsequently outputted as the predicted data DPRI in units of a macro unit block for each of the intraframe coded frame data PAO, PA6, ...
In addition, with respect to the macro unit block which has been preferentially intraframe coded in spite of assignment of the interframe coding processing, the direct current level data DC is outputted to the adding circuit 218.
Thus, with respect to intraframe coded frame data PAO, PA6, ... and macro unit blocks on which the intraframe coding processing has been preferentially selected although the interframe coding processing has been assigned, the original data can be reconstructed by adding output data of the discrete cosine inverse transformation circuit 213 and the predicted data DPRI through the adding circuit 218.

The adaptive prediction circuit 214 provides the output data DTIN thus reconstructed in the adding circuit 218 to a backward prediction frame memory circuit 232 and a forward prediction frame memory circuit 234 and reconstructs predicted data DPRI of subsequent frame data.

The backward prediction frame memory circuit 232 and the forward prediction frame memory circuit 234 are switched to writing mode with reference to the intraframe coding processing control data PINTRA and the frame memory control data WAP, respectively, and thereby the leading frame data AO of the frame group in the reconstructed frame data is stored in the forward prediction frame memory circuit 234 and the frame data A6 of the subsequent group of frames is stored in the backward prediction frame memory circuit 232 (FIGS. 18(C) and (D)).

The selection circuits 236 and 238 switch their contacts in response to switching signals SEL3 and SEL4 produced with reference to the intraframe coding processing control data PINTRA, respectively, and thereby output the frame data, stored in the backward prediction frame memory circuit 232 and the forward prediction frame memory circuit 234, as backward prediction and forward prediction frame data to subsequent variable reading memory circuits 240 and 242, respectively.

The variable reading memory circuits 240 and 242 receive the motion vectors MVN and MVP for each macro unit block through the selection circuits 244 and 246 and shift the backward prediction frame data and the forward prediction frame data by amount of the motion vectors MVN and MVP, respectively.
Thus, the frame data of the results of the backward prediction and the forward prediction can be obtained for the frame data B3 and B9 to be interframe coded at the level 1 through the variable reading memory circuits 240 and 242, respectively, and the frame data obtained are outputted to the selection circuit 230.
An adding circuit 248 adds the frame data outputted from the variable reading memory circuits 240 and 242 and outputs the added frame data to the selection circuit 230 through a halving circuit 250.
Thus, in the selection circuit 230 the direct current level DC with respect to intraframe coded macro unit blocks of the frame data B3 and B9 is inputted to the first input terminal 0, while frame data of the results of the backward prediction, the interpolative prediction and the forward prediction are inputted to the second input terminal 1, the third input terminal 2 and the fourth input terminal 3, respectively.
Thus, with respect to the frame data of B3 and B9, assigned to the level 1 processing, predicted data DPRI can be reconstructed by selectively outputting the input data of the first to the fourth input terminals 0-3 on the basis of the identification data PINDEX in the selection circuit 230.
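A sketch of this per-block selection, using the PINDEX values given for the transmitting side (0 intraframe DC, 1 backward, 2 forward, 3 interpolative); the interpolative input models the adding circuit 248 followed by the halving circuit 250. The function and its terminal wiring are illustrative assumptions, not the circuit itself:

```python
# Hypothetical sketch of the selection circuit 230: terminal 0 carries the
# direct current level DC, terminals 1-3 carry the backward, forward and
# interpolative prediction data, and PINDEX picks one per macro unit block.

def predict_block(pindex, dc, backward, forward):
    # adder (248) followed by halving (250): average of the two predictions
    interpolative = [(b + f) / 2 for b, f in zip(backward, forward)]
    terminals = {0: dc, 1: backward, 2: forward, 3: interpolative}
    return terminals[pindex]

bwd, fwd = [10, 20], [30, 40]
print(predict_block(3, [16, 16], bwd, fwd))  # interpolative: [20.0, 30.0]
```

The selected list is what would be fed to the adding circuit 218 as the predicted data DPRI.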
Accordingly, the frame data PB3, PB9, ... which are sent subsequently to the frame data PAO and PA6 are decoded by adding the predicted data DPRI, outputted to the adding circuit 218, to the output data of the discrete cosine inverse transformation circuit 213, so that the original data can be reconstructed.
An interframe memory 252 receives the output data DTIN of the adding circuit 218 on the basis of the interframe coding processing control data WB3, and among the frame data reconstructed, the frame data B3 and B9 processed at the level 1 are thereby stored in the interframe memory 252.
Thus, similarly to recording, the frame data B3, B9, ... which are predicted frames of the frame data C1, C2, C4 and C5 can be obtained through the interframe memory 252 during the period that the frame data C1, C2, C4 and C5 to be processed at the level 2 last (FIG. 18(E)).
Thus, the frame data B3 and AO are outputted to the variable reading memory circuits 240 and 242 through the selection circuits 236 and 238, and thereby frame data of the results of the backward prediction, the forward prediction and the interpolative prediction can be obtained through the variable reading memories 240, 242 and the halving circuit 250, respectively.

Accordingly, predicted data DPRI about the frame data C1 and C2 can be reconstructed through the selection circuit 230, and thereby the frame data C1 and C2 can be reconstructed in the adding circuit 218.

On the other hand, the frame data A6 and B3 are outputted to the variable reading memory circuits 240 and 242 through the selection circuits 236 and 238 during a period of two frame cycles following the frame data C1 and C2, and frame data of the results of the backward prediction, the forward prediction and the interpolative prediction can be obtained through the variable reading memory circuits 240, 242 and the halving circuit 250.
Consequently, predicted data DPRI about the frame data C4 and C5 can be reconstructed through the selection circuit 230, and thereby the frame data C4 and C5 can be reconstructed in the adding circuit 218.
Thus, the frame data subsequently reconstructed are outputted as added data DTIN from the adding circuit 218.
The output data of a delay circuit 262 is directly inputted to a selection circuit 264 and is also inputted to the selection circuit 264 through a delay circuit 266.
Moreover, the selection circuits 260 and 264 switch their contacts according to a switching signal SEL2 and output their selected outputs to a selection circuit 268.
The selection circuit 268 receives frame data, outputted from the forward prediction frame memory circuit 234 and the interframe memory circuit 252, other than the selected outputs of the selection circuits 260 and 264, and switches its contact according to a switching signal SEL1.
The switching signals SEL1 and SEL2 are generated according to the current index CID added to each frame data and transmitted, and thereby the decoded frame data are rearranged in the original order to reconstruct the video data DV (FIG. 18(F)).

Thus, the frame data are sequentially intraframe coded and interframe coded in a state divided into predetermined groups of frames and then are transmitted, so that video signals can be efficiently transmitted while degradation of picture quality is effectively avoided.
In this embodiment, the motion vectors MVN and MVP are outputted through the selection circuits 244 and 246, and thus, in reverse reproduction, the motion vectors MVN and MVP are switched and outputted to the variable reading memory circuits 240 and 242.
In the reordering circuit 203 frame data interframe coded are delayed for 7 frame cycles, so that in reverse reproduction the frame data PA6 is delayed 6 frame cycles relative to the frame data PA12 and is followed by frame data PB15-PC17, PAO and PB9-PC11.

Thus, at the timing of inputting the frame data PB15, PB9 and PB3, which are the results of the level 1 processing, to the adding circuit 218, the frame data A6 and AO are stored in the backward prediction frame memory circuit 232 and the frame data A12 and A6 are stored in the forward prediction frame memory circuit 234 (FIG. 19(C) and (D)).

That is, frame data are stored in the backward prediction frame memory circuit 232 and the forward prediction frame memory circuit 234 in such a manner as to exchange the positions of the frame data in the case of normal reproduction.

Thus, in reverse reproduction, contrary to the normal reproduction, frame data of the results of the forward prediction and the backward prediction can be outputted from the variable reading memory circuits 240 and 242 by switching and outputting the motion vectors MVN and MVP to the variable reading memories 240 and 242, respectively.
Thus, in response to the switching of the motion vectors MVN and MVP, the switching operation of the selection circuit 230 is exchanged between the forward prediction and the backward prediction, and thereby the reverse reproduction can be carried out with a simple structure.
I More specifically, since in transmitting frame data, data indicating the order of the predicted frames of the forward prediction, the backward prediction and the order in the group of frames are added and sent, also in the reverse reproduction the transmission frame S" 0 data can be easily decoded as in the normal reproduction.
e At the timing of inputting of frame data Cl, C2, C4 and C5 to be processed at the l vel 2, predicted frames are stored in the backward prediction frame
L_
memory circuit 232 and the forward predict ion frame memory circuit 234 in the exchanged state with frame data being stored in the interframe memory 252 (FIG.
Also in this case, the reverse reproduction can be made with a simple structure by exchanging the switching operations of the motion vectors MVN and MVP and the selection circuit 230 between the forward prediction and the backward prediction.
Thus, the original video signals can be reproduced by the normal reproduction and the reverse reproduction.
Operation of the Embodiment
In the construction above, the input video signals VDIN are converted to digital signals at the picture data inputting unit 2, the amount of data thereof is reduced to 1/4, and then the video signals are converted to video signals VD (FIG. ...) having sequentially continuous frame data AO, C1, C2, B3, ...

After the frame data AO, C1, C2, B3, ... are divided into groups of frames consisting of units of 6 frames in the reordering circuit 4, the video signals VD are reordered in the order to be coded, AO, A6, B3, C1, C2, C4, C5, ... (that is, frame data AO, A6, ... to be intraframe coded, frame data B3 to be interframe coded at the level 1 and frame data C1, C2, C4 and C5 to be interframe coded at the level 2).
Moreover, the frame group index GOF, the forward prediction reference index PID, the backward prediction reference index NID and the temporary index TR representing the order in the group of frames are generated in the reordering circuit 4 and are outputted synchronously with the frame data AO, A6, B3, C1, C2, C4, ...

After being reordered in the order to be encoded, AO, A6, B3, C1, C2, C4, C5, C7, ..., the frame data are outputted with the predetermined identification data GOF, PID, NID and TR added to them. Thus, the subsequent intraframe coding processing and interframe coding processing can be simplified.

The picture data DVN reordered are outputted to the adaptive prediction circuit 10 at a predetermined timing after they are divided into macro unit blocks in the blocking circuit 84 of the motion vector detecting circuit 6.
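The reordering into coding order can be sketched for an arbitrary number of groups of frames. The frame labels follow the text (display order A0, C1, C2, B3, C4, C5 per group of 6, with the next group's intraframe frame pulled forward because the level 1 frame B is predicted from both); the generator itself is a hypothetical reconstruction:

```python
# Illustrative sketch (not the patent's circuit) of the encoder-side
# reordering: for each group of 6 frames, emit the next group's intraframe
# frame first, then the level-1 frame B, then the level-2 frames C.

def coding_order(num_groups):
    order = ['A0']                       # leading intraframe frame
    for g in range(num_groups):
        base = 6 * g
        order.append(f'A{base + 6}')     # next group's intra frame, sent early
        order.append(f'B{base + 3}')     # level-1 interframe frame
        for k in (1, 2, 4, 5):           # level-2 interframe frames
            order.append(f'C{base + k}')
    return order

print(coding_order(1))  # → ['A0', 'A6', 'B3', 'C1', 'C2', 'C4', 'C5']
```

Emitting A6 before B3 is what guarantees that both references of every interframe coded frame have already been coded.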
Among the picture data DVN reordered, the frame data AO, A6 and A12, which are each leading frame data to be intraframe coded, are directly outputted to the subtracting circuit 8.

The frame data AO, A6 and B3 are respectively stored in the forward prediction frame memory circuit 89, the backward prediction frame memory circuit 88 and the interframe memory circuit 90 to serve as references for detecting the motion vectors of the backward prediction and the forward prediction.
That is, the frame data AO and B3 which are stored in the forward prediction frame memory circuit 89 and the interframe memory circuit 90 are outputted to the variable reading memory circuits 94 and 95, and with respect to picture data of the frame data C1 and C2, picture data within a predetermined motion vector detecting range are outputted to the subtraction circuits KN0 to KN255 and KP0 to KP255 in a parallel manner at the time of inputting the frame data C1 and C2 to the subtraction circuits KN0 to KN255 and KP0 to KP255.

The absolute results of the subtraction of the subtraction circuits KN0 to KN255 and KP0 to KP255 are accumulated for each macro unit block in the absolute value summing circuits 100 and 101, and thereby difference data are obtained when predicted frames are sequentially shifted within the motion vector detecting range about each of the macro unit blocks of the frame data C1 and C2.
Similarly, the frame data B3 and A6 which are stored in the interframe memory circuit 90 and the backward prediction frame memory circuit 88 are outputted to the variable reading memory circuits 94 and 95, and with respect to picture data of the frame data C4 and C5, picture data within a predetermined motion vector detecting range are outputted to the subtraction circuits KN0 to KN255 and KP0 to KP255 in a parallel manner at the time of inputting the frame data C4 and C5 to the subtraction circuits KN0 to KN255 and KP0 to KP255.

Thus, difference data can be obtained through the absolute value summing circuits 100 and 101 when predicted frames are sequentially shifted within the motion vector detecting range about each of the macro unit blocks of the frame data C4 and C5.

The minimum values of the difference data of the frame data C1, C2, C4 and C5 are detected in the comparison circuits 102 and 103, and thereby respective motion vectors of the forward prediction and the backward prediction are detected.
The result of a preference comparison between the difference data, obtained in a state of a predicted frame being not moved, and the minimum difference data, obtained through the comparison circuits 102 and 103, is obtained in the comparison circuits 105 and 106, and thereby the 0 vector data MV0 and the detected motion vectors outputted from the comparison circuits 102 and 103 are selectively outputted according to the preference of FIG. 9. Thus, motion vectors are selected so that video signals can be as a whole sent efficiently.
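A compact sketch of this block-matching search: accumulate absolute differences for every candidate shift (the job of the subtraction circuits KN0 to KN255 and KP0 to KP255 and the absolute value summing circuits 100 and 101), take the minimum (comparison circuits 102 and 103), and prefer the unmoved 0 vector MV0 when it is not clearly worse (the preference comparison in circuits 105 and 106). The `zero_bias` parameter and the array layout are assumptions, not values from the patent:

```python
# Hypothetical sketch of motion vector detection by exhaustive search.
# `ref` must be (h + 2*search) x (w + 2*search) so every shift is in range.

def sad(block, ref, dy, dx):
    h, w = len(block), len(block[0])
    return sum(abs(block[y][x] - ref[y + dy][x + dx])
               for y in range(h) for x in range(w))

def search_motion_vector(block, ref, search=1, zero_bias=2):
    zero_cost = sad(block, ref, search, search)       # candidate MV0 (centre)
    best_mv, best_cost = (0, 0), zero_cost
    for dy in range(2 * search + 1):
        for dx in range(2 * search + 1):
            cost = sad(block, ref, dy, dx)
            if cost < best_cost:
                best_mv, best_cost = (dy - search, dx - search), cost
    if zero_cost <= best_cost + zero_bias:            # preference for MV0
        return (0, 0)
    return best_mv

print(search_motion_vector([[5]], [[9, 9, 9], [9, 9, 9], [9, 5, 9]]))  # → (1, 0)
```

Biasing toward MV0 costs nothing when the minimum is genuinely better and saves vector bits when the scene is static, which matches the stated goal of sending video signals efficiently as a whole.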
The motion vectors about the frame data C1, C2, C4 and C5 are outputted through the selection circuits 139 and 140 and are also provided to the adding circuits 120 to 125 and the halving circuit 128. Thus, the operation of the equations is carried out, so that the predicted motion vectors MV3PY and MV3NY of the frame data B3 are detected.
Thus, with respect to the frame data B3, its motion vectors are detected within the motion vector detecting range on the basis of the predicted motion vectors MV3PY and MV3NY.
That is, for the frame data B3, the frame data AO and A6, stored in the forward prediction frame memory circuit 89 and the backward prediction frame memory circuit 88, are outputted to the variable reading memories 94 and 95, and the picture data, which have been shifted within the motion vector detecting range by the predicted motion vectors MV3PY and MV3NY relatively to the picture data of the frame data B3, are outputted from the variable reading memories 94 and 95 to the subtraction circuits KN0-KN255 and KP0-KP255 in a parallel manner.
By this operation, difference data can be obtained on the basis of the predicted motion vectors MV3PY and MV3NY through the absolute value summing circuits 100 and 101, and the motion vectors of the frame data B3 are detected by adding the predicted motion vectors MV3PY and MV3NY to the selective outputs of the selection circuits 107 and 108 in the adding circuits 135 and 136.
S The p i cture data DVN is ou tput ted to the adapt ive prediction circuit 10, in which mean values of the picture data of the luminance signal and the chrominance signal are obtained for each macro unit block through the mean value memory circuit 150 and the se mean values data are ou t pu t ted as d i rect current data DC to the transmission data composition circuit 32 and selecting circuit 152.
In addition, the picture data DVN outputted to the adaptive prediction circuit 10 is selectively predicted with reference to the frame data AO, A6 and B3 (frame data reconstructed in the adding circuit 28) stored in the forward prediction frame memory circuit 155, the backward prediction frame memory circuit 154 and the interframe memory circuit 156.

That is, to selectively predict about the frame data B3, the frame data AO and A6 stored in the forward prediction frame memory circuit 155 and the backward prediction frame memory circuit 154 are outputted through the selection circuits 158, 159 to the variable reading memories 160 and 161, where the frame data are shifted by amount of the motion vectors to construct the frame data FN and FP as the results of the backward prediction and the forward prediction, respectively.

On the other hand, the frame data B3 is outputted to the subtracting circuits 165, 166 and 167, where it is subtracted from the frame data FN and FP and from the frame data FNP as the result of the interpolative prediction (outputted from the halving circuit), which is constructed by the frame data FN and FP.

The results of the subtraction are converted to absolute values in the absolute value summing circuits 168, 169 and 170 and are then accumulated for each macro unit block, so that the backward prediction difference data ΔFN, the forward prediction difference data ΔFP and the interpolative prediction difference data ΔFNP (FIG. 1) are obtained through the absolute value summing circuits 168, 169 and 170, respectively.
The minimum value of the difference data ΔFN, ΔFP and ΔFNP is detected in the comparison circuit 171.
Preferential comparison as shown in FIG. 12 is carried out between the minimum value and the difference data relative to the direct current data DC in the comparison circuit 174, so that the results of the prediction selection among the backward prediction, the forward prediction, the interpolative prediction and the intraframe coding processing are detected for each macro unit block through the comparison circuit 174.

On the other hand, in selective prediction of the frame data C1, C2, the frame data A0, B3 stored in the forward prediction frame memory circuit 155 and the interframe memory circuit 156 are outputted to the variable reading memories 160 and 161, where frame data FN and FP as the results of the backward prediction and the forward prediction are constructed, respectively.
Thus, with respect to the frame data C1 and C2, difference data ΔFN of the backward prediction, ΔFP of the forward prediction and ΔFNP of the interpolative prediction are obtained in the subtracting circuits 165 to 167 as in the frame data B3, so that the results of the prediction selection among the backward prediction, the forward prediction, the interpolative prediction and the intraframe coding processing are outputted through the comparison circuit 174 for each macro unit block.

On the other hand, in selective prediction about the frame data C4 and C5, the frame data B3 and A0 which are stored in the interframe memory circuit 156 and the backward prediction frame memory circuit 154 are outputted to the variable reading memories 160 and 161, where they are shifted by the amount of the motion vectors to produce frame data as the results of the prediction.
Thus, the results of the prediction selection of the frame data C4 and C5 are detected for each macro unit block through the comparison circuit 174 similarly to the frame data B3, C1 and C2.

The frame data FN, FP and FNP, which are the results of the backward prediction, the forward prediction and the interpolative prediction, respectively, and the direct current level data DC are selectively outputted according to the results of the prediction selection through the selecting circuit 152, so that the predicted data DPRI thus constructed are outputted to the subtracting circuit 8.
On the other hand, the results of the prediction selection are outputted as the prediction index PINDEX from the selection circuit 180 to the transmission data composition circuit 32.
The predicted data DPRI is subtracted from the picture data DVN in the subtracting circuit 8 to thereby produce difference data DZ.
The difference data DZ is converted for each macro unit block according to the DCT technique in the discrete cosine transformation circuit 12.
The output data of the discrete cosine transformation circuit 12 are weighted according to the error data ER, outputted from the motion vector detecting circuit 6, in the multiplication circuit 14 and are then requantized in a quantized step size according to the error data ER, the amount of the output data of the discrete cosine transformation circuit 12 and the amount of the input data of the buffer circuit 21 in the requantizing circuit 18.
Thus, the requantization of the data in a quantized step size according to the error data ER, the amount of the output data of the discrete cosine transformation circuit 12 and the amount of the input data of the buffer circuit 21, as well as the weighting processing thereof, enables video signals to be transmitted with high quality and each frame data to be sent at a predetermined amount of data.
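The step-size control may be pictured with a simple monotone rule: the step size grows with the DCT output amount, the buffer occupancy and the error data ER. The combining formula and all constants below are hypothetical illustrations (the embodiment does not specify them); only the monotone dependence reflects the text.

```python
def quantized_step_size(error_er, dct_amount, buffer_amount,
                        base_step=8, buffer_capacity=1024):
    """Illustrative model of the requantizing circuit 18: quantize
    more coarsely as the DCT output, the buffer occupancy or the
    motion-compensation error data ER grows, so that each frame is
    sent at a predetermined amount of data."""
    occupancy = buffer_amount / buffer_capacity        # 0.0 .. 1.0
    step = base_step * (1.0 + occupancy)
    step *= 1.0 + dct_amount / 4096.0                  # DCT output amount
    step *= 1.0 + error_er / 256.0                     # ER weighting
    return max(1, int(step))
```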
The requantized picture data are variable length coded in the run-length Huffman coding circuit 30, and then in the transmission data composition circuit 32 variable length coded data of the motion vectors MVN and MVP, the prediction index PINDEX, the forward prediction reference index PID, the backward prediction reference index NID, the temporary index TR, etc. are added to the picture data, which are then converted to transmission data DATA according to the predetermined format (FIGS. 13 to 15) and are recorded on a compact disc.
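The variable length coding performed in the run-length Huffman coding circuit 30 pairs runs of zero-valued requantized coefficients with the following non-zero level before Huffman code words are assigned. As a sketch (illustrative; the pairing below is the conventional run/level form, not a claim about the exact table used):

```python
def run_length_pairs(coefficients):
    """Collapse runs of zero coefficients into (zero_run, level)
    pairs, the form to which Huffman code words are then assigned."""
    pairs, run = [], 0
    for level in coefficients:
        if level == 0:
            run += 1
        else:
            pairs.append((run, level))
            run = 0
    return pairs
```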
Moreover, the requantized picture data is inversely converted to the input data of the discrete cosine transformation circuit 12 through the inverse requantizing circuit 22, the inverse multiplication circuit 24 and the inverse discrete cosine transformation circuit 26 and is then added to the predicted data DPRI, outputted from the adaptive prediction circuit 10, in the adding circuit 28 to be converted to frame data DF which reconstructs the input data of the subtracting circuit 8.
Thus, the frame data DF is stored in the forward prediction frame memory circuit 155, the backward prediction frame memory circuit 154 and the interframe memory circuit 156 of the adaptive prediction circuit 10 and is used as the forward prediction frame data and the backward prediction frame data.
Thus, predicted data DPRI about frame data subsequently to be inputted to the subtracting circuit 8 is generated, and transmission frame data DATA is sequentially obtained.
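The point of feeding the inverse-converted data, rather than the original picture data, back into the frame memories is that the encoder then predicts from exactly the frame data the decoder will possess. A minimal numeric model (a scalar quantizer standing in for the whole transform and requantizing chain; all names are illustrative):

```python
def quantize(x, step):      # stands in for the requantizing circuit 18
    return round(x / step)

def dequantize(q, step):    # stands in for the inverse requantizing circuit 22
    return q * step

def encode_frame(frame, prediction, step=4):
    """One pass of the subtracting circuit 8 / adding circuit 28 loop:
    the transmitted data and the stored reference frame DF are both
    derived from the same quantized difference, so the encoder's
    reference matches what the decoder will reconstruct."""
    transmitted = [quantize(f - p, step) for f, p in zip(frame, prediction)]
    reference = [p + dequantize(q, step) for q, p in zip(transmitted, prediction)]
    return transmitted, reference
```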
Among the picture data DVN reordered, frame data A0, B3 and A6 are stored in the forward prediction frame memory circuit 89, the interframe memory circuit and the backward prediction frame memory circuit 88, respectively, and thereby motion vectors MV3P, MV3N, MV1P, MV1N, MV2P, MV2N of frame data B3, C1, C2 are detected through the selection circuits 139 and 140.

The picture data DVN is outputted to the adaptive prediction circuit 10, in which mean values of the picture data of the luminance signal and the chrominance signal are obtained for each macro unit block through the mean value memory circuit 150 and are outputted as direct current data DC to the transmission data composition circuit 32.

In addition, the picture data DVN inputted to the adaptive prediction circuit 10 is selectively predicted with reference to frame data A0, A6 and B3 (frame data reconstructed in the adding circuit 28), and thereby difference data ΔFN, ΔFP and ΔFNP of the backward prediction, the forward prediction and the interpolative prediction (FIG. 1) can be obtained, respectively.
Among the difference data ΔFN, ΔFP and ΔFNP, the difference data having the smallest amount of data is selected, and thereby the result of selective prediction is detected for each macro unit block.
The frame data FN, FP and FNP, which are the results of the backward prediction, the forward prediction and the interpolative prediction, respectively, are selectively outputted according to the results of the prediction selection, so that the predicted data DPRI thus constructed are outputted to the subtracting circuit 8.
On the other hand, the results of the prediction selection are outputted as the identification data PINDEX to the transmission data composition circuit 32.
The predicted data DPRI is subtracted from the picture data DVN in the subtracting circuit 8 to thereby produce difference data DZ. The difference data DZ is converted for each macro unit block according to the DCT technique in the discrete cosine transformation circuit 12.
The output data of the discrete cosine transformation circuit 12 are weighted according to the error data ER, outputted from the motion vector detecting circuit 6, in the multiplication circuit 14 and are then requantized in a quantized step size according to the error data ER, the amount of the output of the discrete cosine transformation circuit 12 and the amount of the input of the buffer circuit 21 in the requantizing circuit 18.

Thus, the requantization of the data in a quantized step size according to the error data ER, the amount of the output of the discrete cosine transformation circuit 12 and the amount of the input of the buffer circuit 21, as well as the weighting processing thereof, enables video signals to be transmitted with high quality and each frame data to be sent at a predetermined amount of data.

The requantized data are variable length encoded in the run-length Huffman encoding circuit 30, are then composed in the transmission data composition circuit 32 according to a predetermined format, and are thereafter recorded on a compact disc.
On the other hand, motion vectors detected in the motion vector detecting circuit 6 are outputted to the run-length Huffman circuit 34, where the motion vectors are converted to vectors for one frame and are processed by adaptive encoding, and then they are recorded on a compact disc together with remainder data and data representing the kind of the motion vectors (that is, the kind which can be detected by the forward prediction reference index PID, the backward prediction reference index NID and the temporary index TR).

Moreover, the requantized picture data is inversely converted to the input data of the discrete cosine transformation circuit 12 through the inverse requantizing circuit 22, the inverse multiplication circuit 24 and the inverse discrete cosine transformation circuit 26 and is then added to the predicted data DPRI, outputted from the adaptive prediction circuit 10, in the adding circuit 28 to be converted to frame data DF which reconstructs the input data of the subtracting circuit 8.
Thus, the frame data DF is stored in the adaptive prediction circuit 10 and is used as the forward prediction frame data and the backward prediction frame data.
Thus, predicted data DPRI about frame data subsequently to be inputted to the subtracting circuit 8 is generated, and transmission frame data DATA is sequentially obtained.
On the other hand, in the receiving circuit 200, reproduced data DPB which is obtained by playing back the compact disc is inputted to the receiving circuit 201, where the head of each group of frames is detected. Then, the reproduced data DPB is outputted together with the result of the detection to the reordering circuit 203, where it is reordered to produce picture data DVPBN having continuous frame data sequentially intraframe coded and interframe coded: PA0, PA6, PB3, PC1, PC2, ....

The frame data reordered is outputted through the buffer circuit 204 to the separation circuit 206, where the frame group index GOF, the forward prediction reference index PID, the backward prediction reference index NID, etc., which have been added to the frame data and transmitted, are reconstructed.

The frame data which has been outputted from the separation circuit 206 is inversely converted through the run-length Huffman inverse coding circuit 210, the inverse requantization circuit 211, the inverse multiplication circuit 212 and the discrete cosine inverse transformation circuit 213, so that the input data of the discrete cosine transformation circuit 12 is reconstructed.

The output data of the discrete cosine inverse transformation circuit 213 is added to the predicted data DPRI, outputted from the adaptive prediction circuit 214, in the adding circuit 218, and the resultant output data DTIN is outputted to the adaptive prediction circuit 214.

In the adaptive prediction circuit 214, about frame data intraframe coded and transmitted, the direct current data DC is outputted through the selection circuit 230 as the predicted data DPRI, and thereby the output data DTIN which reconstructs frame data A0 and A6 among A0, A6 and A12 is sequentially obtained through the adding circuit 218.

The frame data A0 and A6 among the output data DTIN of the adding circuit 218 are stored in the backward prediction frame memory circuit 232 and the forward prediction frame memory circuit 234 for decoding subsequent frame data B3, C1, C2, C4, ....

More specifically, the frame data A0 and A6, which are stored in the backward prediction frame memory circuit 232 and the forward prediction frame memory circuit 234, are outputted through the selection circuits 236 and 238 to the variable reading memories 240 and 242, where the frame data A0 and A6 are displaced by the amount of the motion vectors MVN and MVP for each macro unit block and then outputted, so that frame data as the results of the backward prediction and the forward prediction are constructed about the frame data B3.
The frame data outputted from the variable reading memories 240 and 242 are inputted to the adding circuit 248 and the halving circuit 250 to thereby construct frame data as the result of the interpolative prediction.
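The adding circuit 248 followed by the halving circuit 250 simply averages the two motion-compensated predictions pixel by pixel; as a sketch (pixel values taken as plain integers, the function name illustrative):

```python
def interpolative_prediction(fn, fp):
    """Adding circuit 248 followed by halving circuit 250: the
    interpolative prediction is the pixel-wise mean of the
    backward prediction FN and the forward prediction FP."""
    return [(a + b) // 2 for a, b in zip(fn, fp)]
```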
"The frame data as the results of the backward r prediction, the forward prediction and the interpolative prediction are output led together with the direct current data DC to the selection circuit 230, from which they are selectively outputted according to the ident i f ication data PINDEX, so that 109 L II predicted data DpR I is constructed about the predetermined frame B3 Thus, the predicted data DpRI is out putted to the adding circuit 218 to thereby decode the frame data B 3 The frame data B3 decoded is stored in the interframe memory circuit 252 and is used as frame data for decoding frame data Cl C2, C4, together with the frame data A6 and AO stored in the backward prediction frame mem. ry circuit 232 and the forward O prediction frame memory circuit 234.
That is, the frame data A6 and B3 stored in the forward prediction frame memory circuit 234 and the interframe memory circuit 252 are outputted to the variable reading memories 240 and 242 through the selection circuits 236 and 238, so that frame data as the results of the backward prediction, the forward prediction and the interpolative prediction are constructed about the frame data C1 and C2.
On the other hand, the frame data B3 and A0 stored in the interframe memory circuit 252 and the backward prediction frame memory circuit 232 are outputted to the variable reading memories 240 and 242, where frame data as the results of the backward prediction, the forward prediction and the interpolative prediction are constructed about the frame data C4 and C5.

Thus, predicted data DPRI regarding frame data C1, C2, C4, ... are obtained through the selection circuit 230 and are outputted to the adding circuit 218 to decode the frame data C1, C2, C4, ....

The decoded frame data A0, A6, B3, C1, C2, C4, ... are outputted after they are rearranged in the original order through the delay circuits 262 and 266 and the selection circuits 260, 264 and 268, and thereby video signals highly efficiently coded and transmitted can be reconstructed.

On the other hand, in the reverse reproduction mode, the forward prediction and the backward prediction motion vectors are switched and then inputted to the variable reading memories 240 and 242, and at the same time, the contact switching operation of the selection circuit 230 is switched between the forward prediction and the backward prediction, so that, as in the normal reproduction mode, predicted data DPRI is obtained to reconstruct the original frame data.
The frame data A0 and A6 among the output data DTIN of the adding circuit 218 are used in the adaptive prediction circuit 214 for decoding subsequent frame data B3, C1, C2, C4, ..., and the decoded frame data A0, A6, B3, C1, C2, C4, ... are arranged in the original order in the adaptive prediction circuit 214 and are then outputted, and thus moving picture video signals which have been highly efficiently coded and sent can be reproduced.
Effects of the Embodiment

According to the construction described above, frame data are divided into groups of frames including 6 frame units, the leading frame data of each group of frames is intraframe coded, that frame data and the intraframe coded frame data of the subsequent group of frames are set as predicted frames, and the fourth frame data of the group of frames is interframe coded and sent. Thus, video signals can be efficiently encoded by a simple construction with deterioration of picture quality effectively avoided, and hence can be efficiently transmitted with high quality.
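The group structure described above can be tabulated with a small helper (illustrative only; the labels follow the frame names A0, B3 and C1, C2, C4, C5 of the embodiment, and the helper itself is not part of the construction):

```python
def frame_types(group_length=6):
    """Coding level assigned to each frame of one group of frames:
    frame 0 (A0) is intraframe coded, frame 3 (B3) is interframe
    coded at level 1, and the remaining frames (C1, C2, C4, C5)
    are interframe coded at level 2."""
    types = []
    for i in range(group_length):
        if i == 0:
            types.append("intra")      # A0
        elif i == group_length // 2:
            types.append("level1")     # B3
        else:
            types.append("level2")     # C frames
    return types
```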
The fourth frame data of the group of frames and the intraframe coded frame data of each group of frames and of the subsequent group of frames are set as predicted frames, and the remaining frame data are thereby interframe coded and sent. Thus, degradation of picture quality can be effectively avoided and the remaining frame data can be more efficiently encoded.
Data representing each predicted frame are added to frame data to be interframe coded and then transmitted, and thereby the data transmitted can be decoded with a simple construction.
In the construction described above, double vectors MV1N, MV2P, MV4N, MV5P and triple vectors MV3N, MV3P are converted to vectors per one frame and are processed by variable length coding with preference given to vectors having a high probability of appearance. Thus, coding processing is performed with a common table and hence motion vectors can be optimized with a simple construction.
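Conversion of a double or triple vector to a vector per one frame, with the remainder kept for exact reconstruction, can be sketched as follows (integer division stands in for whatever rounding the embodiment actually applies; the names are illustrative):

```python
def per_frame_vector(vector, frame_distance):
    """Reduce a motion vector detected across 2 or 3 frames to a
    per-frame vector plus a remainder, so that all vectors can be
    coded against one common variable-length table."""
    per_frame = vector // frame_distance
    remainder = vector - per_frame * frame_distance
    return per_frame, remainder
```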
Other Embodiments

In the embodiment above, it is illustrated that: frame data are divided into groups of frames including 6 frame units; the leading frame data thereof is intraframe coded; and the fourth frame data is interframe coded at the level 1 while the second, the third, the fifth and the sixth frame data are interframe coded at the level 2. The present invention is not limited to such processing, but the intraframe coding processing and the interframe coding processing at the levels 1 and 2 may be variously combined according to needs.

As shown in FIG. 21, for example, frame data are divided into groups of frames including 6 frame units, leading frame data A0, A6, ... are intraframe coded, the third and the fifth frame data B2 and B4 may be interframe coded at the level 1, and the second, the fourth and the sixth frame data C1, C3 and C5 may be interframe coded at the level 2.
In this case, frame data A0 and B2, A0 and A6, B2 and B4, A0 and A6, and B4 and A6 may be set as predicted frames for frame data C1, B2, C3, B4 and C5, respectively, and may be predicted by the adaptive prediction circuit as illustrated in FIG. 22.
More specifically, as shown in FIG. 23, picture data DVN (FIG. 23(A)) is constructed by reordering frame data A0, C1, B2, C3, ... into the processing order of A0, A6, B2, C1, B4, C3, C5, ..., and simultaneously a forward prediction reference index PID (FIG. 23(B)) and a backward prediction reference index NID (FIG. 23(C)) are constructed, where the values 0, 2 and 4 of the forward prediction reference index PID and the backward prediction reference index NID represent that the frame data A0 and A6 to be intraframe coded, the frame data B2 and the frame data B4 are predicted frames, respectively.

The picture data DF reconstructed on the basis of the picture data DVN is provided to the backward prediction frame memory 154 and the interframe memory circuit 156, and the output data of the interframe memory circuit 156 is provided to the interframe memory circuit 302. The contacts of the selection circuit 300 are held on the side of the backward prediction frame memory 154.

Thus, the backward prediction frame memory 154 and the forward prediction frame memory 155 are switched to the writing mode at the time of inputting the frame data A0 and A6 to be intraframe coded, and the interframe memory circuits 156 and 302 are then switched to the writing mode at the time of inputting the third and the fifth frame data B2 and B4. In this manner, frame data A0, A6, B2 and B4 can be stored in the memory circuits 154 to 156 and 302, respectively (FIGS. 23(D) to (G)).

Thus, the contacts of the selection circuits 304 and 305 are sequentially switched in response to switching signals SW8 and SW9 (FIGS. 23(H) and (I)), and their selected outputs are outputted to the variable reading memory circuits 160 and 161. By this operation, frame data FN, FNP and FP as the results of the predictions can be constructed sequentially about the frame data to be interframe coded: B2, C1, B4, C3, ....

Also when the processing sequence of frame data is switched in such a manner, frame data can be easily decoded in a receiving unit by adding the forward prediction reference index PID and the backward prediction reference index NID which represent the predicted frames of each frame data.
Moreover, also when frame data are processed in the processing sequence thereof as shown in FIG. 24, frame data may be sequentially processed with reference to the forward prediction reference index PID and the backward prediction reference index NID, whereby selective prediction processing can be performed by using the adaptive prediction circuit as illustrated in FIG. 22.
The motion vector detection circuit and the adaptive prediction circuit of the receiving unit may be built similarly as in FIG. 22, and the operations thereof may be switched with reference to the forward prediction reference index PID and the backward prediction reference index NID. In this fashion, the present invention can be applied to the case where frame data are transmitted in the processing sequence thereof as shown in FIG. 24, and hence the range of application of the transmitting unit and the receiving unit can be enlarged.

Furthermore, frame data DF may be directly inputted to the forward prediction frame memory circuit 156 by switching the contact of the selection circuit 300, the operation of which may be switched on the basis of the forward prediction reference index PID and the backward prediction reference index NID. In this case, the selection circuit 300 may be adapted to a case where frame data are processed in the processing sequence as shown in FIG. 24.
That is, the first frame data A0 is intraframe coded and then transmitted, and the third frame data B2 is transmitted by setting the frame data A0 as the predicted frame.
Then, the fifth frame data B4 and the seventh frame data B6 are sent by setting the frame data B2 and B4, which precede them by two frames respectively, as predicted frames, and the frame data C1, C3, ... interposed are sent by setting frame data A0 and B2, B2 and B4, ... as predicted frames.

Thus, predetermined predicted frame data are entered into the forward prediction frame memory circuit 155, the backward prediction frame memory circuit 154 and the interframe memory circuits 156 and 302 with reference to the forward prediction reference index PID and the backward prediction reference index NID, with the contact of the selection circuit 300 being switched over, and thereby video signals of this transmission format can be adaptively predicted.

(5-2) In the embodiment described above, the case is illustrated in which video signals are previously compressed in amount of data to 1/4 and the intraframe coding processing and the interframe coding processing are then performed. However, the amount of data compression is not limited to this but may be set to various values according to needs; for example, the intraframe coding processing and the interframe coding processing may be directly carried out with the data compression being omitted.
Furthermore, in the embodiment above, the case where video signals are recorded on a compact disc is described. The present invention is not limited to this but may be widely applied to a case where video signals are recorded on various recording media such as Smagnetic tape and a case where video signals are directly transmit ted through a predetermined transmission channel.
According to the present invention, digi tal video S signals are d i v i ded into groups of p r e d e ermi n e d f rame units; digi tal video signals of each group of frames are intraframe coded and are interframe coded and S: transmitted with reference to the previous and next digi ta l video signals intraframe coded. Thus, the 119 present invent ion provides a video signal transmi ssion system which enables encoded and tra.nsmi t t quality being effecti possible to highly ef video signels.
According to the provided a video sign digital video signals I0 processing are divide intraframe coded and previous and next dig video signals to be efficiently ed with deterioration of picture vely avoided and thereby makes it ficient ly transmit high quality present al trans to be c d into a transmit invention there is mission system in which: oded by interframe group which is first ted with reference to ital video signals interframe coded and another group which is processed with re f e rence to the dig i tal video s ignals first interframe coded, and transmitted whereby video signals are more e f f iciently coded and sent with degradation of picture quality being avoided with a s imp e construction.
According to the present invention, there is provided a video signal transmission system in which identification data of the digital video signals, which are referred to for the interframe coding processing, are added to the digital video signals, which are coded by the interframe processing, and the digital video signals with the identification data are then transmitted, whereby the digital video signals transmitted can be decoded with a simple construction.

According to the present invention, motion vectors between frames which are a plurality of frames away from each other are converted to vectors per one frame, optimized and then sent. Thus, motion vectors can be optimized and transmitted with a simple construction.

While the invention has been described in connection with the preferred embodiments thereof, it will be obvious to those skilled in the art that various changes and modifications may be made therein without departing from the invention, and it is aimed, therefore, to cover in the appended claims all such changes and modifications as fall within the true spirit and scope of the invention.

Claims (26)

1. An encoding method for encoding a digital video signal including a plurality of pictures, said method comprising the steps of: encoding a first picture of one of said plurality of pictures as an intra coding picture; encoding a second picture of one of said plurality of pictures as a first inter coding picture with the use of a reconstructed picture of a picture temporally situated before the second picture; and encoding a third picture of one of said plurality of pictures as a second inter coding picture with the use of a reconstructed picture of a picture temporally situated before the third picture and a reconstructed picture of a picture temporally situated after the third picture, where at least one of said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture.

2. The encoding method according to claim 1 wherein each picture has a picture header, said method comprising the further step of transmitting information in each picture header identifying which of said intra coding picture, said first inter coding picture and said second inter coding picture.

3. The encoding method according to claim 1, wherein said reconstructed picture of a picture temporally situated before the second picture is a reconstructed picture of said first picture.
4. The encoding method according to claim 1, wherein said reconstructed picture of a picture temporally situated before the second picture is a reconstructed picture of another picture coded as a first inter coding picture.
5. The encoding method according to claim 1, wherein said reconstructed picture of a picture temporally situated before the third picture is a reconstructed picture of said first picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture.
6. The encoding method according to claim 1, wherein said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture are reconstructed pictures of pictures coded as first inter coding pictures.
7. The encoding method according to claim 1, wherein said reconstructed picture of a picture temporally situated before the third picture is a reconstructed picture of said second picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said first picture.
8. The encoding method according to claim 1, wherein the step of encoding a second picture, said second picture being divided into a plurality of macro unit blocks and each macro unit block having a macro unit block header, comprises: selecting one of intra coding and inter coding for each macro unit block, encoding said each macro unit block using the selected one of said intra coding and said inter coding, and transmitting information in each header of said each macro unit block identifying which of said intra coding and said inter coding.
9. The encoding method according to claim 1, wherein the step of encoding a third picture comprises the further steps of: selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block, said each macro unit block being produced by dividing said third picture into a plurality of macro unit blocks, each macro unit block having a macro unit block header, encoding said each macro unit block using the selected one of said intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding, and transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding. (C, -tr !r\ IN:\LIOCCI0515:mxI -124- The encoding method according to claim 1, further comprising the steps of: quantizing data based on said first picture, said second picture and said third picture, wherein each of said pictures is divided into a plurality of macro unit blocks, each of said macro unit blocks having a macro unit block header, and transmitting information in each macro unit block header identifying a quantization step size for said macro unit block.
11. A decoding method for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, said method comprising the steps 1o of: decoding a coded data of a first picture encoded as a intra coding picture to produce a first decoded picture; decoding a coded data of a second picture encoded as a first inter coding picture with the use of a decoded picture of a picture temporally situated before the 1 5 second picture to produce a second decoded picture; and decoding a coded data of a third picture encoded as a second inter coding picture with the use of a decoded picture of a picture temporally situated before the third picture and a decoded picture of a picture temporally situated after the third picture to produce a third decoded picture, where at least one of said decoded picture of a picture temporally situated before the third picture and said decoded picture of a Spicture temporally situated after fthle third picture is said second decoded picture.
12. The decoding method according to claim 11, wherein each picture has a picture header, said method comprising the further step of separating information from each picture header identifying which of said intra coding picture, said first inter coding picture and said second inter coding picture.
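As an illustrative aside (not part of the claims), the picture-type dependencies of claims 11 and 12 can be sketched in Python. The names, type codes and tuple layout below are assumptions: "I" stands for the intra coding picture, "P" for the first inter coding picture, and "B" for the second inter coding picture, which references decoded pictures on both sides of it in display order.

```python
# Illustrative sketch (not part of the claims) of the decoding
# dependencies in claims 11 and 12. Type codes are assumptions:
# "I" = intra coding picture, "P" = first inter coding picture,
# "B" = second inter coding picture.

def decode_sequence(coded_pictures):
    """coded_pictures: (picture_type, payload) pairs in coded order."""
    decoded = []
    refs = []  # the most recent two decoded reference (I or P) pictures
    for ptype, payload in coded_pictures:
        if ptype == "I":          # intra: decodes without a reference
            picture = ("I", payload)
        elif ptype == "P":        # first inter: one earlier reference
            picture = ("P", payload, refs[-1])
        elif ptype == "B":        # second inter: references on both sides
            picture = ("B", payload, refs[-2], refs[-1])
        else:
            raise ValueError("unknown picture type: %r" % ptype)
        if ptype in ("I", "P"):   # only I and P serve as references
            refs = (refs + [picture])[-2:]
        decoded.append(picture)
    return decoded
```

In coded order both references of a B picture have already been decoded, which is why the sketch never needs to look ahead.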
13. The decoding method according to claim 11, wherein said decoded picture of a picture temporally situated before the second picture is said first decoded picture.
14. The decoding method according to claim 11, wherein said decoded picture of a picture temporally situated before the second picture is a decoded picture of another picture encoded as a first inter coding picture. 15. The decoding method according to claim 11, wherein said decoded picture of a picture temporally situated before the third picture is said first decoded picture and said decoded picture of a picture temporally situated after the third picture is said second decoded picture.
16. The decoding method according to claim 11, wherein said decoded picture of a picture temporally situated before the third picture and said decoded picture of a picture temporally situated after the third picture are decoded pictures of pictures coded as a first inter coding picture respectively.
17. The decoding method according to claim 11, wherein said decoded picture of a picture temporally situated before the third picture is said second decoded picture and said decoded picture of a picture temporally situated after the third picture is said first decoded picture.
18. The decoding method according to claim 11, wherein the step of decoding a coded data of a second picture, said second picture being divided into a plurality of macro unit blocks and each macro unit block has a macro unit block header, comprises the further steps of: separating information from each macro unit block header identifying which of intra coding and inter coding, and decoding said each macro unit block according to the one of said intra coding and said inter coding.
19. The decoding method according to claim 11, wherein the step of decoding a coded data of a third picture, said third picture being divided into a plurality of macro unit blocks and each macro unit block having a macro unit block header, comprises the further steps of: separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding, and interpolative predictive coding, and decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding, and said interpolative predictive coding. 20. The decoding method according to claim 11, wherein said first, second and third pictures are each divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, further comprising the steps of: separating information from each macro unit block header identifying a quantization step size for each macro unit block, and inverse quantizing said coded data of said first picture, said second picture and said third picture.
21. An encoding method for encoding a picture being divided into a plurality of macro unit blocks, wherein each macro unit block has a macro unit block header, and being temporally situated between a first picture and a second picture, said method comprising the steps of: selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block; encoding said each macro unit block using the selected one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding; and transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
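A minimal sketch of the per-macro-unit-block mode selection of claim 21, assuming a sum-of-absolute-differences cost against each prediction and a mean-deviation measure for the intra case; both measures are illustrative stand-ins, not the claimed method's actual criteria.

```python
# Minimal sketch of the mode selection in claim 21. The cost measures
# below (sum of absolute differences, deviation from the block mean)
# are illustrative stand-ins, not the claimed method's criteria.

def select_mode(block, fwd_pred, bwd_pred):
    """Return the coding mode recorded in the macro unit block header."""
    interp = [(f + b) / 2 for f, b in zip(fwd_pred, bwd_pred)]

    def sad(pred):  # sum of absolute differences against a prediction
        return sum(abs(x - p) for x, p in zip(block, pred))

    mean = sum(block) / len(block)
    costs = {
        "intra": sum(abs(x - mean) for x in block),
        "forward": sad(fwd_pred),
        "backward": sad(bwd_pred),
        "interpolative": sad(interp),
    }
    return min(costs, key=costs.get)
```

The winning mode name is what the transmitting step of claim 21 would place in the macro unit block header.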
22. The encoding method according to claim 21, further comprising the steps of: quantizing data based on said picture temporally situated between a first picture and a second picture, and transmitting information in said each macro unit block header identifying a quantization step size for each macro unit block.
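The quantization of claim 22 and the inverse quantization of claim 24 form a round trip. A minimal sketch, assuming a uniform quantizer and an illustrative block layout in which the header carries the step size:

```python
# Minimal round-trip sketch of claims 22 and 24, assuming a uniform
# quantizer; the dict stands in for a macro unit block whose header
# carries the quantization step size.

def quantize_block(coeffs, step):
    return {"step": step,                      # travels in the block header
            "levels": [round(c / step) for c in coeffs]}

def inverse_quantize_block(block):
    return [level * block["step"] for level in block["levels"]]
```

Because the step size is read back from each block header, the decoder needs no side agreement about which blocks were coarsely or finely quantized.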
23. A decoding method for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, said first and second pictures each being divided into a plurality of macro unit blocks and each macro unit block having a macro unit block header, said method comprising the steps of: separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding; and decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding.
24. The decoding method according to claim 23, further comprising the steps of: separating information from said each macro unit block header identifying a quantization step size for said each macro unit block, and inverse quantizing said coded data of said picture temporally situated between a first picture and a second picture. 25. An encoding method for encoding a digital video signal including a plurality of pictures, said method comprising the steps of: reordering said plurality of pictures, wherein each picture has a picture header; encoding said reordered plurality of pictures as an intra coding picture or an inter coding picture respectively to produce a coded data; and appending temporal information to each picture header of said coded data identifying an input order of said plurality of pictures.
26. A decoding method for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, wherein each picture has a picture header, said method comprising the steps of: separating temporal information from each picture header of said coded data identifying an input order of said plurality of pictures in an encoder; decoding said coded data encoded as an intra coding picture or an inter coding picture to produce a plurality of decoded pictures; reordering said plurality of decoded pictures according to said temporal information.
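The reordering of claims 25 and 26, in which reference pictures are transmitted ahead of the bidirectionally predicted pictures that depend on them and each picture header carries the input order, can be sketched as follows; the field names and the "B"/reference type codes are assumptions.

```python
# Sketch of claims 25 and 26: the encoder emits reference (I or P)
# pictures ahead of the B pictures that depend on them, tagging each
# picture header with its input order; the decoder restores display
# order from that temporal information. Field names are assumptions.

def encoder_reorder(pictures):
    """pictures: (display_index, picture_type) pairs in input order."""
    coded, pending_b = [], []
    for idx, ptype in pictures:
        if ptype == "B":                 # hold B until its later reference
            pending_b.append({"temporal_ref": idx, "type": ptype})
        else:                            # an I or P closes the pending run
            coded.append({"temporal_ref": idx, "type": ptype})
            coded.extend(pending_b)
            pending_b = []
    return coded + pending_b

def decoder_reorder(decoded_pictures):
    return sorted(decoded_pictures, key=lambda pic: pic["temporal_ref"])
```

For an input of I, B, B, P the coded order becomes I, P, B, B, so the decoder has both references in hand before either B picture arrives.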
27. An encoding method for encoding a motion vector, said method comprising the steps of: detecting a motion vector between a first picture and a second picture; dividing a value based on said motion vector by a predetermined value to produce a quotient value and a remainder value; variable length encoding said quotient value with the use of a table of variable length code; and encoding said remainder value. 28. The encoding method according to claim 27, further comprising the step of encoding information indicating the number of bits of said remainder value. 29. A decoding method for decoding a coded data to produce a motion vector, said method comprising the steps of: separating a motion vector data including a quotient data and a remainder data from said coded data; and decoding said motion vector data with the use of a table of variable length code.
30. The decoding method according to claim 29, wherein said motion vector data includes information indicating the number of bits of said remainder data.
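Claims 27 to 30 split a motion-vector value into a variable-length-coded quotient and a fixed-length remainder. A toy round-trip sketch, using a unary code as a stand-in for the table of variable length code and an assumed divisor of 4:

```python
# Toy round-trip sketch of claims 27 to 30. The divisor, the unary
# variable-length code and the fixed remainder width are assumptions
# standing in for the actual table of variable length code.

DIVISOR = 4              # "predetermined value" of claim 27 (assumed)
REMAINDER_BITS = 2       # enough bits for remainders 0..DIVISOR-1

def encode_component(value):
    sign = "1" if value < 0 else "0"
    quotient, remainder = divmod(abs(value), DIVISOR)
    vlc = "1" * quotient + "0"          # unary stand-in code
    return sign + vlc + format(remainder, "0%db" % REMAINDER_BITS)

def decode_component(bits):
    sign, rest = bits[0], bits[1:]
    quotient = rest.index("0")          # length of the unary run
    remainder = int(rest[quotient + 1:quotient + 1 + REMAINDER_BITS], 2)
    magnitude = quotient * DIVISOR + remainder
    return -magnitude if sign == "1" else magnitude
```

Small vectors yield short quotient codes while the fixed-width remainder keeps the total code length bounded, which is the point of the split.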
31. A motion vector detection method for detecting motion vectors between a plurality of pictures, said method comprising the steps of: detecting a first motion vector between a first picture and a third picture temporally situated between the first picture and a second picture; determining a search area of a second motion vector between said first picture and said second picture based on said first motion vector and an interval between said first picture and said second picture; and
detecting said second motion vector according to said search area. 32. An encoding apparatus for encoding a digital video signal including a plurality of pictures, comprising: means for encoding a first picture of one of said plurality of pictures as an intra coding picture; means for encoding a second picture of one of said plurality of pictures as a first inter coding picture with the use of a reconstructed picture of a picture temporally situated before the second picture; and means for encoding a third picture of one of said plurality of pictures as a second inter coding picture with the use of a reconstructed picture of a picture temporally situated before the third picture and a reconstructed picture of a picture temporally situated after the third picture, where at least one of said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture. 33. The encoding apparatus according to claim 32, wherein said picture has a picture header, said apparatus further comprising means for transmitting information in each picture header identifying which of said intra coding picture, said first inter coding picture and said second inter coding picture. 34. The encoding apparatus according to claim 32, wherein said reconstructed picture of a picture temporally situated before the second picture is a reconstructed picture of said first picture. 35. The encoding apparatus according to claim 32, wherein said reconstructed picture of a picture temporally situated before the second picture is a reconstructed picture of another picture coded as a first inter coding picture. 36.
The encoding apparatus according to claim 32, wherein said reconstructed picture of a picture temporally situated before the third picture is a reconstructed picture of said first picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said second picture. 37. The encoding apparatus according to claim 32, wherein said reconstructed picture of a picture temporally situated before the third picture and said reconstructed picture of a picture temporally situated after the third picture are reconstructed pictures of pictures coded as first inter coding pictures. 38. The encoding apparatus according to claim 32, wherein said reconstructed picture of a picture temporally situated before the third picture is a reconstructed picture of said second picture and said reconstructed picture of a picture temporally situated after the third picture is a reconstructed picture of said first picture. 39. The encoding apparatus according to claim 32, wherein said means for encoding a second picture, said second picture being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprises: means for selecting one of intra coding and inter coding for each macro unit block, means for encoding said each macro unit block using the selected one of said intra coding and said inter coding, and means for transmitting information in each macro unit block header identifying which of said intra coding and said inter coding. 40.
The encoding apparatus according to claim 32, wherein said means for encoding a third picture, said third picture being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprises: means for selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block, means for encoding said each macro unit block using the selected one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding, and means for transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding, and said interpolative predictive coding. 41. The encoding apparatus according to claim 32, further comprising: means for quantizing data based on said first picture, said second picture and said third picture, said pictures each being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, and means for transmitting information in each macro unit block header identifying a quantization step size for each macro unit block. 42.
A decoding apparatus for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, comprising: means for decoding a coded data of a first picture encoded as an intra coding picture to produce a first decoded picture; means for decoding a coded data of a second picture encoded as a first inter coding picture with the use of a decoded picture of a picture temporally situated before the second picture to produce a second decoded picture; and means for decoding a coded data of a third picture encoded as a second inter coding picture with the use of a decoded picture of a picture temporally situated before the third picture and a decoded picture of a picture temporally situated after the third picture to produce a third decoded picture, where at least one of said decoded picture of a picture temporally situated before the third picture and said decoded picture of a picture temporally situated after the third picture is said second decoded picture. 43. The decoding apparatus according to claim 42, wherein each picture has a picture header, said apparatus further comprising means for separating information from each picture header identifying which of said intra coding picture, said first inter coding picture and said second inter coding picture. 44. The decoding apparatus according to claim 42, wherein said decoded picture of a picture temporally situated before the second picture is said first decoded picture. 45. The decoding apparatus according to claim 42, wherein said decoded picture of a picture temporally situated before the second picture is a decoded picture of another picture encoded as a first inter coding picture. 46.
The decoding apparatus according to claim 42, wherein said decoded picture of a picture temporally situated before the third picture is said first decoded picture and said decoded picture of a picture temporally situated after the third picture is said second decoded picture. 47. The decoding apparatus according to claim 42, wherein said decoded picture of a picture temporally situated before the third picture and said decoded picture of a picture temporally situated after the third picture are decoded pictures of pictures coded as first inter coding pictures. 48. The decoding apparatus according to claim 42, wherein said decoded picture of a picture temporally situated before the third picture is said second decoded picture and said decoded picture of a picture temporally situated after the third picture is said first decoded picture. 49. The decoding apparatus according to claim 42, wherein said means for decoding a coded data of a second picture, said second picture being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprises: means for separating information from each macro unit block header identifying which of intra coding and inter coding, and means for decoding said each macro unit block according to the one of said intra coding and said inter coding.
50. The decoding apparatus according to claim 42, wherein said means for decoding a coded data of a third picture, said third picture being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprises: means for separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding, and means for decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding. 51. The decoding apparatus according to claim 42, wherein said first, second and third pictures each are divided into a plurality of macro unit blocks, each of said macro unit blocks having a macro unit block header, further comprising: means for separating information from each macro unit block header identifying a quantization step size for each macro unit block, and means for inverse quantizing said coded data of said first picture, said second picture and said third picture. 52.
An encoding apparatus for encoding a picture temporally situated between a first picture and a second picture, said first and second pictures each being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprising: means for selecting one of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding for each macro unit block; means for encoding said each macro unit block using the selected one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding; and means for transmitting information in each macro unit block header identifying which of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding. 53. The encoding apparatus according to claim 52, further comprising: means for quantizing data based on said picture temporally situated between a first picture and a second picture, and means for transmitting information in said each macro unit block header identifying a quantization step size for said each macro unit block. 54. A decoding apparatus for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, said first and second pictures being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, comprising: means for separating information from each macro unit block header identifying which of intra coding, forward predictive coding, backward predictive coding and interpolative predictive coding; and means for decoding said each macro unit block according to the one of said intra coding, said forward predictive coding, said backward predictive coding and said interpolative predictive coding. 55.
The decoding apparatus according to claim 54, further comprising: means for separating information from said each macro unit block header identifying a quantization step size for said each macro unit block, and means for inverse quantizing said coded data of said picture temporally situated between a first picture and a second picture. 56. An encoding apparatus for encoding a digital video signal including a plurality of pictures, comprising: means for reordering said plurality of pictures, wherein each picture has a picture header; means for encoding said reordered plurality of pictures as an intra coding picture or an inter coding picture respectively to produce a coded data; and means for appending temporal information to each picture header of each coded data identifying an input order of said plurality of pictures. 57. A decoding apparatus for decoding a coded data produced by encoding a digital video signal including a plurality of pictures, wherein each picture has a picture header, said apparatus comprising: means for separating temporal information from each picture header of said coded data identifying an input order of said plurality of pictures in an encoder; means for decoding said coded data encoded as an intra coding picture or an inter coding picture to produce a plurality of decoded pictures; means for reordering said plurality of decoded pictures according to said temporal information. 58.
An encoding apparatus for encoding a motion vector, comprising: means for detecting a motion vector between a first picture and a second picture; means for dividing a value based on said motion vector by a predetermined value to produce a quotient value and a remainder value; means for variable length encoding said quotient value with the use of a table of variable length code; and means for encoding said remainder value. 59. The encoding apparatus according to claim 58, further comprising means for encoding information indicating the number of bits of said remainder value. 60. A decoding apparatus for decoding a coded data to produce a motion vector, comprising: means for separating a motion vector data including a quotient data and a remainder data from said coded data; and means for decoding said motion vector data with the use of a table of variable length code. 61. The decoding apparatus according to claim 60, wherein said motion vector data includes information indicating the number of bits of said remainder data. 62. A motion vector detection apparatus for detecting motion vectors between a plurality of pictures, comprising:
means for detecting a first motion vector between a first picture and a third picture temporally situated between the first picture and a second picture; means for determining a search area of a second motion vector between said first picture and said second picture based on said first motion vector and an interval between said first picture and said second picture; and means for detecting said second motion vector according to said search area. 63. An encoding method for encoding a digital video signal including a plurality of pictures substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 64. A decoding method for decoding a coded data produced by encoding a digital video signal including a plurality of pictures substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 65. An encoding method for encoding a picture being divided into a plurality of macro unit blocks, wherein each macro unit block has a macro unit block header, and being temporally situated between a first picture and a second picture, substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 66. A decoding method for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 67. An encoding method for encoding a motion vector substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 68. A decoding method for decoding a coded data to produce a motion vector substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 69. An encoding apparatus for encoding a digital video signal including a plurality of pictures substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings.
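The search-area determination of claims 31 and 62, where the vector found over a short picture interval is scaled by the ratio of intervals to centre the search window for the longer interval, can be illustrated as follows; the window margin and all names are placeholder assumptions.

```python
# Sketch of the search-area determination in claims 31 and 62: the
# first vector (found over the short interval) is scaled by the ratio
# of intervals to centre a small window for the second search. The
# margin of +/-2 samples is a placeholder assumption.

def scaled_search_area(first_vector, short_interval, long_interval, margin=2):
    scale = long_interval / short_interval
    cx = round(first_vector[0] * scale)   # scaled horizontal component
    cy = round(first_vector[1] * scale)   # scaled vertical component
    return (cx - margin, cx + margin), (cy - margin, cy + margin)
```

A block matcher would then evaluate only displacements inside the returned window instead of the full long-interval range, which is the saving the claims aim at.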
70. A decoding apparatus for decoding a coded data produced by encoding a digital video signal including a plurality of pictures substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 71. An encoding apparatus for encoding a picture temporally situated between a first picture and a second picture, said first and second pictures each being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. 72. A decoding apparatus for decoding a coded data produced by encoding a picture temporally situated between a first picture and a second picture, said first and second pictures being divided into a plurality of macro unit blocks, each macro unit block having a macro unit block header, substantially as hereinbefore described with reference to Figs. 1 to 35 of the accompanying drawings. DATED this Twenty-ninth Day of March 1996 Sony Corporation Patent Attorneys for the Applicant SPRUSON FERGUSON
AU63362/94A 1989-10-14 1994-05-26 Video signal transmitting system Expired AU669983B2 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP1-267044 1989-10-14
JP26704489A JP3159310B2 (en) 1989-10-14 1989-10-14 Video signal encoding method and video signal encoding device
JP1-267045 1989-10-14
JP1-267046 1989-10-14

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
AU64581/90A Division AU6458190A (en) 1989-10-14 1990-10-12 Video signal transmitting system

Publications (2)

Publication Number Publication Date
AU6336294A AU6336294A (en) 1994-07-21
AU669983B2 true AU669983B2 (en) 1996-06-27

Family

ID=17439263

Family Applications (1)

Application Number Title Priority Date Filing Date
AU63362/94A Expired AU669983B2 (en) 1989-10-14 1994-05-26 Video signal transmitting system

Country Status (2)

Country Link
JP (1) JP3159310B2 (en)
AU (1) AU669983B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05236466A (en) * 1992-02-25 1993-09-10 Nec Corp Device and method for inter-frame predictive image encoding for motion compensation
DE69733007T2 (en) 1996-02-07 2006-02-16 Sharp K.K. DEVICE FOR CODING AND DECODING MOTION IMAGES

Citations (3)

Publication number Priority date Publication date Assignee Title
US4969055A (en) * 1984-10-10 1990-11-06 Oberjatzas Guenter Method for recording and/or reproducing digitally coded signals with intraframe and interframe coding
US4985768A (en) * 1989-01-20 1991-01-15 Victor Company Of Japan, Ltd. Inter-frame predictive encoding system with encoded and transmitted prediction error
US5049991A (en) * 1989-02-20 1991-09-17 Victor Company Of Japan, Ltd. Movement compensation predictive coding/decoding method

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JPS5382219A (en) * 1976-12-28 1978-07-20 Nec Corp Television signal coding unit
JPS62145988A (en) * 1985-12-20 1987-06-30 Fujitsu Ltd Transmission system for picture applied with adoptive scanning-line conversion
JPS62193383A (en) * 1986-02-20 1987-08-25 Kokusai Denshin Denwa Co Ltd <Kdd> Moving image signal transmitting system

Patent Citations (3)

Publication number Priority date Publication date Assignee Title
US4969055A (en) * 1984-10-10 1990-11-06 Oberjatzas Guenter Method for recording and/or reproducing digitally coded signals with intraframe and interframe coding
US4985768A (en) * 1989-01-20 1991-01-15 Victor Company Of Japan, Ltd. Inter-frame predictive encoding system with encoded and transmitted prediction error
US5049991A (en) * 1989-02-20 1991-09-17 Victor Company Of Japan, Ltd. Movement compensation predictive coding/decoding method

Also Published As

Publication number Publication date
JP3159310B2 (en) 2001-04-23
JPH03129985A (en) 1991-06-03
AU6336294A (en) 1994-07-21

Similar Documents

Publication Publication Date Title
EP0424026B1 (en) Video signal transmitting system and method
US4785349A (en) Digital video decompression system
AU638896B2 (en) Digital video transmission system
EP1261208B1 (en) Encoding continuous image data
EP0584840B1 (en) Apparatus for adaptive interframe predictive decoding of a video signal
US4868653A (en) Adaptive digital video compression system
US5225904A (en) Adaptive digital video compression system
US5122873A (en) Method and apparatus for selectively encoding and decoding a digital motion video signal at multiple resolution levels
US5079630A (en) Adaptive video compression system
US5798796A (en) Coding priority and non-priority image data to be less than a predetermined quantity
US5194944A (en) Image signal processing apparatus for successively transferring a series of color signals obtained as image signals
JP2712645B2 (en) Motion vector transmission method and apparatus, and motion vector decoding method and apparatus
AU669983B2 (en) Video signal transmitting system
CA2027659C (en) Video signal transmitting system
JP3251286B2 (en) Method and encoder and decoder for digital transmission and / or recording of component encoded color television signals
KR100233419B1 (en) Motion vector transmitting method, motion vector transmitting apparatus, motion vector decoding method and motion vector decoding apparatus
JP2959714B2 (en) Motion vector detection method
US6233281B1 (en) Video signal processing apparatus, and video signal processing method
AU625594B2 (en) Digital video decompression system
AU624089B2 (en) Digital video compression system
JPH03129984A (en) Moving vector detecting circuit
JP2929591B2 (en) Image coding device
JPH08331510A (en) Video signal recording medium
JPH08317407A (en) Method and device for video signal transmission
JPH08317408A (en) Method and device for decoding video signal