WO2007058113A1 - Signal processing device for receiving digital broadcast, signal processing method and signal processing program, and digital broadcast receiver - Google Patents

Signal processing device for receiving digital broadcast, signal processing method and signal processing program, and digital broadcast receiver

Info

Publication number
WO2007058113A1
Authority
WO
WIPO (PCT)
Prior art keywords
signal
digital broadcast
decoded image
image signal
signal processing
Prior art date
Application number
PCT/JP2006/322366
Other languages
French (fr)
Japanese (ja)
Inventor
Kenji Mito
Yoshitaka Tanaka
Original Assignee
Pioneer Corporation
Priority date
Filing date
Publication date
Application filed by Pioneer Corporation filed Critical Pioneer Corporation
Priority to JP2007545214A priority Critical patent/JP4740955B2/en
Publication of WO2007058113A1 publication Critical patent/WO2007058113A1/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/85 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
    • H04N19/89 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder

Definitions

  • Signal processing apparatus for digital broadcast reception, signal processing method, signal processing program, and digital broadcast receiving apparatus
  • The present invention relates to a technique for processing a plurality of digital broadcast reception signals, and particularly to a technique for processing two types of digital broadcast reception signals transmitted by simulcast.
  • Japanese terrestrial digital broadcasting is defined by the ISDB-T (Integrated Services Digital Broadcasting-Terrestrial) standard and employs OFDM (Orthogonal Frequency Division Multiplexing) modulation and a hierarchical transmission scheme.
  • According to the ISDB-T standard, the transmission bandwidth of one channel is divided into 13 subbands, that is, OFDM segments S1 to S13, and these segments can be further divided into up to three groups, or layers. Different transmission characteristics, such as the carrier modulation scheme, inner-code coding rate, and time interleave length, can be set for each layer, and a different broadcast program can be transmitted on each layer.
  • For example, the twelve OFDM segments S1 to S6 and S8 to S13 can be used to provide high-quality high-definition television (HDTV) broadcasting for fixed receivers, or can be divided into three layers to provide standard-definition television (SDTV) broadcasting for fixed receivers on each layer, while the single segment S7 located at the center can be used to provide digital broadcasting for mobile receivers (simplified video broadcasting). Digital broadcasting for fixed receivers employs MPEG2 (Moving Picture Experts Group phase 2) as the image compression scheme, and digital broadcasting for mobile receivers employs H.264 (MPEG4 AVC).
  • Patent Document 1: Japanese Unexamined Patent Application Publication No. 2004-312361
  • Patent Document 2: Japanese Unexamined Patent Application Publication No. 2004-166173
  • In view of the above, an object of the present invention is to provide a signal processing device for digital broadcast reception, a signal processing method, a signal processing program, and a digital broadcast receiving apparatus capable of generating a display image of as high quality and as natural an appearance as possible from a plurality of digital broadcast reception signals provided by simulcast.
  • A signal processing apparatus according to one aspect of the present invention is a signal processing apparatus for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast.
  • The signal processing apparatus includes: a first decoder that decodes the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; a second decoder that decodes the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; and a signal output unit that generates an output image signal based on the first decoded image signal and the second decoded image signal and outputs the output image signal.
  • The first decoder detects errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of decoding the received signal. When no error is detected, the signal output unit outputs the pixel block of the first decoded image signal; when an error is detected, it generates the output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.
  • A digital broadcast receiving apparatus according to one aspect of the present invention is a digital broadcast receiving apparatus that receives a first digital broadcast and a second digital broadcast transmitted by simulcast.
  • The digital broadcast receiving apparatus includes: a reception circuit that supplies the received signals of the first and second digital broadcasts; a first decoder that decodes the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; a second decoder that decodes the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; and a signal output unit that generates an output image signal based on the first decoded image signal and the second decoded image signal and outputs the output image signal.
  • The first decoder detects errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of decoding the received signal. When no error is detected, the signal output unit outputs the pixel block of the first decoded image signal; when an error is detected, it generates the output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.
  • A signal processing method according to one aspect of the present invention is a signal processing method for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast.
  • The signal processing method includes: (a) decoding the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; (b) decoding the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; (c) detecting errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of the decoding of step (a); and (d) when no error is detected, outputting the pixel block of the first decoded image signal, and when an error is detected, generating an output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.
  • A signal processing program according to one aspect of the present invention is a signal processing program for digital broadcast reception that causes a processor to execute processing of received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast.
  • The processing includes: (a) a first decoding process of decoding the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; (b) a second decoding process of decoding the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; (c) an error detection process of detecting errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of the first decoding process; and (d) a signal output process of outputting the pixel block of the first decoded image signal when no error is detected and, when an error is detected, generating an output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.
  • FIG. 1 is a diagram showing 13 OFDM segments.
  • FIG. 2 is a functional block diagram schematically showing a configuration of a digital broadcast receiving apparatus according to an embodiment of the present invention.
  • FIG. 3 is a functional block diagram showing an example of a schematic configuration of a delay detection unit.
  • FIG. 4 is a functional block diagram showing another example of the schematic configuration of the delay detection unit.
  • FIG. 5 is a flowchart for explaining a delay amount detection process.
  • FIG. 6 is a flowchart for explaining a delay amount detection process.
  • FIG. 7 is a diagram for explaining a delay amount detection process.
  • FIG. 8 is a functional block diagram showing a schematic configuration of a signal output unit.
  • FIG. 9 is a diagram schematically showing an output image.
  • FIG. 10 is a flowchart for explaining motion detection processing.
  • FIG. 2 is a functional block diagram schematically showing the configuration of the digital broadcast receiver 1 according to the embodiment of the present invention.
  • This digital broadcast receiving apparatus 1 includes an antenna 10, a receiving circuit (front end) 2, and a signal processing circuit 3.
  • The reception circuit 2 includes a tuner 11 and a demodulation circuit 12.
  • The signal processing circuit 3 includes a demultiplexer (DMUX), a first decoder 13, a second decoder 15, delay units 14 and 16, a delay detection unit 17, and a signal output unit 18.
  • The tuner 11 selects the broadcast wave to be received by the antenna 10 and frequency-converts the received broadcast wave to generate an OFDM signal.
  • The demodulation circuit 12 applies an FFT (Fast Fourier Transform) to the OFDM signal from the tuner 11 to generate an OFDM demodulated signal, and further applies decoding processing such as deinterleaving, demapping, and error correction to the OFDM demodulated signal to generate a transport stream TS.
  • The demultiplexer (DMUX) separates the first elementary stream ES1, the second elementary stream ES2, and the TS error information TSe from the transport stream TS, supplies the first elementary stream ES1 to the first decoder 13, and supplies the second elementary stream ES2 to the second decoder 15.
  • The first elementary stream ES1 is a bit stream compliant with the MPEG2 standard, and the second elementary stream ES2 is a bit stream compliant with the H.264 standard.
  • In the course of the error correction processing, the demodulation circuit 12 also detects uncorrectable errors in units of TS packets and supplies TS error information TSe, indicating the positions where such errors occur, to the demultiplexer (DMUX) by including it in the transport stream TS.
  • The first decoder 13 decodes the first elementary stream ES1 according to the MPEG2 coding standard to generate a decoded image signal DS1.
  • The first decoder 13 also generates macroblock error information MBe using the TS error information TSe and supplies this information MBe to the signal output unit 18.
  • The macroblock error information MBe indicates the presence or absence of an error for each macroblock.
  • A macroblock usually consists of 16 x 16 or 8 x 8 pixels.
  • The pixel block of the present invention may be a macroblock itself or a sub-block obtained by further dividing a macroblock.
  • The first decoder 13 supplies motion amount information MV1, including data indicating the presence or absence of skipped macroblocks and motion vectors, to the motion detection unit 19.
  • The second decoder 15 decodes the second elementary stream ES2 according to the H.264 coding standard to generate a decoded image signal DS2. The second decoder 15 also supplies motion amount information MV2, including data indicating the presence or absence of skip macroblocks and motion vectors, to the motion detection unit 19.
  • The first decoder 13 and the second decoder 15 supply image parameters PA1 and PA2, such as the frame rate, image size (horizontal and vertical resolution), and pan-scan (PANSCAN) designation flag of the decoded image signal DS1 and the decoded image signal DS2, respectively, to the delay detection unit 17 and the signal output unit 18.
  • The first delay unit 14 delays the decoded image signal DS1 from the first decoder 13 by the first delay amount DV1 supplied from the delay detection unit 17 to generate a delayed image signal LS1, and supplies this signal LS1 to the signal output unit 18.
  • The second delay unit 16 delays the decoded image signal DS2 from the second decoder 15 by the second delay amount DV2 supplied from the delay detection unit 17 to generate a delayed image signal LS2, and supplies this signal LS2 to the signal output unit 18.
  • The signal output unit 18 generates an output image signal CS based on the delayed image signal LS1 and the delayed image signal LS2 in accordance with the macroblock error information MBe. The configuration of the signal output unit 18 will be described later.
  • The motion detection unit 19 detects the amount of motion of the image represented by the decoded image signal DS1 (hereinafter referred to as the MPEG image) and the amount of motion of the image represented by the decoded image signal DS2 (hereinafter referred to as the H.264 image), and supplies the detection result MD to the delay detection unit 17.
  • The delay detection unit 17 uses the detection result MD to detect the synchronization shift between the decoded image signal DS1 and the decoded image signal DS2, and generates the first delay amount DV1 and the second delay amount DV2 for compensating for the synchronization shift.
  • The configurations of the motion detection unit 19 and the delay detection unit 17 will be described later.
  • FIG. 3 is a functional block diagram illustrating an example of a schematic configuration of the delay detection unit 17.
  • The delay detection unit 17 includes an image size detection unit 20, a down-converter (resolution conversion unit) 21, a first image extraction unit 22, a second image extraction unit 23, a coincidence detection unit 24, a frame rate detection unit 25, and a delay amount table 26.
  • The image size detection unit 20 detects the image size (vertical and horizontal resolution) of the MPEG image and the image size of the H.264 image from the image parameters PA1 and PA2, and gives the conversion ratio DR, which is the ratio of these image sizes, to the down-converter 21.
  • The down-converter 21 converts the resolution of the decoded image signal DS1 at the conversion ratio DR, that is, thins out the decoded image signal DS1 to generate a converted image signal DC1.
  • As a result, the image size of the converted image signal DC1 matches that of the decoded image signal DS2.
  • For example, the 1920 x 1080 resolution of the MPEG image is converted to the 320 x 180 resolution of the H.264 image.
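As an illustration of the resolution conversion just described, the following Python sketch decimates a 1920 x 1080 frame to 320 x 180 by keeping every sixth pixel in each dimension. It is a minimal, hypothetical stand-in for the down-converter 21; a real implementation would normally use filtered scaling rather than simple decimation.

```python
import numpy as np

def downconvert(frame: np.ndarray, target_h: int, target_w: int) -> np.ndarray:
    """Reduce frame resolution by integer decimation (a crude model of the down-converter 21)."""
    src_h, src_w = frame.shape[:2]
    step_h = src_h // target_h   # e.g. 1080 // 180 = 6
    step_w = src_w // target_w   # e.g. 1920 // 320 = 6
    return frame[::step_h, ::step_w][:target_h, :target_w]

# Example: a 1920x1080 MPEG frame thinned out to the 320x180 H.264 size.
mpeg_frame = np.zeros((1080, 1920), dtype=np.uint8)
dc1 = downconvert(mpeg_frame, 180, 320)
print(dc1.shape)  # (180, 320)
```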
  • The image size detection unit 20 also acquires, from the image parameters PA1 and PA2, information AR indicating the aspect ratio of the MPEG image and the aspect ratio of the H.264 image, and supplies the information AR to the coincidence detection unit 24.
  • The frame rate detection unit 25 detects the frame rates of the MPEG image and the H.264 image and supplies the data of the frame rate ratio FR to the first image extraction unit 22 and the coincidence detection unit 24. For example, if the frame rate of the MPEG image is 60 fps (frames per second) or 30 fps and the frame rate of the H.264 image is 15 fps, the frame rate ratio FR is an integer, 4 or 2.
  • The first image extraction unit 22 extracts a number of reference frames corresponding to the frame rate ratio FR from the series of image frames constituting the converted image signal DC1, and supplies these reference frames SD1 to the coincidence detection unit 24.
  • The second image extraction unit 23 uses the delay information TB supplied from the delay amount table 26 to extract a plurality of reference frames from among the series of image frames constituting the decoded image signal DS2, and gives these reference frames SD2 to the coincidence detection unit 24.
  • The delay amount table 26 stores delay time data corresponding to the synchronization shift that occurs at each broadcasting station.
  • The delay amount table 26 is supplied with reception channel information CH indicating the currently selected channel from a control circuit (not shown).
  • The delay amount table 26 supplies the delay time data TB corresponding to the reception channel information CH to the second image extraction unit 23.
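The delay amount table can be pictured as a simple per-channel lookup. The sketch below is only illustrative; the channel identifiers and delay values are hypothetical and are not taken from the patent.

```python
# Hypothetical per-broadcaster delay table, keyed by reception channel information CH.
DELAY_TABLE_MS = {
    "ch_021": 500,    # assumed station-specific encoder/multiplexer delay, in milliseconds
    "ch_041": 1200,
    "ch_061": 0,
}

def lookup_delay(ch: str, default_ms: int = 0) -> int:
    """Return the station-specific delay TB used to position the search time window."""
    return DELAY_TABLE_MS.get(ch, default_ms)

print(lookup_delay("ch_041"))  # 1200
```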
  • The coincidence detection unit 24 compares the reference frames SD1 and SD2 and searches for the combination of reference frames that minimizes the difference in the amount of information between them.
  • Based on the detection result MD from the motion detection unit 19, the coincidence detection unit 24 can determine whether or not there is motion in both the MPEG image and the H.264 image.
  • The coincidence detection unit 24 also has a function of suspending the search process when there is almost no motion in either the MPEG image or the H.264 image, since the matching accuracy of the reference frames would then be low.
  • FIG. 4 is a functional block diagram showing another example of the schematic configuration of the delay detection unit 17. In the delay detection unit 17 shown in FIG. 3, the resolution of the MPEG image is converted to that of the H.264 image by the down-converter 21; in the delay detection unit 17 shown in FIG. 4, the resolution of the H.264 image is converted to that of the MPEG image by an up-converter 27.
  • The first image extraction unit 22A extracts the reference frames SD1 from the series of image frames constituting the decoded image signal DS1, and the second image extraction unit 23B extracts the reference frames SD2 from the series of image frames constituting the converted image signal DC1 supplied from the up-converter 27.
  • The coincidence detection unit 24A executes the search process using reference frames SD1 and SD2 at the higher resolution of the MPEG image.
  • The other parts of the configuration are the same as those of the delay detection unit 17 shown in FIG. 3.
  • Next, the delay amount detection process is described with reference to the flowcharts of FIG. 5 and FIG. 6, which are connected to each other via connectors C1 and C2.
  • First, a control circuit determines whether or not a simulcast is being received; if a simulcast is not being received, the delay amount detection process is terminated. For example, a control circuit (not shown) can decode a data transport stream separated by the demodulation circuit 12 to obtain EPG (electronic program guide) information and determine, on the basis of this EPG information, whether or not a simulcast is being received.
  • When a simulcast is being received, the image size detection unit 20 obtains the image size of the MPEG image and the image size of the H.264 image from the image parameters PA1 and PA2 and calculates the ratio of these image sizes (step S2).
  • The image size detection unit 20 supplies the conversion ratio DR, which is the ratio of the image sizes, to the down-converter 21, and the down-converter 21 converts the resolution of the MPEG image (decoded image signal DS1) to the resolution of the H.264 image at a scaling factor corresponding to the conversion ratio DR, thereby generating the converted image signal DC1 (step S3).
  • Next, the image size detection unit 20 acquires the aspect ratio of the MPEG image and the aspect ratio of the H.264 image from the image parameters PA1 and PA2 (step S4) and gives the aspect ratio data AR to the coincidence detection unit 24. The coincidence detection unit 24 determines whether these aspect ratios are the same (step S5); if they are not, the delay amount detection process is terminated.
  • Next, the frame rate detection unit 25 detects the frame rate of the MPEG image and the frame rate of the H.264 image from the image parameters PA1 and PA2 (step S6). The frame rate detection unit 25 further determines whether or not the frame rate of the H.264 image is fixed (step S7); if the frame rate of the H.264 image is variable, the delay amount detection process is terminated.
  • If the frame rate of the H.264 image is fixed, the frame rate detection unit 25 calculates the frame rate ratio FR between the frame rate of the MPEG image and the frame rate of the H.264 image (step S8) and supplies it to the first image extraction unit 22 and the coincidence detection unit 24.
  • The coincidence detection unit 24 determines whether or not the frame rate ratio FR is an integer ratio (step S9); if the frame rate ratio FR is not an integer ratio, the delay amount detection process is terminated.
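The pre-checks of steps S4 through S9 can be summarized as a short guard function. The following Python sketch is a schematic reading of those steps, assuming the aspect ratios and frame rates have already been extracted from the image parameters PA1 and PA2.

```python
def delay_detection_prechecks(mpeg_aspect, h264_aspect, mpeg_fps, h264_fps, h264_fps_fixed):
    """Return (ok, frame_rate_ratio); ok is False when delay detection should be abandoned."""
    # Steps S4-S5: abort unless the aspect ratios are the same.
    if mpeg_aspect != h264_aspect:
        return False, None
    # Steps S6-S7: abort unless the H.264 frame rate is fixed.
    if not h264_fps_fixed:
        return False, None
    # Steps S8-S9: abort unless the frame-rate ratio is an integer (e.g. 60/15 = 4, 30/15 = 2).
    ratio = mpeg_fps / h264_fps
    if abs(ratio - round(ratio)) > 1e-9:
        return False, None
    return True, int(round(ratio))

print(delay_detection_prechecks("16:9", "16:9", 30.0, 15.0, True))  # (True, 2)
```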
  • In step S10, the motion detection unit 19 detects the amount of motion of the MPEG image on the basis of the motion amount information MV1 or the decoded image signal DS1 supplied from the first decoder 13. Next, the motion detection unit 19 determines whether or not the motion amount of the MPEG image is outside a predetermined range, that is, whether or not the MPEG image has sufficient motion for detecting the delay amount (step S11). When the motion amount of the MPEG image is not outside the predetermined range, the motion detection unit 19 returns the process to step S10.
  • When the motion amount of the MPEG image is outside the predetermined range, the motion detection unit 19 detects the amount of motion of the H.264 image on the basis of the motion amount information MV2 or the decoded image signal DS2 supplied from the second decoder 15 (step S12).
  • The motion detection unit 19 then determines whether or not the motion amount of the H.264 image is outside the predetermined range, that is, whether or not the H.264 image has sufficient motion for detecting the delay amount (step S13).
  • When the motion amount of the H.264 image is not outside the predetermined range, the motion detection unit 19 returns the process to step S10; when it is outside the predetermined range, the process proceeds to the next step S14.
  • The detection result MD from the motion detection unit 19 is supplied to the coincidence detection unit 24 (FIG. 3). In this way, the process advances to the next step only when both the MPEG image and the H.264 image, from which an accurate delay amount is to be detected, have motion amounts exceeding a certain range.
  • In step S14, the first image extraction unit 22 extracts M (M is a positive integer) reference frames SD1 from the MPEG image and provides these reference frames SD1 to the coincidence detection unit 24.
  • The number M of reference frames SD1 is determined on the basis of the frame rate ratio FR between the MPEG image and the H.264 image. For example, when the frame rate ratio FR is "2", two reference frames SD1 can be extracted.
  • Next, the second image extraction unit 23 obtains, from the delay amount table 26, the delay information TB indicating the delay amount Δt specific to the selected broadcasting station (step S15). Subsequently, the second image extraction unit 23 sets a time window around the time obtained by delaying the current time t by the delay amount Δt (step S16).
  • The time window is described with reference to FIG. 7. FIG. 7 shows a series of image frames M0, M1, M2, ... constituting the MPEG image and a series of image frames H0, H1, H2, ... constituting the H.264 image; the MPEG image and the H.264 image are not synchronized with each other.
  • In this example, the two reference frames M5 and M6 constituting the MPEG image are sampled, and a time window of length T is set around the time obtained by delaying the current time t by the delay amount Δt.
  • The second image extraction unit 23 extracts the N (N is a positive integer) image frames included in the time window as reference frames and supplies these reference frames SD2 to the coincidence detection unit 24 (step S17).
  • In the example of FIG. 7, the three reference frames H1, H2, and H3 included in the time window of length T are extracted.
  • The coincidence detection unit 24 then searches for the combination that minimizes the difference between a reference frame SD1 from the first image extraction unit 22 and a reference frame SD2 from the second image extraction unit 23 (step S18).
  • In the example of FIG. 7, the possible combinations (Mx, Hx) of a reference frame Mx of the MPEG image and a reference frame Hx of the H.264 image are (M5, H1), (M5, H2), (M5, H3), (M6, H1), (M6, H2), and (M6, H3); among these, the combination that minimizes the difference in the amount of information between the reference frames is searched for.
  • For example, the luminance difference between the reference frames of each combination may be calculated pixel by pixel, and the combination that minimizes the sum of these luminance differences may be searched for.
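A minimal sketch of the search in step S18, under the assumption that both sets of reference frames have already been brought to a common resolution and are available as 8-bit luminance arrays; the sum of absolute luminance differences (SAD) stands in for the "difference in the amount of information".

```python
import numpy as np

def best_match(mpeg_refs, h264_refs):
    """Return (i, j, sad) for the MPEG/H.264 reference-frame pair with the smallest luminance SAD."""
    best = None
    for i, m in enumerate(mpeg_refs):
        for j, h in enumerate(h264_refs):
            sad = np.abs(m.astype(np.int32) - h.astype(np.int32)).sum()
            if best is None or sad < best[2]:
                best = (i, j, sad)
    return best

# Example with the combinations of FIG. 7: two MPEG reference frames against three H.264 frames.
rng = np.random.default_rng(0)
mpeg_refs = [rng.integers(0, 256, (180, 320), dtype=np.uint8) for _ in range(2)]   # M5, M6
h264_refs = [rng.integers(0, 256, (180, 320), dtype=np.uint8) for _ in range(3)]   # H1, H2, H3
print(best_match(mpeg_refs, h264_refs))
```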
  • The coincidence detection unit 24 determines whether or not the search has succeeded (step S19). For example, when the sum of the luminance differences exceeds a predetermined threshold, or when the correlation is less than a predetermined threshold, it can be determined that the degree of coincidence between the reference frames is low and that the search has failed.
  • When the search has failed, the coincidence detection unit 24 enlarges the time window length T in order to increase the number N of reference frames (step S21), and the process then returns to step S17.
  • When the search has succeeded, the coincidence detection unit 24 determines that the two reference frames of the combination obtained as a result of the search match each other, and calculates the delay amounts DV1 and DV2 on the basis of the time difference between these reference frames (step S20). In the example of FIG. 7, the delay amounts DV1 and DV2 may be set as appropriate so as to compensate for the time difference between the matched reference frames. The delay amount detection process then ends.
  • The coincidence detection unit 24 obtains the difference between reference frames by calculating the luminance difference or the correlation for all the pixels of the reference frames; to improve the processing speed, however, the luminance difference or the correlation may be calculated only for the pixels in the central area of each reference frame, excluding the image areas at the upper and lower ends or the left and right edges.
  • The delay amounts DV1 and DV2 calculated as described above are supplied to the first delay unit 14 and the second delay unit 16, respectively.
  • The first delay unit 14 adds the first delay amount DV1 to the time stamp (PTS: Presentation Time Stamp) value of the MPEG image and supplies the delayed image signal LS1 to the signal output unit 18 at the timing based on the added value.
  • Similarly, the second delay unit 16 adds the second delay amount DV2 to the time stamp value of the H.264 image and supplies the delayed image signal LS2 to the signal output unit 18 at the timing based on the added value.
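Conceptually, applying the delay amounts comes down to shifting the presentation time stamps before output. A rough sketch, assuming 90 kHz PTS units and a delay expressed in milliseconds (both assumptions, not details stated in the patent):

```python
PTS_CLOCK_HZ = 90_000  # MPEG system clock commonly used for presentation time stamps

def delayed_pts(original_pts: int, delay_ms: int) -> int:
    """Add a delay amount (DV1 or DV2), given in milliseconds, to a PTS value."""
    return original_pts + delay_ms * PTS_CLOCK_HZ // 1000

# Example: delaying the MPEG image by DV1 = 500 ms.
print(delayed_pts(original_pts=1_800_000, delay_ms=500))  # 1845000
```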
  • In this way, the delay units 14 and 16 delay the decoded image signals DS1 and DS2 in accordance with the delay amounts DV1 and DV2, respectively.
  • Alternatively, the delay unit 14 may be arranged before the first decoder 13 to adjust the timing of decoding the elementary stream ES1, and the delay unit 16 may be arranged before the second decoder 15 to adjust the timing of decoding the elementary stream ES2.
  • A configuration may also be adopted in which the delay units 14 and 16 are removed from the signal processing circuit 3 and the delay detection unit 17 adjusts the decoding timings of the decoders 13 and 15, respectively.
  • The delay detection unit 17 does not need to calculate the delay amounts DV1 and DV2 continuously; in a reception environment in which the synchronization shift between the decoded image signals DS1 and DS2 may fluctuate, however, the delay amounts DV1 and DV2 may be calculated periodically.
  • FIG. 8 is a functional block diagram showing a schematic configuration of the signal output unit 18.
  • The signal output unit 18 includes an up-converter 30, a frame interpolation unit 31, an image division unit 32, and a pixel block selection unit 33.
  • The image parameters PA1 and PA2 are supplied from the first decoder 13 and the second decoder 15, respectively, to the up-converter (resolution conversion unit) 30, the frame interpolation unit 31, and the image division unit 32.
  • The up-converter 30 is a pixel interpolation block that converts the resolution of the delayed image signal LS2 into the resolution of the delayed image signal LS1, thereby matching the image size of the delayed image signal LS2 with the image size of the delayed image signal LS1.
  • The frame interpolation unit 31 is a frame rate conversion block that interpolates the output of the up-converter 30 to convert the frame rate of the delayed image signal LS2 to the frame rate of the delayed image signal LS1. The image division unit 32 then divides the output of the frame interpolation unit 31 into macroblocks and supplies the macroblocks to the pixel block selection unit 33.
  • The pixel block selection unit 33 selects, in macroblock units, either the output signal of the image division unit 32 or the delayed image signal LS1 in accordance with the macroblock error information MBe, and supplies the selected signal as the output image signal CS. Specifically, when there is no error in a macroblock of the delayed image signal LS1, the pixel block selection unit 33 selects and outputs that macroblock of the delayed image signal LS1; when there is an error in a macroblock of the delayed image signal LS1, it selects and outputs the corresponding macroblock in the output signal of the image division unit 32 in place of the macroblock in which the error occurred.
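The macroblock substitution performed by the pixel block selection unit 33 can be sketched as follows, assuming both delayed image signals have already been brought to the same resolution and frame rate and that the error map marks 16 x 16 macroblocks (the array shapes and values are hypothetical).

```python
import numpy as np

MB = 16  # macroblock size in pixels

def compose_output(ls1: np.ndarray, ls2_up: np.ndarray, mb_error: np.ndarray) -> np.ndarray:
    """Build the output frame CS: take each macroblock from LS1 unless it is marked erroneous,
    in which case take the co-located macroblock from the up-converted LS2."""
    out = ls1.copy()
    for by in range(mb_error.shape[0]):
        for bx in range(mb_error.shape[1]):
            if mb_error[by, bx]:
                ys, xs = by * MB, bx * MB
                out[ys:ys + MB, xs:xs + MB] = ls2_up[ys:ys + MB, xs:xs + MB]
    return out

# Example: a 1280x720 frame with one damaged macroblock replaced from the mobile-broadcast image.
ls1 = np.full((720, 1280), 200, dtype=np.uint8)
ls2_up = np.full((720, 1280), 100, dtype=np.uint8)
mb_error = np.zeros((720 // MB, 1280 // MB), dtype=bool)
mb_error[3, 5] = True
cs = compose_output(ls1, ls2_up, mb_error)
print(cs[3 * MB, 5 * MB], cs[0, 0])  # 100 200
```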
  • As shown in FIG. 9, the picture 40 represented by the output image signal CS is composed of high-quality macroblocks 41 originating from the broadcast for fixed receivers and low-quality macroblocks 42a and 42b originating from the broadcast for mobile receivers.
  • A picture in which no errors occur at all is composed only of high-quality macroblocks from the broadcast for fixed receivers, so a high-quality display image can be supplied.
  • As described above, the delay detection unit 17 detects the synchronization shift between the MPEG image and the H.264 image and supplies the delay amounts DV1 and DV2 for compensating for that shift to the delay units 14 and 16, so the delay units 14 and 16 can supply the signal output unit 18 with an MPEG image and an H.264 image that are synchronized with each other.
  • The motion detection unit 19 (FIG. 2) detects whether the MPEG image and the H.264 image contain enough motion to calculate the delay amounts, and the detection result MD is supplied to the coincidence detection unit 24. Since the coincidence detection unit 24 generates the delay amounts DV1 and DV2 only when the detection result MD is positive, generation of erroneous delay amounts DV1 and DV2 can be reliably avoided.
  • The first method of calculating the motion amount calculates, as the motion amount, the sum of luminance differences between a plurality of temporally consecutive image frames. Specifically, when three temporally consecutive image frames P1, P2, and P3 are observed, the motion detection unit 19 calculates a motion amount DB1 from the luminance differences between the image frames P1 and P2 and a motion amount DB2 from the luminance differences between the image frames P2 and P3.
  • Only when the motion amount DB1 is equal to or greater than a predetermined threshold TH1 and the motion amount DB2 is equal to or greater than a predetermined threshold TH2 can the motion detection unit 19 determine that the image has motion and that the motion amount is outside the predetermined range (steps S11 and S13; FIG. 6).
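A sketch of this first calculation method over three consecutive frames, where DB1 and DB2 are the summed absolute luminance differences; the threshold values TH1 and TH2 below are assumed for illustration only.

```python
import numpy as np

def has_sufficient_motion(p1, p2, p3, th1=1_000_000, th2=1_000_000):
    """First method: the image is judged to be moving only if both inter-frame
    luminance-difference sums are at or above their thresholds (cf. steps S11 and S13)."""
    db1 = np.abs(p2.astype(np.int32) - p1.astype(np.int32)).sum()
    db2 = np.abs(p3.astype(np.int32) - p2.astype(np.int32)).sum()
    return db1 >= th1 and db2 >= th2

rng = np.random.default_rng(1)
frames = [rng.integers(0, 256, (180, 320), dtype=np.uint8) for _ in range(3)]
print(has_sufficient_motion(*frames))
```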
  • The second method of calculating the motion amount uses the number of skip macroblocks.
  • When a macroblock of the current image to be encoded does not change from the corresponding macroblock of the temporally preceding image and its motion vector is zero, that macroblock is not encoded and is skipped.
  • Such skip macroblock information is acquired by the first decoder 13 and the second decoder 15. The motion detection unit 19 can therefore obtain the skip macroblock information from the image parameters PA1 and PA2 supplied from the first decoder 13 and the second decoder 15 and calculate the number of skip macroblocks as the motion amount.
  • When the number of skip macroblocks is sufficiently small, the motion detection unit 19 can determine that the motion amount is outside the predetermined range and that the image is moving (steps S11 and S13; FIG. 6).
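The second method reduces to counting skip macroblocks per frame. In the sketch below the threshold is an assumed value; the comparison direction (few skips implying motion) follows the description above.

```python
def is_moving_by_skip_count(skip_flags, max_skips_for_motion=100):
    """Second method: few skip macroblocks means the picture content is changing,
    so the motion amount is treated as outside the predetermined range."""
    return sum(1 for s in skip_flags if s) <= max_skips_for_motion

# Example: 3600 macroblocks of a 1280x720 frame, 40 of them skipped.
flags = [True] * 40 + [False] * (3600 - 40)
print(is_moving_by_skip_count(flags))  # True
```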
  • The third method of calculating the motion amount uses motion vectors. This third calculation method is described below with reference to the flowchart of FIG. 10.
  • The motion detection unit 19 can acquire motion vector information from the image parameters PA1 and PA2. Referring to FIG. 10, in step S30 the motion detection unit 19 searches one image frame for a macroblock whose motion vector norm is greater than the threshold value V. If no macroblock whose norm exceeds the threshold can be found, the motion detection unit 19 determines that the search has failed (step S31), further determines that the motion amount of the image is within the predetermined range (step S37), and ends the motion detection process.
  • If the motion detection unit 19 finds a macroblock whose norm exceeds the threshold, it determines that the search has succeeded (step S31) and sets a local region centered on the found macroblock (step S32). This local region may be set to an area containing several to several hundred macroblocks.
  • Next, the motion detection unit 19 counts the number of macroblocks in the local region whose motion vector norm exceeds the threshold value V (step S33), and determines whether or not the count value is equal to or greater than a set value (step S34).
  • When the count value is determined to be equal to or greater than the set value (step S34), the motion detection unit 19 determines that a moving object (for example, an image of an object moving against an unchanging background) exists in the local region, and the process proceeds to the next step S35.
  • On the other hand, when the count value is determined to be less than the set value (step S34), the motion detection unit 19 determines that there is no moving object in the local region and that the motion amount of the image is within the predetermined range (step S37), and ends the motion detection process.
  • In step S35, the motion detection unit 19 determines whether or not the motion vector angles of all the macroblocks in the local region fall within a predetermined angle range (for example, within a range of about ±15 degrees). If the motion vector angles of all the macroblocks are not within the predetermined angle range, the motion detection unit 19 determines that the motion amount of the image is within the predetermined range (step S37) and ends the motion detection process.
  • If the motion vector angles of all the macroblocks fall within the predetermined angle range, the motion detection unit 19 determines that the moving object is moving in approximately one direction and, further, that the motion amount of the image exceeds the predetermined range (step S36), and then ends the motion detection process.
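The third method, following the flow of FIG. 10, can be sketched as below. The norm threshold, local-region size, and count threshold are assumed values; the roughly one-direction test approximates the ±15 degree angle check described above.

```python
import math

def motion_by_vectors(mv_field, norm_th=8.0, region=5, count_th=10, angle_th_deg=15.0):
    """Third method: look for a local region of macroblocks whose motion vectors are
    both large (norm above the threshold) and roughly aligned in one direction."""
    rows, cols = len(mv_field), len(mv_field[0])
    # Step S30: find a seed macroblock whose motion-vector norm exceeds the threshold.
    seed = next(((y, x) for y in range(rows) for x in range(cols)
                 if math.hypot(*mv_field[y][x]) > norm_th), None)
    if seed is None:
        return False                      # steps S31/S37: search failed, little motion
    y0, x0 = seed                         # step S32: local region centred on the seed
    strong = [mv_field[y][x]
              for y in range(max(0, y0 - region), min(rows, y0 + region + 1))
              for x in range(max(0, x0 - region), min(cols, x0 + region + 1))
              if math.hypot(*mv_field[y][x]) > norm_th]
    if len(strong) < count_th:            # steps S33/S34: not enough strong vectors
        return False
    angles = [math.degrees(math.atan2(vy, vx)) for vx, vy in strong]
    spread = max(angles) - min(angles)    # step S35: vectors must point roughly one way
    return spread <= 2 * angle_th_deg     # steps S36/S37

# Example: a 20x20 macroblock field with a coherent 12x12 patch moving to the right.
field = [[(0.0, 0.0)] * 20 for _ in range(20)]
for y in range(4, 16):
    for x in range(4, 16):
        field[y][x] = (10.0, 0.5)
print(motion_by_vectors(field))  # True
```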
  • In step S18, the coincidence detection unit 24 may search for the combination that minimizes the difference in the amount of information between the reference frames using only the local region set in step S32. The difference between reference frames then need not be calculated for regions other than the local region, so the processing speed can be improved.
  • The configuration of the signal processing circuit 3 may be realized by hardware, or may be realized by a program or program code recorded on a recording medium such as an optical disk. Such a program or program code causes a processor such as a CPU to execute all or part of the functions of the signal processing circuit 3.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Circuits Of Receivers In General (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A signal processing device for generating as natural a displayed video as possible with as high a quality as possible from the received signals of digital broadcasts provided by simulcast. The signal processing device has first and second decoders and a signal output section. The first decoder detects an error in each pixel block composed of a predetermined number of pixels during decoding of the received signal. The signal output section outputs the pixel block of the decoded image signal supplied from the first decoder if no error is detected. If an error is detected, the signal output section outputs the pixel block of the second decoded image signal supplied from the second decoder, thus creating an output image signal.

Description

Specification

Signal processing apparatus for digital broadcast reception, signal processing method, signal processing program, and digital broadcast receiving apparatus
Technical Field

[0001] The present invention relates to a technique for processing a plurality of digital broadcast reception signals, and particularly to a technique for processing two types of digital broadcast reception signals transmitted by simulcast.

Background Art
[0002] Japanese terrestrial digital broadcasting is defined by the ISDB-T (Integrated Services Digital Broadcasting-Terrestrial) standard and employs OFDM (Orthogonal Frequency Division Multiplexing) modulation and a hierarchical transmission scheme. According to the ISDB-T standard, as schematically shown in FIG. 1, the transmission bandwidth of one channel (about 5.6 MHz) is divided into 13 subbands, that is, OFDM segments S1 to S13, and these OFDM segments S1 to S13 can be further divided into up to three groups, or layers. Different transmission characteristics, such as the carrier modulation scheme, inner-code coding rate, and time interleave length, can be set for each layer, and a different broadcast program can be transmitted on each layer. For example, the twelve OFDM segments S1 to S6 and S8 to S13 can be used to provide high-quality high-definition television (HDTV) broadcasting for fixed receivers, or can be divided into three layers to provide standard-definition television (SDTV) broadcasting for fixed receivers on each layer, while the single segment S7 located at the center can be used to provide digital broadcasting for mobile receivers (simplified video broadcasting). Digital broadcasting for fixed receivers employs MPEG2 (Moving Picture Experts Group phase 2) as the image compression scheme, and digital broadcasting for mobile receivers employs H.264 (MPEG4 AVC) as the image compression scheme.
[0003] When terrestrial digital broadcasting starts in earnest, broadcasters are expected to provide so-called simulcasts, in which a program with the same content is provided simultaneously on both the digital broadcast for fixed receivers and the digital broadcast for mobile receivers. Several techniques have therefore been proposed for receiving digital broadcasts of as high a quality as possible in a mobile body such as a vehicle by using the simulcast. For example, Patent Document 1 (Japanese Unexamined Patent Application Publication No. 2004-312361) and Patent Document 2 (Japanese Unexamined Patent Application Publication No. 2004-166173) disclose receiving apparatuses that automatically switch the received broadcast between the digital broadcast for mobile receivers and the digital broadcast for fixed receivers according to the reception quality.

[0004] In the receiving apparatuses of Patent Documents 1 and 2, however, when the received broadcast is switched from one of the digital broadcast for mobile receivers and the digital broadcast for fixed receivers to the other, the image quality of the image frames changes suddenly, so an unnatural image is displayed, which gives a sense of discomfort to viewers of the broadcast program.

[0005] Moreover, even if the digital broadcast for mobile receivers and the digital broadcast for fixed receivers are provided by simulcast, the contents of the two broadcasts are not always accurately synchronized in time. In such a case, when the received broadcast is switched from the digital broadcast for mobile receivers to the digital broadcast for fixed receivers, an unnatural image is displayed, which may give a sense of discomfort to viewers of the broadcast program.

Patent Document 1: Japanese Unexamined Patent Application Publication No. 2004-312361
Patent Document 2: Japanese Unexamined Patent Application Publication No. 2004-166173
Disclosure of the Invention

[0006] In view of the above, an object of the present invention is to provide a signal processing device for digital broadcast reception, a signal processing method, a signal processing program, and a digital broadcast receiving apparatus capable of generating a display image of as high quality and as natural an appearance as possible from a plurality of digital broadcast reception signals provided by simulcast.

[0007] A signal processing apparatus according to one aspect of the present invention is a signal processing apparatus for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast. The signal processing apparatus includes: a first decoder that decodes the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; a second decoder that decodes the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; and a signal output unit that generates an output image signal based on the first decoded image signal and the second decoded image signal and outputs the output image signal. The first decoder detects errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of decoding the received signal. When no error is detected, the signal output unit outputs the pixel block of the first decoded image signal; when an error is detected, it generates the output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.

[0008] A digital broadcast receiving apparatus according to one aspect of the present invention is a digital broadcast receiving apparatus that receives a first digital broadcast and a second digital broadcast transmitted by simulcast. The digital broadcast receiving apparatus includes: a reception circuit that supplies the received signals of the first and second digital broadcasts; a first decoder that decodes the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; a second decoder that decodes the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; and a signal output unit that generates an output image signal based on the first decoded image signal and the second decoded image signal and outputs the output image signal. The first decoder detects errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of decoding the received signal. When no error is detected, the signal output unit outputs the pixel block of the first decoded image signal; when an error is detected, it generates the output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.

[0009] A signal processing method according to one aspect of the present invention is a signal processing method for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast. The signal processing method includes: (a) decoding the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; (b) decoding the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; (c) detecting errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of the decoding of step (a); and (d) when no error is detected, outputting the pixel block of the first decoded image signal, and when an error is detected, generating an output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.

[0010] A signal processing program according to one aspect of the present invention is a signal processing program for digital broadcast reception that causes a processor to execute processing of received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast. The processing includes: (a) a first decoding process of decoding the received signal of the first digital broadcast according to a first encoding standard to generate a first decoded image signal; (b) a second decoding process of decoding the received signal of the second digital broadcast according to a second encoding standard different from the first encoding standard to generate a second decoded image signal; (c) an error detection process of detecting errors in units of pixel blocks each consisting of a predetermined number of pixels in the course of the first decoding process; and (d) a signal output process of outputting the pixel block of the first decoded image signal when no error is detected and, when an error is detected, generating an output image signal by outputting, in place of the pixel block in which the error was detected, the corresponding pixel block of the second decoded image signal.
Brief Description of the Drawings

[0011]
FIG. 1 is a diagram showing the 13 OFDM segments.
FIG. 2 is a functional block diagram schematically showing the configuration of a digital broadcast receiving apparatus according to an embodiment of the present invention.
FIG. 3 is a functional block diagram showing an example of the schematic configuration of a delay detection unit.
FIG. 4 is a functional block diagram showing another example of the schematic configuration of the delay detection unit.
FIG. 5 is a flowchart for explaining the delay amount detection process.
FIG. 6 is a flowchart for explaining the delay amount detection process.
FIG. 7 is a diagram for explaining the delay amount detection process.
FIG. 8 is a functional block diagram showing the schematic configuration of a signal output unit.
FIG. 9 is a diagram schematically showing an output image.
FIG. 10 is a flowchart for explaining the motion detection process.
Explanation of Reference Numerals

[0012]
1 Digital broadcast receiving apparatus
2 Reception circuit (front end)
3 Signal processing circuit
10 Antenna
11 Tuner
12 Demodulation circuit
13 First decoder
14 First delay unit
15 Second decoder
16 Second delay unit
17 Delay detection unit
18 Signal output unit
19 Motion detection unit
発明を実施するための形態  BEST MODE FOR CARRYING OUT THE INVENTION
[0013] 本出願は、 日本国特許出願第 2005— 335651号を優先権主張の基礎とするもの であり、当該基礎出願の内容は本願に援用されるものとする。  [0013] This application is based on Japanese Patent Application No. 2005-335651 for priority claim, and the contents of the basic application are incorporated herein by reference.
[0014] 以下、本発明に係る種々の実施例について説明する。 [0014] Various embodiments according to the present invention will be described below.
[0015] 図 2は、本発明に係る実施例のデジタル放送受信装置 1の構成を概略的に示す機 能ブロック図である。このデジタル放送受信装置 1は、アンテナ 10と受信回路 (フロン トエンド) 2と信号処理回路 3とで構成されている。受信回路 2は、チューナ 11と復調 回路 12とを有する。信号処理回路 3は、デマルチプレキサ(DMUX)、第 1デコーダ 13、第 2デコーダ 15、遅延部 14, 15、遅延検出部 17および信号出力部 18を有して いる。  FIG. 2 is a functional block diagram schematically showing the configuration of the digital broadcast receiver 1 according to the embodiment of the present invention. This digital broadcast receiving apparatus 1 includes an antenna 10, a receiving circuit (front end) 2, and a signal processing circuit 3. The reception circuit 2 includes a tuner 11 and a demodulation circuit 12. The signal processing circuit 3 includes a demultiplexer (DMUX), a first decoder 13, a second decoder 15, delay units 14 and 15, a delay detection unit 17, and a signal output unit 18.
[0016] 図 2を参照すると、チューナ 11は、アンテナ 10で受信すべき放送波を選局し、受信 した放送波を周波数変換して OFDM信号を生成する。復調回路 12は、チューナ 11 力 の OFDM信号に FFT (高速フーリエ変換)を施して OFDM復調信号を生成し、 さらにこの OFDM復調信号に対してディンターリーブ、デマッピングおよび誤り訂正 などの復号化処理を施して、トランスポートストリーム TSを生成する。デマルチプレキ サ(DMUX)は、トランスポートストリーム TSから、第 1エレメンタリストリーム ES1,第 2 エレメンタリストリーム ES2および TSエラー情報 TSeを分離し、第 1エレメンタリストリ ーム ES1を第 1デコーダ 13に供給し、第 2エレメンタリストリーム ES2を第 2デコーダ 1 5に供給する。第 1エレメンタリストリーム ES1は MPEG2規格に準拠したビット列であ り、第 2エレメンタリストリーム ES2は H. 264規格に準拠したビット列である。 Referring to FIG. 2, tuner 11 selects a broadcast wave to be received by antenna 10 and frequency-converts the received broadcast wave to generate an OFDM signal. The demodulator circuit 12 generates an OFDM demodulated signal by performing FFT (Fast Fourier Transform) on the OFDM signal of the tuner 11 power, and further performs decoding processing such as deinterleaving, demapping and error correction on the OFDM demodulated signal. To generate a transport stream TS. The demultiplexer (DMUX) starts with the first elementary stream ES1 and second from the transport stream TS. The elementary stream ES2 and the TS error information TSe are separated, the first elementary stream ES1 is supplied to the first decoder 13, and the second elementary stream ES2 is supplied to the second decoder 15. The first elementary stream ES1 is a bit string compliant with the MPEG2 standard, and the second elementary stream ES2 is a bit string compliant with the H.264 standard.
[0017] また、復調回路 12は、誤り訂正処理の過程で、訂正不可能なエラーを TSパケット 単位で検出し、このエラーの発生位置を示す TSエラー情報 TSeをトランスポートスト リーム TSに含めてデマルチプレキサ(DMUX)に供給する。  [0017] Further, the demodulation circuit 12 detects an uncorrectable error in units of TS packets in the course of error correction processing, and includes TS error information TSe indicating the position where this error occurs in the transport stream TS. Supply to demultiplexer (DMUX).
[0018] 第 1デコーダ 13は、第 1エレメンタリストリーム ES1を MPEG2の符号化規格に従つ て復号化して復号画像信号 DS1を生成する。ここで、第 1デコーダ 13は、 TSエラー 情報 TSeを用いてマクロブロックエラー情報 MBeを生成し、この情報 MBeを信号出 力部 18に供給する。マクロブロックエラー情報 MBeは、各マクロブロックについてェ ラーの有無を示す情報である。マクロブロックは、通常、 16 X 16個または 8 X 8個の 画素で構成されている。本発明の画素ブロックは、マクロブロック自体、或いはマクロ ブロックをさらに分割して得られるサブブロックでもよい。第 1デコーダ 13は、スキップ マクロブロック(skipped macroblock)の有無を示すデータや動きベクトルを含む動き 量情報 MV1を動き検出部 19に供給している。  [0018] The first decoder 13 decodes the first elementary stream ES1 according to the MPEG2 coding standard to generate a decoded image signal DS1. Here, the first decoder 13 generates macroblock error information MBe using the TS error information TSe, and supplies this information MBe to the signal output unit 18. Macroblock error information MBe is information indicating the presence or absence of an error for each macroblock. A macroblock usually consists of 16 x 16 or 8 x 8 pixels. The pixel block of the present invention may be a macroblock itself or a subblock obtained by further dividing the macroblock. The first decoder 13 supplies the motion detection unit 19 with motion amount information MV1 including data indicating the presence or absence of skipped macroblocks and motion vectors.
[0019] 他方、第 2デコーダ 15は、第 2エレメンタリストリーム ES2を H. 264の符号化規格に 従って復号化して復号画像信号 DS2を生成する。また、第 2デコーダ 15は、スキップ マクロブロックの有無を示すデータや動きベクトルを含む動き量情報 MV2を動き検 出部 19に供給している。  On the other hand, the second decoder 15 decodes the second elementary stream ES2 in accordance with the H.264 coding standard to generate a decoded image signal DS2. Further, the second decoder 15 supplies motion amount information MV2 including data indicating the presence / absence of a skip macroblock and a motion vector to the motion detection unit 19.
[0020] さらに、第 1デコーダ 13と第 2デコーダ 15は、それぞれ、復号画像信号 DS1と復号 画像信号 DS2のフレームレート、画像サイズ (水平解像度および垂直解像度)、およ びパンスキャン (PANSCAN)の指定フラグなどの画像パラメータ PA1 , PA2を、遅延 検出部 17と信号出力部 18とに供給している。  [0020] Further, the first decoder 13 and the second decoder 15 respectively have a frame rate, an image size (horizontal resolution and vertical resolution), and a pan scan (PANSCAN) of the decoded image signal DS1 and the decoded image signal DS2. Image parameters PA1 and PA2 such as a designation flag are supplied to the delay detection unit 17 and the signal output unit 18.
[0021] The first delay unit 14 delays the decoded image signal DS1 from the first decoder 13 by the first delay amount DV1 supplied from the delay detection unit 17 to generate a delayed image signal LS1, and supplies this signal LS1 to the signal output unit 18. On the other hand, the second delay unit 16 delays the decoded image signal DS2 from the second decoder 15 by the second delay amount DV2 supplied from the delay detection unit 17 to generate a delayed image signal LS2, and supplies this signal LS2 to the signal output unit 18. The signal output unit 18 generates an output image signal CS based on the delayed image signal LS1 and the delayed image signal LS2 in accordance with the macroblock error information MBe. The configuration of the signal output unit 18 will be described later.
[0022] The motion detection unit 19 detects the amount of motion of the image represented by the decoded image signal DS1 (hereinafter referred to as the MPEG image) and the amount of motion of the image represented by the decoded image signal DS2 (hereinafter referred to as the H.264 image), and supplies the detection result MD to the delay detection unit 17. The delay detection unit 17 uses the detection result MD to detect a synchronization offset between the decoded image signal DS1 and the decoded image signal DS2, and generates a first delay amount DV1 and a second delay amount DV2 for compensating for the synchronization offset. The configurations of the motion detection unit 19 and the delay detection unit 17 will be described later.
[0023] FIG. 3 is a functional block diagram showing an example of the schematic configuration of the delay detection unit 17. Referring to FIG. 3, the delay detection unit 17 includes an image size detection unit 20, a down-converter (resolution conversion unit) 21, a first image extraction unit 22, a second image extraction unit 23, a match detection unit 24, a frame rate detection unit 25, and a delay amount table 26.
[0024] The image size detection unit 20 detects the image size (vertical and horizontal resolution) of the MPEG image and the image size of the H.264 image from the image parameters PA1 and PA2, and gives the down-converter 21 a conversion ratio DR, which is the ratio of these image sizes. The down-converter 21 converts the resolution of the decoded image signal DS1 at the conversion ratio DR, that is, thins out the decoded image signal DS1, to generate a converted image signal DC1. As a result, the image size of the converted image signal DC1 matches that of the decoded image signal DS2. For example, the 1920 x 1080 high resolution of the MPEG image is converted to the 320 x 180 resolution of the H.264 image.
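The thinning operation of the down-converter 21 can be pictured with the following minimal sketch (an illustrative assumption, not the patented implementation); the function and variable names are hypothetical, and simple subsampling stands in for whatever decimation filter an actual receiver would use.

    import numpy as np

    def down_convert(frame_hd, sd_size):
        # Thin out a high-resolution luminance frame (e.g. 1920 x 1080) to the
        # low-resolution size (e.g. 320 x 180) by keeping every DR-th pixel,
        # DR being derived from the two image sizes.
        h_hd, w_hd = frame_hd.shape
        h_sd, w_sd = sd_size
        ratio_v, ratio_h = h_hd // h_sd, w_hd // w_sd   # conversion ratio DR
        return frame_hd[::ratio_v, ::ratio_h][:h_sd, :w_sd]

    # Example: an MPEG frame of 1920 x 1080 thinned to the 320 x 180 H.264 size.
    dc1 = down_convert(np.zeros((1080, 1920), dtype=np.uint8), (180, 320))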
[0025] In addition, the image size detection unit 20 acquires, from the image parameters PA1 and PA2, information AR indicating the aspect ratio of the MPEG image and the aspect ratio of the H.264 image, and supplies this information AR to the match detection unit 24.
[0026] Meanwhile, the frame rate detection unit 25 acquires the frame rate of the MPEG image (= fs1) and the frame rate of the H.264 image (= fs2) from the image parameters PA1 and PA2, and calculates the frame rate ratio FR, which is the ratio of the frame rate of the MPEG image to the frame rate of the H.264 image (= fs1/fs2). The data of this frame rate ratio FR is supplied to the first image extraction unit 22 and the match detection unit 24. For example, if the frame rate of the MPEG image is 60 fps (frames per second) or 30 fps and the frame rate of the H.264 image is 15 fps, the frame rate ratio FR is an integer of 4 or 2.
[0027] The first image extraction unit 22 extracts, from the series of image frames constituting the converted image signal DC1, a number of reference frames corresponding to the frame rate ratio FR, and supplies the one or more reference frames SD1 to the match detection unit 24. On the other hand, the second image extraction unit 23 uses the delay information TB supplied from the delay amount table 26 to extract a plurality of reference frames from the series of image frames constituting the decoded image signal DS2, and gives these reference frames SD2 to the match detection unit 24.
[0028] Incidentally, the digital broadcast for fixed receivers and the digital broadcast for mobile receivers that are simulcast on the same channel are not necessarily synchronized in time with each other, and the synchronization offset between the two broadcasts differs for each broadcasting-station system that transmits the digital broadcast and is specific to that system. The delay amount table 26 stores delay time data corresponding to the synchronization offset that occurs at each such broadcasting station. Reception channel information CH indicating the currently selected channel is supplied to the delay amount table 26 from a control circuit (not shown), and the delay amount table 26 supplies the delay time data TB corresponding to the reception channel information CH to the second image extraction unit 23.
[0029] The match detection unit 24 compares the reference frames SD1 and SD2 and searches for the combination of reference frames that minimizes the difference in the amount of information between the reference frames SD1 and SD2. The images represented by the two reference frames obtained as a result of this search are judged to be substantially identical. For example, if the number of reference frames SD1 is two and the number of reference frames SD2 is three, there are 2 x 3 (= 6) combinations, so the combination that minimizes the difference in the amount of information between the reference frames is searched for among these six combinations. The match detection unit 24 also calculates the delay amounts DV1 and DV2 based on the time difference between the reference frames obtained as a result of the search. The match detection unit 24 can determine, according to the detection result MD from the motion detection unit 19, whether there is motion in both the MPEG image and the H.264 image. When there is almost no motion in either the MPEG image or the H.264 image, the matching accuracy of the reference frames becomes low, so the match detection unit 24 has a function of suspending the search processing.  [0030] FIG. 4 is a functional block diagram showing another example of the schematic configuration of the delay detection unit 17. In the delay detection unit 17 shown in FIG. 3, the resolution of the MPEG image is converted to that of the H.264 image by the down-converter 21, whereas in the delay detection unit 17 shown in FIG. 4, the resolution of the H.264 image is converted to that of the MPEG image by the up-converter 27. For this reason, the first image extraction unit 22A extracts the reference frames SD1 from the series of image frames constituting the decoded image signal DS1, and the second image extraction unit 23B extracts the reference frames SD2 from the series of image frames constituting the converted image signal DC1 supplied from the up-converter 27. The match detection unit 24A executes the search processing using reference frames SD1 and SD2 having the high resolution of the MPEG image. The rest of the configuration is the same as that of the delay detection unit 17 shown in FIG. 3.
[0031] Next, the delay amount detection processing, performed mainly by the delay detection unit 17 (FIG. 3), will be described below with reference to the flowcharts of FIG. 5 and FIG. 6. The flowcharts of FIG. 5 and FIG. 6 are connected to each other via the connectors C1 and C2.
[0032] Referring to FIG. 5, in step S1, a control circuit (not shown) determines whether or not a simulcast is being received, and if a simulcast is not being received, the delay amount detection processing is terminated. For example, the control circuit (not shown) can decode the data transport stream separated by the demodulation circuit 12 to acquire EPG (electronic program guide) information and determine, based on this EPG information, whether or not a simulcast is being received. When a simulcast is being received, the image size detection unit 20 acquires the image size of the MPEG image and the image size of the H.264 image from the image parameters PA1 and PA2 and calculates the ratio of these image sizes (step S2). The image size detection unit 20 supplies the conversion ratio DR, which is the ratio of these image sizes, to the down-converter 21, and the down-converter 21 converts the resolution of the MPEG image (decoded image signal DS1) to the resolution of the H.264 image at a scaling factor corresponding to the conversion ratio DR to generate the converted image signal DC1 (step S3).
[0033] Further, the image size detection unit 20 acquires the aspect ratio of the MPEG image and the aspect ratio of the H.264 image from the image parameters PA1 and PA2 (step S4), and gives the aspect ratio data AR to the match detection unit 24. The match detection unit 24 determines whether or not these aspect ratios are the same (step S5), and if the aspect ratios are not the same, the delay amount detection processing is terminated.
[0034] When it is determined that the aspect ratio of the MPEG image and the aspect ratio of the H.264 image are the same (step S5), the frame rate detection unit 25 detects the frame rate of the MPEG image and the frame rate of the H.264 image from the image parameters PA1 and PA2 (step S6). The frame rate detection unit 25 further determines whether or not the frame rate of the H.264 image is a fixed rate (step S7). If the frame rate of the H.264 image is a variable rate, the delay amount detection processing ends.
[0035] When it is determined that the frame rate of the H.264 image is a fixed rate (step S7), the frame rate detection unit 25 calculates the ratio FR between the frame rate of the MPEG image and the frame rate of the H.264 image (step S8), and this frame rate ratio FR is supplied to the first image extraction unit 22 and the match detection unit 24. The match detection unit 24 determines whether or not the frame rate ratio FR is an integer ratio (step S9), and if the frame rate ratio FR is not an integer ratio, the delay amount detection processing is terminated.
[0036] In the subsequent step S10 (FIG. 6), the motion detection unit 19 (FIG. 2) detects the amount of motion of the MPEG image based on the motion amount information MV1 or the decoded image signal DS1 supplied from the first decoder 13. Next, the motion detection unit 19 determines whether or not the amount of motion of the MPEG image is outside a predetermined range, that is, whether or not the MPEG image has sufficient motion to detect the delay amount (step S11). When the amount of motion of the MPEG image is not outside the predetermined range, the motion detection unit 19 returns the processing to step S10. On the other hand, when the amount of motion of the MPEG image exceeds the predetermined range, the motion detection unit 19 further detects the amount of motion of the H.264 image based on the motion amount information MV2 or the decoded image signal DS2 supplied from the second decoder 15 (step S12). Next, the motion detection unit 19 determines whether or not the amount of motion of the H.264 image is outside the predetermined range, that is, whether or not the H.264 image has sufficient motion to detect the delay amount (step S13). The motion detection unit 19 returns the processing to step S10 when the amount of motion of the H.264 image is not outside the predetermined range, and moves the processing to the next step S14 when the amount of motion of the H.264 image exceeds the predetermined range. The detection result MD from the motion detection unit 19 is supplied to the match detection unit 24 (FIG. 3).
[0037] As described above, in order to detect an accurate delay amount, the motion detection unit 19 monitors the amounts of motion of both the MPEG image and the H.264 image and moves the processing to the next step S14 when both amounts of motion exceed a certain range; however, the processing is not limited to this case. It is also possible to monitor the amount of motion of only one of the MPEG image and the H.264 image and move the processing to the next step S14 when that amount of motion exceeds the predetermined range.
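As a rough illustration of the gating in steps S10 to S13, the check could be reduced to the comparison below; the motion-amount values would come from one of the calculation methods described later, and the threshold value is a placeholder.

    def enough_motion(motion_mpeg, motion_h264, threshold=1000.0):
        # True only when both the MPEG image and the H.264 image show motion
        # outside the predetermined range (steps S11 and S13); otherwise the
        # procedure keeps looping back to step S10.
        return motion_mpeg > threshold and motion_h264 > threshold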
[0038] In the subsequent step S14, the first image extraction unit 22 extracts M (M is a positive integer) reference frames SD1 from the MPEG image and gives these reference frames SD1 to the match detection unit 24. The number M of reference frames SD1 is determined based on the frame rate ratio FR between the MPEG image and the H.264 image. For example, when the frame rate ratio FR is "2", it is possible to extract a number of reference frames SD1 equal to twice a positive integer.
[0039] Next, in order to extract the reference frames SD2 from the H.264 image, the second image extraction unit 23 acquires, from the delay amount table 26, the delay information TB indicating the delay amount Δt specific to the selected broadcasting station (step S15). Subsequently, the second image extraction unit 23 sets a time window (sampling window) of length T centered on a time t1 obtained by delaying the current time t0 by the delay amount Δt (step S16). The time window will be described with reference to FIG. 7. FIG. 7 shows a series of image frames M0, M1, M2, ... constituting the MPEG image and a series of image frames H0, H1, H2, ... constituting the H.264 image. The MPEG image and the H.264 image are not synchronized with each other. Two reference frames M5 and M6 constituting the MPEG image are sampled in the vicinity of the current time t0. At this time, the time window of length T is set centered on the time t1 obtained by delaying the current time t0 by the delay amount Δt.
[0040] Next, the second image extraction unit 23 extracts the N (N is a positive integer) image frames included in the time window as reference frames, and supplies these reference frames to the match detection unit 24 (step S17). In the example of FIG. 7, the three reference frames H1, H2, and H3 included in the time window of length T are extracted.
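Steps S15 to S17 might be sketched as follows, assuming each decoded H.264 frame carries a presentation time in seconds; the delay Δt comes from the delay amount table 26 and T is the window length. All names are illustrative.

    def extract_window_frames(h264_frames, t0, delta_t, window_len):
        # Return the N frames whose presentation time falls inside a window of
        # length window_len centred on t1 = t0 + delta_t (steps S15-S17).
        # h264_frames is a list of (presentation_time, frame) pairs.
        t1 = t0 + delta_t
        lo, hi = t1 - window_len / 2.0, t1 + window_len / 2.0
        return [(t, f) for (t, f) in h264_frames if lo <= t <= hi]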
[0041] Next, the match detection unit 24 searches for the combination that minimizes the difference between a reference frame SD1 from the first image extraction unit 22 and a reference frame SD2 from the second image extraction unit 23 (step S18). In the example of FIG. 7, there are six combinations (Mx, Hx) of a reference frame Mx of the MPEG image and a reference frame Hx of the H.264 image: (M5, H1), (M5, H2), (M5, H3), (M6, H1), (M6, H2), and (M6, H3). Among these combinations, the combination that minimizes the difference in the amount of information between the reference frames is searched for. Specifically, for example, the luminance difference between the reference frames may be calculated for each pixel for each combination, the sum of these luminance differences may be calculated, and the combination that minimizes this sum may be selected. Alternatively, it is also possible to search for the combination that maximizes the correlation between the reference frames.
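One possible reading of step S18, using the per-pixel luminance-difference criterion mentioned above, is sketched below. It assumes the reference frames have already been brought to a common size and are given as 8-bit luminance arrays; the correlation-based variant would simply swap the metric.

    import numpy as np

    def find_best_pair(ref_frames_sd1, ref_frames_sd2):
        # Examine all M x N combinations of reference frames and keep the pair
        # whose summed absolute luminance difference is smallest (step S18).
        # Each list holds (presentation_time, luma_array) pairs of equal size.
        best = None
        for t1, f1 in ref_frames_sd1:
            for t2, f2 in ref_frames_sd2:
                diff = int(np.abs(f1.astype(np.int32) - f2.astype(np.int32)).sum())
                if best is None or diff < best[0]:
                    best = (diff, t1, t2)
        return best   # (minimum difference, time of SD1 frame, time of SD2 frame)

The time difference between the two frames of the returned pair is what step S20 would turn into the delay amounts DV1 and DV2.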
[0042] Further, the match detection unit 24 determines whether or not the search has succeeded (step S19). For example, if the sum of the luminance differences exceeds a predetermined threshold, or if the correlation is less than a predetermined threshold, it can be determined that the degree of match between the reference frames is low and the search has failed. When the search does not succeed, the match detection unit 24 enlarges the length T of the time window in order to increase the number N of reference frames (step S21), and then returns the processing to step S17. On the other hand, when the search succeeds, the match detection unit 24 determines that the two reference frames of the combination obtained as a result of the search match, and calculates the delay amounts DV1 and DV2 based on the time difference between these reference frames (step S20). In the example of FIG. 7, when the combination of the two reference frames M5 and H2 is found, the time difference between the reference frames M5 and H2 is obtained. The delay amounts DV1 and DV2 may be set as appropriate so as to compensate for the time difference between the reference frames. This completes the delay calculation processing.
[0043] Note that although the match detection unit 24 obtains the difference between the reference frames by calculating the luminance difference or correlation for all pixels of the reference frames, in order to improve the matching accuracy, the luminance difference or correlation may instead be calculated only for the pixels in a central region that excludes the edge image regions at the top and bottom or the left and right of the reference frame image.
[0044] The delay amounts DV1 and DV2 calculated as described above are supplied to the first delay unit 14 and the second delay unit 16, respectively. The first delay unit 14 adds the first delay amount DV1 to the time stamp value of the MPEG image and supplies the delayed image signal LS1 to the signal output unit 18 at the timing of the resulting value. Similarly, the second delay unit 16 adds the second delay amount DV2 to the time stamp value of the H.264 image and supplies the delayed image signal LS2 to the signal output unit 18 at the timing of the resulting value. As the time stamp, the PTS (Presentation Time Stamp) defined in each of the MPEG2 standard and the H.264 standard can be used.  [0045] As shown in FIG. 2, the delay units 14 and 16 delay the decoded image signals DS1 and DS2, respectively, according to the delay amounts DV1 and DV2, and the delay units 14 and 16 are arranged downstream of the decoders 13 and 15; however, the arrangement need not be limited to this. For example, the delay unit 14 may be arranged upstream of the first decoder 13 to adjust the time at which the elementary stream ES1 is decoded, or the delay unit 16 may be arranged upstream of the second decoder 15 to adjust the time at which the elementary stream ES2 is decoded. Alternatively, a configuration may be adopted in which the delay units 14 and 16 are removed from the signal processing circuit 3 and the delay detection unit 17 causes the decoders 13 and 15 to adjust their respective decoding times.
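The time-stamp-based timing adjustment described in paragraph [0044] could be pictured roughly as below; the sketch assumes PTS values expressed in 90 kHz ticks, as is usual for MPEG-2 systems streams, and the field names are hypothetical.

    def apply_delay(frames, delay_ticks):
        # Shift the presentation time stamp of every frame by the detected
        # delay amount so that the two streams are presented in step.
        # frames is a list of dicts carrying a 'pts' field in 90 kHz ticks.
        for frame in frames:
            frame['pts'] += delay_ticks
        return frames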
[0046] Note that once the delay amounts DV1 and DV2 have been calculated when simultaneous reception of the simulcast digital broadcast for fixed receivers and digital broadcast for mobile receivers is started, the delay detection unit 17 does not, in principle, need to calculate the delay amounts DV1 and DV2 again thereafter. However, in a reception environment in which the synchronization offset between the decoded image signals DS1 and DS2 may fluctuate, the delay amounts DV1 and DV2 may be calculated periodically.
[0047] FIG. 8 is a functional block diagram showing a schematic configuration of the signal output unit 18. The signal output unit 18 includes an up-converter 30, a frame interpolation unit 31, an image division unit 32, and a pixel block selection unit 33. The up-converter (resolution conversion unit) 30, the frame interpolation unit 31, and the image division unit 32 are each supplied with the image parameters PA1 and PA2 from the first decoder 13 and the second decoder 15.
[0048] The up-converter 30 is a pixel interpolation block that converts the resolution of the delayed image signal LS2 to the resolution of the delayed image signal LS1 so that the image size of the delayed image signal LS2 matches the image size of the delayed image signal LS1. The frame interpolation unit 31 is a frame rate conversion block that frame-interpolates the output of the up-converter 30 and converts the frame rate of the delayed image signal LS2 to the frame rate of the delayed image signal LS1. The image division unit 32 then divides the output of the frame interpolation unit 31 into macroblocks and supplies them to the pixel block selection unit 33.
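The chain formed by the up-converter 30 and the frame interpolation unit 31 can be thought of along the lines of the sketch below; the description does not fix the interpolation method, so pixel repetition and frame repetition are used here purely as assumptions.

    import numpy as np

    def up_convert(frame_sd, hd_size):
        # Pixel-repeat a low-resolution frame up to the high-resolution size.
        h_sd, w_sd = frame_sd.shape
        h_hd, w_hd = hd_size
        return np.repeat(np.repeat(frame_sd, h_hd // h_sd, axis=0),
                         w_hd // w_sd, axis=1)[:h_hd, :w_hd]

    def repeat_frames(frames, rate_ratio):
        # Repeat each frame rate_ratio times so that the low frame rate matches
        # the high one (e.g. 15 fps -> 30 fps when rate_ratio is 2).
        return [f for f in frames for _ in range(rate_ratio)]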
[0049] The pixel block selection unit 33 selects, in macroblock units, either the output signal of the image division unit 32 or the delayed image signal LS1 according to the macroblock error information MBe, and supplies the selected signal as the output image signal CS. Specifically, when no error has occurred in a macroblock of the delayed image signal LS1, the pixel block selection unit 33 selects and outputs the macroblock of the delayed image signal LS1; on the other hand, when an error has occurred in a macroblock of the delayed image signal LS1, it selects and outputs the corresponding macroblock in the output signal of the image division unit 32 in place of the macroblock in which the error has occurred.
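Paragraph [0049] amounts to a per-macroblock multiplexer. A minimal sketch for 16 x 16 macroblocks is given below, assuming the two inputs have already been aligned in size, frame rate, and time, and that the error map holds one boolean per macroblock; the names are hypothetical.

    import numpy as np

    MB = 16  # macroblock size in pixels

    def select_blocks(frame_hq, frame_lq, mb_error):
        # Build the output picture CS: keep each macroblock of the high-quality
        # frame and substitute the co-located macroblock of the low-quality
        # frame wherever mb_error flags an uncorrectable error.
        out = frame_hq.copy()
        rows, cols = mb_error.shape
        for r in range(rows):
            for c in range(cols):
                if mb_error[r, c]:
                    out[r*MB:(r+1)*MB, c*MB:(c+1)*MB] = \
                        frame_lq[r*MB:(r+1)*MB, c*MB:(c+1)*MB]
        return out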
[0050] In this way, only the macroblocks in which an error has occurred are replaced with the corresponding macroblocks in the output signal of the image division unit 32, so that an output image signal CS of the highest possible quality and the most natural appearance can be obtained. When an error occurs, as shown in FIG. 9, the picture 40 represented by the output image signal CS is composed of high-quality first macroblock groups 41, 41, 41 from the broadcast for fixed receivers and low-quality macroblock groups 42a, 42b from the broadcast for mobile receivers; when no error occurs at all, the picture is composed only of high-quality macroblocks from the broadcast for fixed receivers. A displayed image that is as high in quality as possible, natural, and visually good can therefore be supplied.
[0051] In addition, the delay detection unit 17 detects the synchronization offset between the MPEG image and the H.264 image and supplies the delay amounts DV1 and DV2 for compensating for this synchronization offset to the delay units 14 and 16, so the delay units 14 and 16 can supply the MPEG image and the H.264 image, synchronized with each other, to the signal output unit 18.
[0052] Furthermore, the motion detection unit 19 (FIG. 2) detects whether or not the MPEG image and the H.264 image have sufficient motion to allow the delay amounts to be calculated, and supplies the detection result MD to the match detection unit 24. Since the match detection unit 24 generates the delay amounts DV1 and DV2 only when the detection result MD is affirmative, the generation of erroneous delay amounts DV1 and DV2 can be reliably avoided.
[0053] Next, some specific examples of methods for calculating the amount of motion detected by the motion detection unit 19 (FIG. 2) will be described below. The first calculation method calculates, as the amount of motion, the sum of the luminance differences between a plurality of temporally consecutive image frames. Specifically, when three temporally consecutive image frames P0, P1, and P2 are being monitored, the motion detection unit 19 calculates, as amounts of motion, the sum DB1 of the luminance differences between the image frames P0 and P1 and the sum DB2 of the luminance differences between the image frames P1 and P2. The motion detection unit 19 can determine that the amounts of motion DB1 and DB2 are outside the predetermined range and that there is motion in the image (steps S11 and S13; FIG. 6) only when the amount of motion DB1 is equal to or greater than a predetermined threshold TH1 and the amount of motion DB2 is equal to or greater than a predetermined threshold TH2.
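A hedged sketch of this first method: the two frame-to-frame luminance-difference sums are compared against the thresholds TH1 and TH2 (the numeric values below are placeholders).

    import numpy as np

    def motion_by_luma_diff(p0, p1, p2, th1=50000, th2=50000):
        # Sum the per-pixel luminance differences between consecutive frames
        # and require both sums to reach their thresholds before declaring
        # that the image has enough motion.
        db1 = int(np.abs(p1.astype(np.int32) - p0.astype(np.int32)).sum())
        db2 = int(np.abs(p2.astype(np.int32) - p1.astype(np.int32)).sum())
        return db1 >= th1 and db2 >= th2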
[0054] The second method for calculating the amount of motion uses the number of skipped macroblocks. According to the MPEG2 standard and the H.264 standard, for example in the course of encoding a P picture, when a macroblock of the current image to be encoded is identical to the macroblock of the image temporally preceding the current image and the motion vector is zero, that macroblock is not encoded but skipped. Information on such skipped macroblocks is acquired by the first decoder 13 and the second decoder 15. The motion detection unit 19 can therefore acquire the skipped-macroblock information from the image parameters PA1 and PA2 supplied from the first decoder 13 and the second decoder 15 and calculate the number of skipped macroblocks as the amount of motion. For example, when the number of skipped macroblocks in one image frame is smaller than a predetermined threshold, the motion detection unit 19 can determine that the amount of motion is outside the predetermined range and that there is motion in the image (steps S11 and S13; FIG. 6).
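The second method reduces to a simple count, sketched below under the assumption that the decoder exposes one skipped/not-skipped flag per macroblock of a frame.

    def motion_by_skipped_mbs(skipped_flags, threshold):
        # Few skipped macroblocks in a frame means most blocks changed,
        # i.e. the image is judged to be moving.
        return sum(1 for skipped in skipped_flags if skipped) < threshold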
[0055] The third method for calculating the amount of motion uses motion vectors. This third calculation method will be described below with reference to the flowchart of FIG. 10. The motion detection unit 19 can acquire motion vector information from the image parameters PA1 and PA2. Referring to FIG. 10, in step S30, the motion detection unit 19 searches within one image frame for a macroblock whose motion-vector norm exceeds a threshold VTH. If no macroblock whose norm exceeds the threshold can be found, the motion detection unit 19 determines that the search has failed (step S31), further determines that the amount of motion of the image is within the predetermined range (step S37), and ends the motion detection processing. On the other hand, when the motion detection unit 19 finds a macroblock whose norm exceeds the threshold, it determines that the search has succeeded (step S31) and sets a local region centered on the found macroblock (step S32). This local region may be set to a region containing several to several hundred macroblocks. Furthermore, the motion detection unit 19 counts the number of macroblocks in the local region whose motion-vector norm exceeds the threshold VTH (step S33). The motion detection unit 19 then determines whether or not the count value is equal to or greater than a set value (step S34).
[0056] When it is determined that the count value is equal to or greater than the set value (step S34), the motion detection unit 19 judges that a moving object (for example, the image of an object moving against an unchanging background) exists in the local region, and moves the processing to the next step S35. On the other hand, when it is determined that the count value is less than the set value (step S34), the motion detection unit 19 judges that no moving object exists in the local region, further determines that the amount of motion of the image is within the predetermined range (step S37), and ends the motion detection processing.
[0057] In step S35, the motion detection unit 19 determines whether or not the angles of the motion vectors of all the macroblocks in the local region fall within a predetermined angle range (for example, within a range of approximately ±15 degrees). If the angles of the motion vectors of all the macroblocks are not within the predetermined angle range, the motion detection unit 19 determines that the amount of motion of the image is within the predetermined range (step S37) and ends the motion detection processing. On the other hand, if the angles of the motion vectors of all the macroblocks are within the predetermined angle range, the motion detection unit 19 judges that the moving object is moving in substantially one direction, further determines that the amount of motion of the image exceeds the predetermined range (step S36), and ends the motion detection processing.
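Steps S30 to S36 of this third method could be put together roughly as follows. The sketch assumes a two-dimensional array of per-macroblock motion vectors and uses placeholder values for the norm threshold VTH, the local-region size, the count threshold, and the ±15 degree angle range.

    import numpy as np

    def motion_by_vectors(mv, v_th=8.0, region=5, count_th=10, angle_th=15.0):
        # mv has shape (rows, cols, 2), one (dx, dy) vector per macroblock.
        norms = np.linalg.norm(mv, axis=2)
        seeds = np.argwhere(norms > v_th)                  # step S30
        if seeds.size == 0:                                # step S31: search failed
            return False
        r0, c0 = seeds[0]                                  # step S32: local region
        sl = (slice(max(r0 - region, 0), r0 + region + 1),
              slice(max(c0 - region, 0), c0 + region + 1))
        big = norms[sl] > v_th
        if big.sum() < count_th:                           # steps S33/S34
            return False
        # Step S35: all large vectors roughly co-directional (wrap-around at
        # +/-180 degrees is ignored for simplicity).
        angles = np.degrees(np.arctan2(mv[sl][..., 1], mv[sl][..., 0]))[big]
        return float(angles.max() - angles.min()) <= 2 * angle_th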
[0058] Note that in the search processing of step S18 (FIG. 6), the match detection unit 24 may search for the combination that minimizes the difference in the amount of information between the reference frames only within the local region set in step S32. This makes it unnecessary to calculate the difference between the reference frames for regions other than the local region, so the processing speed can be improved.
[0059] Various embodiments according to the present invention have been described above. The configuration of the signal processing circuit 3 may be realized by hardware, or may be realized by a program or program code recorded on a recording medium such as an optical disc. Such a program or program code causes a processor such as a CPU to execute all or part of the processing of the functions of the signal processing circuit 3.

Claims

[1] A signal processing device for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast, comprising:
a first decoder that decodes the received signal of the first digital broadcast in accordance with a first coding standard to generate a first decoded image signal;
a second decoder that decodes the received signal of the second digital broadcast in accordance with a second coding standard different from the first coding standard to generate a second decoded image signal; and
a signal output unit that generates an output image signal based on the first decoded image signal and the second decoded image signal and outputs the output image signal,
wherein the first decoder detects an error in units of pixel blocks each consisting of a predetermined number of pixels in the process of decoding the received signal, and
the signal output unit generates the output image signal by outputting a pixel block of the first decoded image signal when the error is not detected, and by outputting, when the error is detected, the corresponding pixel block in the second decoded image signal in place of the pixel block in which the error is detected.
[2] The signal processing device according to claim 1, wherein the first digital broadcast is a digital broadcast of higher quality than the second digital broadcast.
[3] The signal processing device according to claim 1 or 2, wherein the signal output unit includes a resolution conversion unit that converts the resolution of the second decoded image signal to the resolution of the first decoded image signal.
[4] The signal processing device according to any one of claims 1 to 3, wherein the signal output unit includes a frame interpolation unit that interpolates image frames of the second decoded image signal to convert the frame rate of the second decoded image signal to the frame rate of the first decoded image signal.
[5] The signal processing device according to any one of claims 1 to 4, further comprising:
a delay detection unit that detects a delay amount corresponding to a synchronization offset between the first decoded image signal and the second decoded image signal; and
a delay unit that delays at least one of the first and second decoded image signals according to the delay amount to synchronize the first and second decoded image signals with each other.
[6] The signal processing device according to claim 5, further comprising a motion detection unit that detects whether or not an amount of motion of an image represented by at least one of the first and second decoded image signals is outside a predetermined range,
wherein the delay detection unit detects the delay amount only when the motion detection unit detects that the amount of motion is outside the predetermined range.
[7] The signal processing device according to claim 5 or 6, wherein the delay detection unit includes:
a resolution conversion unit that converts the resolution of the first decoded image signal to the resolution of the second decoded image signal;
an image extraction unit that extracts a first reference frame from a series of image frames constituting the first decoded image signal whose resolution has been converted by the resolution conversion unit, and extracts a second reference frame from a series of image frames constituting the second decoded image signal; and
a match detection unit that searches for a combination of two reference frames that minimizes the difference between the first and second reference frames, and calculates the delay amount based on the time difference between the reference frames found by the search.
[8] The signal processing device according to claim 5 or 6, wherein the delay detection unit includes:
a resolution conversion unit that converts the resolution of the second decoded image signal to the resolution of the first decoded image signal;
an image extraction unit that extracts a first reference frame from a series of image frames constituting the first decoded image signal, and extracts a second reference frame from a series of image frames constituting the second decoded image signal whose resolution has been converted by the resolution conversion unit; and
a match detection unit that searches for a combination of two reference frames that minimizes the difference between the first and second reference frames, and calculates the delay amount based on the time difference between the reference frames found by the search.
[9] The signal processing device according to claim 7 or 8, wherein the image extraction unit extracts image frames within a preset time window as the second reference frames, and when the search for a combination of reference frames fails, the image extraction unit enlarges the time window and extracts the second reference frames.
[10] The signal processing device according to any one of claims 1 to 9, wherein the first digital broadcast is a digital broadcast for fixed receivers and the second digital broadcast is a digital broadcast for mobile receivers.
[11] A digital broadcast receiving device that receives a first digital broadcast and a second digital broadcast transmitted by simulcast, comprising:
a receiving circuit that supplies received signals of the first and second digital broadcasts; and
the signal processing device according to any one of claims 1 to 10.
[12] A signal processing method for digital broadcast reception that processes received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast, comprising:
(a) a step of decoding the received signal of the first digital broadcast in accordance with a first coding standard to generate a first decoded image signal;
(b) a step of decoding the received signal of the second digital broadcast in accordance with a second coding standard different from the first coding standard to generate a second decoded image signal;
(c) a step of detecting an error in units of pixel blocks each consisting of a predetermined number of pixels in the decoding process of step (a); and
(d) a step of generating an output image signal by outputting a pixel block of the first decoded image signal when the error is not detected, and by outputting, when the error is detected, the corresponding pixel block in the second decoded image signal in place of the pixel block in which the error is detected.
[13] A signal processing program for digital broadcast reception that causes a processor to execute processing of received signals of both a first digital broadcast and a second digital broadcast transmitted by simulcast, the processing comprising:
(a) a first decoding process of decoding the received signal of the first digital broadcast in accordance with a first coding standard to generate a first decoded image signal;
(b) a second decoding process of decoding the received signal of the second digital broadcast in accordance with a second coding standard different from the first coding standard to generate a second decoded image signal;
(c) an error detection process of detecting an error in units of pixel blocks each consisting of a predetermined number of pixels in the course of the first decoding process; and
(d) a signal output process of generating an output image signal by outputting a pixel block of the first decoded image signal when the error is not detected, and by outputting, when the error is detected, the corresponding pixel block in the second decoded image signal in place of the pixel block in which the error is detected.
PCT/JP2006/322366 2005-11-21 2006-11-09 Signal processing device for receiving digital broadcast, signal processing method and signal processing program, and digital broadcast receiver WO2007058113A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007545214A JP4740955B2 (en) 2005-11-21 2006-11-09 Digital broadcast receiving signal processing apparatus, signal processing method, signal processing program, and digital broadcast receiving apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005335651 2005-11-21
JP2005-335651 2005-11-21

Publications (1)

Publication Number Publication Date
WO2007058113A1 true WO2007058113A1 (en) 2007-05-24

Family

ID=38048500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/322366 WO2007058113A1 (en) 2005-11-21 2006-11-09 Signal processing device for receiving digital broadcast, signal processing method and signal processing program, and digital broadcast receiver

Country Status (2)

Country Link
JP (1) JP4740955B2 (en)
WO (1) WO2007058113A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009100424A (en) * 2007-10-19 2009-05-07 Fujitsu Ltd Receiving device and reception method
JP2012156795A (en) * 2011-01-26 2012-08-16 Fujitsu Ltd Image processing device and image processing method
US20130287118A1 (en) * 2012-04-27 2013-10-31 Fujitsu Limited Video image encoding device, video image encoding method, video image decoding device, and video image decoding method
JP2016517682A (en) * 2013-03-18 2016-06-16 クゥアルコム・インコーポレイテッドQualcomm Incorporated Disparity vector derivation and motion vector prediction simplification in 3D video coding


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003134064A (en) * 2001-10-26 2003-05-09 Hitachi Ltd Digital broadcast complementing method and digital broadcast reception system
JP2003274303A (en) * 2002-03-18 2003-09-26 Sony Corp Digital broadcast receiving device, onboard device, guiding method for digital broadcast reception, program for the guiding method for digital broadcast reception, and recording medium where the program for the guiding method for digital broadcast reception is recorded
JP2005260606A (en) * 2004-03-11 2005-09-22 Fujitsu Ten Ltd Digital broadcast receiver
JP2005311435A (en) * 2004-04-16 2005-11-04 Denso Corp Broadcast receiver for moving object and program


Also Published As

Publication number Publication date
JPWO2007058113A1 (en) 2009-04-30
JP4740955B2 (en) 2011-08-03

Similar Documents

Publication Publication Date Title
JP4615958B2 (en) Digital broadcast sending device, receiving device, and digital broadcasting system
JP6559298B2 (en) Data processing method and video transmission method
KR100707641B1 (en) Decoder apparatus
US20070126936A1 (en) Digital broadcasting receiving apparatus and receiving method
JP2007306363A (en) Digital broadcast receiver
JPH0879641A (en) Television receiver
US20130128956A1 (en) Apparatus and method for receiving signals
JP4616121B2 (en) Digital broadcast receiver
JP2009100424A (en) Receiving device and reception method
JP4740955B2 (en) Digital broadcast receiving signal processing apparatus, signal processing method, signal processing program, and digital broadcast receiving apparatus
JPH0898105A (en) Television receiver
KR100574703B1 (en) Apparatus for providing a video lip sync delay and method therefore
WO2007037424A1 (en) Receiver apparatus
JP4505020B2 (en) Interpolator
WO2009122668A1 (en) Digital broadcast transmission/reception device
JP2007074228A (en) Video program receiver for performing low-delay digital encoded video switching, and transmitting/receiving system
JP2010258732A (en) Video processing device, video processing method and video processing program
Mochida et al. An MMT module for 4K/120fps temporally scalable video
KR20030070411A (en) Digital Broadcast Receiver and method for compensating error of color appearance of the same
JP2008244781A (en) Ip retransmission system of terrestrial digital broadcast, and seamless switchover control method therefor in mottled composition
KR102228599B1 (en) Transmitter and receiver for providing seamless switching of isdb transport stream
JP5857840B2 (en) Encoder and control method
JP2008154062A (en) Broadcast receiver
JP2004072153A (en) Decoder, decoding method and digital broadcast receiver
JP4970059B2 (en) Digital broadcast receiver

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase

Ref document number: 2007545214

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06823253

Country of ref document: EP

Kind code of ref document: A1