JP5726724B2 - Image transmitting apparatus and image receiving apparatus - Google Patents


Info

Publication number
JP5726724B2
JP5726724B2
Authority
JP
Japan
Prior art keywords
image
unit
data
transmission
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
JP2011503294A
Other languages
Japanese (ja)
Other versions
JPWO2011027479A1 (en)
Inventor
古藤 晋一郎
伊達 直人
Original Assignee
株式会社東芝 (Toshiba Corporation)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to PCT/JP2009/065356 (WO2011027440A1)
Application filed by 株式会社東芝
Priority to JP2011503294A (JP5726724B2)
Priority to PCT/JP2009/066832 (WO2011027479A1)
Publication of JPWO2011027479A1
Application granted
Publication of JP5726724B2
Legal status: Active
Anticipated expiration

Description

  The present invention relates to image transmission and reception.

  Transmitting high-definition video data in real time at high quality over a limited transmission band, such as a wireless link, requires low-delay image compression/decompression technology that can withstand fluctuations in transmission speed and is resistant to transmission errors.

  The video data processing apparatus described in Patent Document 1 subsamples an image according to pixel phase, separating it into a plurality of subsample images, and compresses each subsample image. With this apparatus, even if some of the subsample images are lost to an error, error concealment can be performed from the remaining intact subsample images.

  In the image communication method described in Patent Document 2, an image is variable-length compressed in units of blocks, and the size of each block's compressed data is appended to that data before transmission. Because the receiving side thus knows the position of each block's compressed data, random access is possible; that is, the propagation range of a transmission error can be confined to the block level.

Patent Document 1: JP-A-6-22288
Patent Document 2: Japanese Patent Laid-Open No. 11-18086

  The video data processing apparatus described in Patent Document 1 does not assume wireless transmission of video data. When video data is transmitted wirelessly, the transmission environment, such as the transmission error rate, changes from moment to moment. In general, subsampling reduces the spatial correlation of an image and therefore lowers the efficiency of image compression that uses spatial prediction. Subsampling is thus suited to a transmission environment with a high transmission error rate, but not necessarily to one with a low error rate. Because this apparatus performs subsampling even when the transmission error rate is low, it cannot compress images with high efficiency. Moreover, error concealment based on subsampling breaks down in a transmission environment with an extremely high error rate, in which most of the subsample images are lost.

  The image communication method described in Patent Document 2 stores the compressed data size of every block in the transmission data, so a large amount of information is needed to express these sizes. In addition, if an error occurs in a field carrying a compressed data size, random access to that block's compressed data may fail, and the error may propagate beyond the block level.

  SUMMARY OF THE INVENTION Accordingly, an object of the present invention is to provide image transmission and image reception techniques for transmitting high-quality video stably against changes in the transmission environment.

  An image transmitting apparatus according to one aspect of the present invention includes: a dividing unit that divides an input image into images of a predetermined unit; a video processing unit that generates video data by applying to the predetermined unit image any one of (a) first video processing that outputs the predetermined unit image as video data as-is, (b) second video processing that generates video data by compressing the predetermined unit image, (c) third video processing that performs subsampling to separate the predetermined unit image into a plurality of subsample images according to pixel phase and outputs each subsample image as video data, and (d) fourth video processing that performs subsampling to separate the predetermined unit image into a plurality of subsample images according to pixel phase and then compresses each subsample image to generate video data; and an output unit that outputs transmission data including the frame number of the predetermined unit image, coordinate information of the predetermined unit image, information identifying whether the video data is subsampled, pixel phase information of the video data when subsampling has been applied, information identifying whether the video data is compressed, size information of the video data, and the video data itself.

  An image receiving apparatus according to another aspect of the present invention includes: an input unit that inputs transmission data including the frame number of a predetermined unit image, coordinate information of the predetermined unit image, information identifying whether the video data is subsampled, pixel phase information of the video data when subsampling has been applied, information identifying whether the video data is compressed, size information of the video data, and the video data itself; an extraction unit that extracts the video data from the transmission data according to the size information of the video data; a decompression unit that decompresses the video data when the compression-identifying information indicates that the video data is compressed; an inverse subsampling unit that, when the subsample-identifying information indicates that the video data is subsampled, performs inverse subsampling on the video data according to the pixel phase information to reconstruct the predetermined unit image; a display area determining unit that determines the display area of the predetermined unit image based on its coordinate information; and a display order determining unit that determines the display frame of the predetermined unit image based on its frame number.

  An image transmitting apparatus according to another aspect of the present invention includes: an image input unit that inputs a predetermined unit image obtained by dividing an input image; a format conversion unit that performs horizontal reduction conversion on each signal component of the predetermined unit image; a pixel separation unit that performs horizontal pixel separation on the reduction-converted predetermined unit image; a compression unit that compresses the pixel-separated predetermined unit image independently for each separated pixel phase to obtain compressed image data; a packetization unit that packetizes the compressed image data; and a transmission unit that transmits the packetized compressed image data. The packets carry (1) the compressed image data, (2) information on the color space of the input image, (3) information on the horizontal reduction ratio of the compressed image data, (4) information on the pixel separation type, and (5) information on the pixel separation phase.

  An image receiving apparatus according to another aspect of the present invention includes: a receiving unit that receives a plurality of packets including (1) compressed image data of a predetermined unit image obtained by dividing an input image, or of an image obtained by horizontally pixel-separating the predetermined unit image, (2) color space information of the input image, (3) horizontal reduction ratio information, (4) pixel separation type information, and (5) pixel separation phase information relating to the compressed image data; a depacketizing unit that extracts the information (1) to (5) from the packets; a decompression unit that decompresses the compressed image data; a pixel combination unit that performs horizontal pixel combination of the decompressed images according to the pixel separation type information and the pixel separation phase information; a format conversion unit that horizontally expands the pixel-combined image for each signal component according to the horizontal reduction ratio information; and an image output unit that outputs the format-converted image.

  An image transmitting apparatus according to another aspect of the present invention includes: a dividing unit that divides an input image frame into slices; a compression unit that performs compression closed within each slice to generate compressed image data; a packetizing unit that, to generate a packetized data signal, packetizes (1) the compressed image data, (2) color space information indicating whether the color space of the input image frame is RGB, YCbCr 422, or YCbCr 444, (3) frame size information of the input image frame, (4) a slice identifier indicating whether the slice is a predetermined unit image of the input image frame or one of the partitions obtained by separating the predetermined unit image into two, (5) frame number information indicating the frame to which the slice belongs, (6) display position information indicating the position at which the slice is to be displayed in the frame, (7) pixel separation type information indicating on which pixel separation mode the partition is based: (A) a mode that separates the predetermined unit image into even-column pixels and odd-column pixels, (B) a mode that separates the predetermined unit image into first-phase and second-phase pixels of a checkerboard pattern, or (C) a mode that separates the predetermined unit image into a left-half image and a right-half image, (8) partition number information indicating which of the two partitions the slice is, and (9) horizontal subsampling rate information indicating the horizontal subsampling rate of each signal component of the predetermined unit image; and a transmission unit that OFDM-modulates the packetized data signal and wirelessly transmits it over a transmission channel in the frequency range from 57 GHz to 66 GHz.

  An image receiving apparatus according to another aspect of the present invention includes: a receiving unit that receives a plurality of packet data by OFDM-demodulating a signal received over a wireless transmission channel in the frequency range from 57 GHz to 66 GHz; a depacketizing unit that extracts from the packet data (1) image data compressed in slice units, (2) color space information indicating whether the color space of the frame of the image data is RGB, YCbCr 422, or YCbCr 444, (3) frame size information of the frame, (4) a slice identifier indicating whether the slice is a predetermined unit image of the frame or one of the partitions obtained by separating the predetermined unit image into two, (5) frame number information indicating the number of the frame to which the slice belongs, (6) display position information indicating the position at which the slice is to be displayed in the frame, (7) pixel separation type information indicating on which pixel separation mode the partition is based: (A) a mode that separates the predetermined unit image into even-column pixels and odd-column pixels, (B) a mode that separates the predetermined unit image into first-phase and second-phase pixels of a checkerboard pattern, or (C) a mode that separates the predetermined unit image into a left-half image and a right-half image, (8) partition number information indicating which of the two partitions the slice is, and (9) horizontal subsampling rate information indicating the horizontal subsampling rate of each signal component of the predetermined unit image; and a reconstruction unit that decompresses the image data and reconstructs the predetermined unit image based on the information (1) to (9).

  ADVANTAGE OF THE INVENTION According to the present invention, image transmission and image reception techniques can be provided for transmitting high-quality video stably against fluctuations in the transmission environment.

A block diagram illustrating the image transmitting apparatus according to the first embodiment.
A block diagram showing details of the video processing unit 102 in FIG. 1.
A block diagram showing the image receiving apparatus according to the first embodiment.
A block diagram showing the image transmitting apparatus according to the second embodiment.
A block diagram showing the image receiving apparatus according to the second embodiment.
A block diagram showing the image transmitting apparatus according to the third embodiment.
A block diagram showing the image receiving apparatus according to the third embodiment.
An explanatory drawing of the predetermined unit image in the first embodiment.
An explanatory drawing of the subsample processing performed by the third video processing unit 203 and the fourth video processing unit 204.
An explanatory drawing of the subsample processing performed by the third video processing unit 203 and the fourth video processing unit 204.
A diagram showing an example of the transmission data in the first embodiment.
An explanatory drawing of the inverse subsample processing performed by the inverse subsample unit 304.
An explanatory drawing of the inverse subsample processing performed by the inverse subsample unit 304.
A diagram showing an example of the transmission data in the second embodiment.
A diagram showing an example of the transmission data in the third embodiment.
A diagram showing an example of the transmission data in the fourth embodiment.
A block diagram of the image transmitting apparatus according to the fifth embodiment.
A block diagram of the image receiving apparatus according to the fifth embodiment.
A diagram showing an example of the transmission data structure according to the fifth embodiment.
A diagram showing an example of the slice header according to the fifth embodiment.
A diagram showing an example of the pixel separation type information according to the fifth embodiment.
A diagram showing an example of the reduction ratio information according to the fifth embodiment.
A diagram showing an example of the pixel separation according to the fifth embodiment.
A diagram showing an example of the pixel separation according to the fifth embodiment.
A diagram showing an example of the pixel separation according to the fifth embodiment.
A diagram showing an example of the slice identifier according to the fifth embodiment.
A diagram showing an example of the transmission image format information according to the fifth embodiment.
A diagram illustrating an example of the color space information according to the fifth embodiment.
A diagram illustrating an example of the information indicating validity/invalidity of pixel separation according to the fifth embodiment.
A diagram illustrating an example of the color space of the compressed data according to the fifth embodiment.
A diagram showing an example of the block number coefficient of the compressed data according to the fifth embodiment.
A block diagram illustrating an example of the pixel combination unit 25.
A diagram explaining an effect of the pixel separation according to the fifth embodiment.
A diagram explaining an effect of the pixel separation according to the fifth embodiment.
A diagram explaining an effect of the pixel separation according to the fifth embodiment.
A diagram explaining an effect of the reduction conversion according to the fifth embodiment.

  Embodiments of the present invention will be described below with reference to the drawings. Note that the term "image compression/decompression" in this specification may be read interchangeably with "image encoding/decoding."

(First embodiment)
As illustrated in FIG. 1, the image transmission device 100 according to the first embodiment of the present invention includes a dividing unit 101, a video processing unit 102, and an output unit 103. The image transmission device 100 generates transmission data 105 from the input image 104.

  The dividing unit 101 divides the input image 104 into images of a predetermined unit. Here, the input image 104 is, for example, an image for one frame of the video to be transmitted. The dividing unit 101 divides this input image 104 spatially. For example, as illustrated in FIG. 8, the dividing unit 101 divides the input image 104 into a plurality of predetermined unit images. In FIG. 8, a predetermined unit image is the set of pixels in an area whose width is half of one frame and whose height is 16 pixels. The predetermined unit image is not limited to that shown in FIG. 8. The dividing unit 101 inputs the predetermined unit image, its frame number, and its coordinate information to the video processing unit 102. The frame number identifies the frame (temporal position) to which the predetermined unit image belongs, and the coordinate information identifies the area (spatial position) that the predetermined unit image occupies in the frame.

  The video processing unit 102 applies any one of the first to fourth video processings described later to the predetermined unit image, and generates video data. The video processing unit 102 selects among the first to fourth video processings according to the transmission speed and the transmission error rate, and applies the selected one to the predetermined unit image. The selection may be made either before or after the first to fourth video processings are applied. The transmission speed and transmission error rate can be obtained by any existing method.
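As a concrete illustration, the selection described above can be sketched as a simple rule on the measured channel state. The patent leaves the selection criteria open, so the function name, thresholds, and units below are illustrative assumptions, not part of the disclosed apparatus:

```python
def select_video_processing(transmission_speed_mbps, error_rate,
                            speed_threshold=1000.0, error_threshold=1e-4):
    """Pick one of the four video processings from the channel state.

    Returns the processing number:
      1: no compression, no subsampling (fast, clean channel)
      2: compression only              (slow, clean channel)
      3: subsampling only              (fast, noisy channel)
      4: compression + subsampling     (slow, noisy channel)
    """
    fast = transmission_speed_mbps >= speed_threshold
    noisy = error_rate >= error_threshold
    if fast and not noisy:
        return 1
    if not fast and not noisy:
        return 2
    if fast and noisy:
        return 3
    return 4
```

The four branches map directly onto the suitability statements given for each video processing in the paragraphs that follow.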

As shown in FIG. 2, the video processing unit 102 includes a first video processing unit 201, a second video processing unit 202, a third video processing unit 203, and a fourth video processing unit 204.
The first video processing unit 201 performs the first video processing on the predetermined unit image 205. The first video processing includes neither compression nor subsampling of the predetermined unit image 205. That is, the first video processing unit 201 outputs the predetermined unit image 205 unchanged as the video data 208. The video data 208 generated by the first video processing has a large data amount and low error resistance. Therefore, the first video processing is suitable in a transmission environment where the transmission speed 206 is high and the transmission error rate 207 is low.

  The second video processing unit 202 performs the second video processing on the predetermined unit image 205. The second video processing unit 202 compresses the predetermined unit image 205 to generate the video data 208. Note that the second video processing does not include subsampling. The compression method, compression rate, and so on in the second video processing are arbitrary. The video data 208 generated by the second video processing has a small data amount and low error resistance. Therefore, the second video processing is suitable in a transmission environment where both the transmission speed 206 and the transmission error rate 207 are low.

  The third video processing unit 203 performs the third video processing on the predetermined unit image 205. The third video processing includes subsampling of the predetermined unit image 205. That is, the third video processing unit 203 separates the predetermined unit image 205 into a plurality of subsample images according to pixel phase, and outputs each subsample image as the video data 208. Note that the third video processing does not include compression. The subsampling method in the third video processing is arbitrary. The video data 208 generated by the third video processing has a large data amount and high error resistance. Therefore, the third video processing is suitable in a transmission environment where both the transmission speed 206 and the transmission error rate 207 are high.

  The fourth video processing unit 204 performs the fourth video processing on the predetermined unit image 205. The fourth video processing includes both subsampling and compression of the predetermined unit image 205. That is, the fourth video processing unit 204 separates the predetermined unit image 205 into a plurality of subsample images according to pixel phase, and compresses each subsample image to generate the video data 208. The subsampling method, compression method, compression rate, and so on in the fourth video processing are arbitrary. The video data 208 generated by the fourth video processing has a small data amount and high error resistance. Therefore, the fourth video processing is suitable in a transmission environment where the transmission speed 206 is low and the transmission error rate 207 is high.

  Although subsampling has been described as suitable for a transmission environment with a high transmission error rate 207, error concealment based on subsampling may fail on the receiving side when the transmission error rate 207 is extremely high. In such a case, the video processing unit 102 may apply video processing that assumes data retransmission. For example, the video processing unit 102 may generate the video data 208 by executing the second or fourth video processing at a higher compression rate than usual, and temporarily store it in a buffer (not shown) for retransmission.

  Hereinafter, specific examples of the subsampling in the third and fourth video processings will be described. Note that the third video processing unit 203 and the fourth video processing unit 204 may apply one of the subsample processes in a fixed manner, or may adaptively select and apply one of a plurality of subsample processes. When a plurality of subsample processes can be switched, information identifying the applied subsample process may be stored in the transmission data 105 described later.

  For example, as shown in FIGS. 9A and 9B, the subsampling may generate four subsample images by separating the pixels of the predetermined unit image into even and odd positions in both the horizontal and vertical directions. In other words, it may generate: a subsample image composed of the intersection pixels of odd-numbered horizontal lines and odd-numbered vertical lines; one composed of the intersection pixels of odd-numbered horizontal lines and even-numbered vertical lines; one composed of the intersection pixels of even-numbered horizontal lines and odd-numbered vertical lines; and one composed of the intersection pixels of even-numbered horizontal lines and even-numbered vertical lines. This subsampling can also be regarded as generating the four subsample images by separating pixels according to their relative position within 2 × 2 pixel blocks. FIG. 9A shows a predetermined unit image before subsampling. In FIG. 9A, the intersection pixels of odd-numbered horizontal lines and odd-numbered vertical lines are marked with triangles, those of odd-numbered horizontal lines and even-numbered vertical lines with rhombuses, those of even-numbered horizontal lines and odd-numbered vertical lines with squares, and those of even-numbered horizontal lines and even-numbered vertical lines with circles. In FIG. 9A, the coordinates of pixel 0 are (0, 0), and the coordinates of pixel 31 are (7, 3). Subsampling the predetermined unit image of FIG. 9A according to the above rule produces the four subsample images shown in FIG. 9B. In FIGS. 9A and 9B, the same pixels are denoted by the same reference numerals.
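A minimal sketch of this 2 × 2 pixel separation follows, assuming the image is given as a list of rows with 0-based indices (so the "odd-numbered lines" of the 1-based patent wording correspond to even 0-based indices); the function name and phase-keyed dictionary are illustrative:

```python
def subsample_2x2(image):
    """Separate an image (list of equal-length rows, even width and
    height) into four subsample images keyed by the pixel's position
    (dy, dx) inside each 2x2 block, as in FIGS. 9A/9B."""
    out = {}
    for dy in (0, 1):
        for dx in (0, 1):
            # Take every second row starting at dy, every second column
            # starting at dx: one pixel phase per subsample image.
            out[(dy, dx)] = [row[dx::2] for row in image[dy::2]]
    return out
```

For the 8 × 4 image of FIG. 9A (pixels numbered 0 to 31 in raster order), phase (0, 0) yields pixels 0, 2, 4, 6 and 16, 18, 20, 22, matching the triangle-marked subsample image.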

  Alternatively, the subsampling may generate two subsample images by separating the pixels of the predetermined unit image into even and odd positions in the horizontal direction, the vertical direction, or the diagonal direction (checkerboard pattern).
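The diagonal (checkerboard) variant can be sketched as a split on the parity of x + y; the function name is illustrative and an even image width is assumed so both outputs have equal-length rows:

```python
def subsample_checkerboard(image):
    """Split an image (list of rows, even width) into two subsample
    images by checkerboard parity: pixels where (x + y) is even go to
    the first image, the rest to the second."""
    even = [[p for x, p in enumerate(row) if (x + y) % 2 == 0]
            for y, row in enumerate(image)]
    odd = [[p for x, p in enumerate(row) if (x + y) % 2 == 1]
           for y, row in enumerate(image)]
    return even, odd
```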

  Together with the video data 208 obtained by applying the first to fourth video processings, the video processing unit 102 inputs to the output unit 103 the frame number of the predetermined unit image, the coordinate information of the predetermined unit image, the information identifying subsample/non-subsample, the pixel phase information, the information identifying compression/non-compression, and the size information of the video data 208. The information identifying subsample/non-subsample indicates whether subsampling was performed to generate the video data 208; when a plurality of subsample processes can be switched, it also includes identification information for the applied subsample process. The pixel phase information indicates which subsample image the video data 208 represents when subsampling has been performed. The information identifying compression/non-compression indicates whether compression was performed to generate the video data 208. The size information indicates, for example, the total size of the video data 208.

  The output unit 103 formats the data received from the video processing unit 102 to generate the transmission data 105 and outputs it. The transmission data 105 is transmitted to the image receiving apparatus via a transmission path (for example, a wireless transmission path) whose transmission speed and transmission error rate change over time. The format of the transmission data 105 is, for example, as shown in FIG. 10. In the format of FIG. 10, the position of each field may be changed as appropriate, and information not shown may be added.
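Since FIG. 10 is not reproduced in this text, the byte layout below is only an illustrative assumption of how the listed fields (frame number, coordinates, subsample information, pixel phase, compression flag, size, payload) might be serialized; the field order and widths are not from the patent:

```python
import struct

# frame_no (u32), x (u16), y (u16), subsample_info (u8),
# phase (u8), compressed (u8), size (u32) -- little-endian, no padding.
HEADER_FMT = "<IHHBBBI"

def pack_transmission_data(frame_no, x, y, subsample_info, phase,
                           compressed, video_data):
    """Serialize one unit of transmission data: header then payload."""
    header = struct.pack(HEADER_FMT, frame_no, x, y, subsample_info,
                         phase, 1 if compressed else 0, len(video_data))
    return header + video_data

def unpack_header(data):
    """Parse the fixed-size header back into a field dictionary."""
    fields = struct.unpack_from(HEADER_FMT, data)
    names = ("frame_no", "x", "y", "subsample_info", "phase",
             "compressed", "size")
    return dict(zip(names, fields))
```

The `size` field is what lets the receiving side locate the payload boundary, as the extraction unit described later relies on.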

  As described above, the image transmitting apparatus according to the present embodiment switches the application/non-application of video processing such as compression and subsampling based on the transmission speed and transmission error rate, which change over time. Therefore, the image transmitting apparatus according to the present embodiment can transmit high-quality video stably against fluctuations in transmission speed or transmission error rate.

  As illustrated in FIG. 3, the image receiving apparatus 300 according to the present embodiment has an input unit 301, a video data extraction unit 302, a decompression unit 303, an inverse subsample unit 304, a display area determination unit 305, and a display order determination unit 306. The image receiving apparatus 300 generates an output video 307 from the transmission data 105. The transmission data 105 is transmitted, for example, from the image transmitting apparatus 100 of FIG. 1 via a wireless transmission path.

  The input unit 301 inputs the transmission data 105 to the video data extraction unit 302. Note that, when an error is detected in the transmission data 105, the input unit 301 may request the transmitting side (such as the image transmitting apparatus 100) to retransmit part or all of the transmission data 105. For error detection, an existing error correction code may be used, for example.

  The video data extraction unit 302 extracts the video data 208 from the transmission data 105 according to the size information included in the transmission data 105. If the information identifying compression/non-compression in the transmission data 105 indicates that the video data 208 is compressed, the video data 208 is input to the decompression unit 303. If the information identifying compression/non-compression indicates that no compression was performed on the video data 208 but the information identifying subsample/non-subsample indicates that subsampling was performed, the video data 208 is input to the inverse subsample unit 304. If neither compression nor subsampling was performed on the video data 208, the video data 208 is input to the display area determination unit 305 as a predetermined unit image.
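The routing performed by the extraction unit can be sketched as a small dispatch function. The header field names ("compressed", "subsampled", "phase") and the callback signatures are illustrative assumptions:

```python
def route_video_data(header, video_data, decompress, inverse_subsample):
    """Apply the stages that the header flags call for, in the order the
    extraction unit routes them: decompression first, then inverse
    subsampling; data needing neither passes straight through as a
    predetermined unit image."""
    if header["compressed"]:
        video_data = decompress(video_data)
    if header["subsampled"]:
        video_data = inverse_subsample(video_data, header["phase"])
    return video_data
```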

  The decompression unit 303 decompresses the input video data. Note that the decompression performed by the decompression unit 303 corresponds to the compression applied on the transmitting side (the image transmitting apparatus 100 or the like). If the information identifying subsample/non-subsample in the transmission data 105 indicates that subsampling was performed on the video data 208, the decompressed video data is input to the inverse subsample unit 304; otherwise, the decompressed video data is input to the display area determination unit 305 as a predetermined unit image.

  The inverse subsample unit 304 receives video data corresponding to subsample images obtained by pixel-separating a predetermined unit image. The inverse subsample unit 304 performs inverse subsampling (pixel integration) on the plurality of video data according to the pixel phase information in the transmission data 105 to reconstruct the predetermined unit image. If some subsample images are missing due to an error, the inverse subsample unit 304 may interpolate the missing pixels, spatially or temporally, from neighboring intact pixels. Note that the inverse subsampling performed by the inverse subsample unit 304 corresponds to the subsampling applied on the transmitting side (the image transmitting apparatus 100 or the like). For example, if the subsampling shown in FIGS. 9A and 9B was performed on the transmitting side, the inverse subsample unit 304 reconstructs the predetermined unit image shown in FIG. 11B from the subsample images shown in FIG. 11A. When a plurality of subsample processes are switched on the transmitting side, the content of the inverse subsampling is determined according to the subsample process identification information included in the information identifying subsample/non-subsample. The inverse subsample unit 304 inputs the reconstructed predetermined unit image to the display area determination unit 305.
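The pixel integration that undoes the 2 × 2 separation of FIGS. 9A/9B can be sketched as follows; the phase-keyed dictionary mirrors the illustrative `subsample_2x2` layout assumed earlier, and the function name is likewise an assumption:

```python
def inverse_subsample_2x2(phases, height, width):
    """Reassemble a predetermined unit image of the given size from the
    four phase images of a 2x2 pixel separation. `phases` maps the
    in-block position (dy, dx) to its subsample image (list of rows)."""
    image = [[None] * width for _ in range(height)]
    for (dy, dx), sub in phases.items():
        for sy, row in enumerate(sub):
            for sx, pixel in enumerate(row):
                # Each subsample pixel returns to stride-2 coordinates
                # offset by its phase.
                image[2 * sy + dy][2 * sx + dx] = pixel
    return image
```

A missing phase image would simply leave `None` holes at its positions, which is where the spatial or temporal interpolation mentioned above would take over.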

  The display area determination unit 305 determines the display area (spatial position) of the predetermined unit image according to the coordinate information of the predetermined unit image in the transmission data 105 (see, for example, FIG. 8). The display order determination unit 306 determines the display frame of the predetermined unit image according to the frame number of the predetermined unit image in the transmission data 105, and generates the output video 307. The output video 307 is output to a display device such as a television.

  As described above, the image receiving apparatus according to the present embodiment corresponds to the image transmitting apparatus according to the present embodiment. Therefore, according to the image receiving apparatus of the present embodiment, high-quality video can be output stably even under fluctuations in the transmission speed or the transmission error rate.

(Second Embodiment)
As illustrated in FIG. 4, the image transmission device 400 according to the second embodiment of the present invention includes a block division unit 401, a compression unit 402, and an output unit 403. The image transmission device 400 performs compression processing on a predetermined unit of the image 404 to generate transmission data 406. The image transmission device 400 may be appropriately incorporated as a part of the second video processing unit 202 or the fourth video processing unit 204 in the image transmission device 100 of FIG.

  The block division unit 401 divides a predetermined unit of the image 404 to generate a plurality of image blocks. The shape and size of the image block are not limited. The block division unit 401 inputs a plurality of image blocks to the compression unit 402.

  The compression unit 402 compresses the image block according to the predetermined size 405. The predetermined size 405 is a parameter for designating the data amount after compression of each image block. That is, the compression unit 402 compresses the image block so as not to exceed the predetermined size 405. The predetermined size 405 may be a variable value or a fixed value. If the predetermined size 405 is a variable value, it may be changed so as to follow the fluctuation of the transmission rate. The compression unit 402 inputs compressed data of an image block (hereinafter simply referred to as block compressed data) to the output unit 403.

  As shown in FIG. 12, the output unit 403 generates, as the transmission data 406, video data including a group of a plurality of block compressed data corresponding to the predetermined unit image 404 and the predetermined size information, together with the total size information of the video data. The predetermined size information is information indicating the predetermined size 405. Note that the predetermined size information may be handled as part of the size information of the video data rather than as part of the video data. The total size information is information relating to the total size of the video data. The transmission data 406 is transmitted to the image receiving apparatus via, for example, a wireless transmission path. The image receiving apparatus can access (random access) arbitrary block compressed data based on the predetermined size information and the total size information. Specifically, the image receiving apparatus can extract the group of block compressed data based on the total size information, and can extract individual block compressed data based on the predetermined size information.
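The random-access property can be sketched arithmetically as follows, assuming every block compressed data occupies exactly the predetermined size (padding included) and the video data begins with a header of known length; this layout is illustrative, not the exact structure of FIG. 12.

```python
# Hedged sketch of random access to block compressed data: with a
# fixed per-block size, the start of the i-th block is computed
# directly rather than by parsing preceding blocks.

def block_offset(header_size, predetermined_size, block_index):
    """Byte offset of block `block_index` inside the video data."""
    return header_size + block_index * predetermined_size

def block_count(total_size, header_size, predetermined_size):
    """Number of block compressed data units in the video data."""
    return (total_size - header_size) // predetermined_size
```

Because no block's position depends on its neighbors' contents, an error in one block never prevents locating the others, which is what confines error propagation to the block level.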

  As described above, the image transmission apparatus according to the present embodiment compresses a plurality of image blocks obtained by dividing a predetermined unit of image according to a predetermined size, and transmits the compressed image blocks together with the predetermined size information and total size information. Therefore, according to the image transmitting apparatus according to the present embodiment, random access to individual block compressed data is possible, so that the error propagation range can be suppressed to the block level. That is, even if the transmission error rate temporarily increases, a high quality image can be stably transmitted.

  As illustrated in FIG. 5, the image reception device 500 according to the present embodiment includes an input unit 501, a separation unit 502, and a decompression unit 503. The image receiving device 500 generates an output image 504 from the transmission data 406. The transmission data 406 is transmitted from, for example, the image transmission device 400 in FIG. 4 via a wireless transmission path. Note that the image receiving device 500 may be appropriately incorporated as a part of the decompression unit 303 in the image receiving device 300 of FIG.

  The input unit 501 inputs the transmission data 406 to the separation unit 502. Note that when an error is detected in the transmission data 406, the input unit 501 may request the transmission side (image transmission apparatus 400) to retransmit part or all of the transmission data 406.

  Based on the total size information in the transmission data 406, the separation unit 502 separates the predetermined size information from the block compressed data group. The separation unit 502 inputs the predetermined size information and the block compressed data group to the decompression unit 503.

  The decompressing unit 503 determines the position (for example, start position) of each block compressed data in the block compressed data group based on the predetermined size information. Since each block compressed data is assigned a predetermined size, the decompression unit 503 can uniquely identify the position of each block compressed data. The decompressing unit 503 decompresses individual block compressed data and generates an output image 504. Note that the decompression processing performed by the decompression unit 503 corresponds to the compression processing applied on the transmission side (image transmission apparatus 400 or the like).

  If the decompression unit 503 detects an error in a part of the block compressed data, it may discard that block compressed data and interpolate the corresponding image block (spatially or temporally) based on normal image blocks in the vicinity. Alternatively, it may discard that block compressed data, request the transmission side to retransmit it, and decompress the retransmitted block compressed data. These two types of processing may be switched based on, for example, the transmission speed. That is, the decompression unit 503 may perform image block interpolation if the transmission rate is low, and may make a retransmission request if the transmission rate is high.
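The rate-dependent switching between the two error-handling policies can be sketched as follows; the threshold value is an assumption for demonstration, since the embodiment does not fix a particular switching point.

```python
# Illustrative sketch of choosing interpolation (low rate) or
# retransmission (high rate) when a block error is detected.

RATE_THRESHOLD_MBPS = 100  # hypothetical switching point

def handle_block_error(transmission_rate_mbps):
    """Return the error-handling policy for the current rate."""
    if transmission_rate_mbps < RATE_THRESHOLD_MBPS:
        return "interpolate"   # avoid extra traffic on a slow link
    return "retransmit"        # a fast link can afford the resend
```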

  As described above, the image receiving apparatus according to the present embodiment corresponds to the image transmitting apparatus according to the present embodiment. Therefore, according to the image receiving apparatus according to the present embodiment, random access to individual block compressed data is possible, so that the error propagation range can be suppressed to the block level. That is, even if the transmission error rate temporarily increases, a high-quality image can be output stably.

(Third embodiment)
As illustrated in FIG. 6, an image transmission apparatus 600 according to the third embodiment of the present invention includes a block division unit 601, a compression unit 610, and an output unit 603. The image transmission device 600 performs compression processing on the image 604 in a predetermined unit to generate transmission data 606. The image transmission device 600 may be appropriately incorporated as part of the second video processing unit 202 or the fourth video processing unit 204 in the image transmission device 100 of FIG.

  The block dividing unit 601 divides a predetermined unit image 604 to generate a plurality of image blocks. The shape and size of the image block are not limited. The block division unit 601 inputs a plurality of image blocks to the compression unit 610.

  The compression unit 610 compresses each image block according to a predetermined size 605. The predetermined size 605 is a parameter designating the post-compression data amount of each image block. That is, the compression unit 610 compresses the image block so as not to exceed the predetermined size 605. The predetermined size 605 may be a variable value or a fixed value. If the predetermined size 605 is a variable value, it may be changed so as to follow fluctuations in the transmission rate. The compression unit 610 can select and apply any one of a plurality of compression methods in units of blocks. As illustrated in FIG. 6, the compression unit 610 includes, for example, a first compression unit 611, a second compression unit 612, and a third compression unit 613.

  The first compression unit 611 applies to the image block a first compression process (lossless compression) such as DPCM (Differential Pulse Code Modulation), which compresses inter-pixel differences. The second compression unit 612 applies to the image block a second compression process (lossy compression) that exploits the energy concentration in the low-frequency region produced by an orthogonal transform such as the discrete cosine transform (DCT). The third compression unit 613 applies to the image block a third compression process (fixed-length pixel-unit compression) using a so-called color palette. The block compressed data generated by the color palette method includes a correspondence table (color palette) of pixel values and index numbers, and an index number assigned to each pixel in the block.
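The color-palette (third) compression process can be sketched as follows: build a table of the distinct pixel values in a block and replace each pixel with its index into that table. Details such as the palette size limit and the bit packing of index numbers are omitted assumptions.

```python
# Minimal sketch of color-palette compression of one image block.

def palette_compress(block_pixels):
    """Return (palette, indices) for a list of pixel values."""
    palette = []
    indices = []
    for p in block_pixels:
        if p not in palette:
            palette.append(p)  # first occurrence: new palette entry
        indices.append(palette.index(p))
    return palette, indices

def palette_decompress(palette, indices):
    """Reconstruct the block from the palette and index numbers."""
    return [palette[i] for i in indices]
```

This style of compression suits artificial images with few distinct colors, which is consistent with the method selection described below.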

  The compression unit 610 may select a compression method according to the properties of the image block. For example, the second compression process is generally better suited to natural images than the third compression process, while the third compression process is generally better suited to artificial images (for example, computer graphics) than the second compression process. The compression unit 610 may also apply the first to third compression processes on a trial basis and select the compression process that minimizes the compression distortion. As shown in FIG. 13, the compression unit 610 adds a compression method identifier to the compressed data obtained by compressing an image block, and inputs the result to the output unit 603 as block compressed data.
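The trial-based selection can be sketched as follows: apply each candidate compression process, measure the distortion of its reconstruction, and keep the method with the smallest distortion. The candidate codecs and the squared-error distortion measure are stand-ins, not the embodiment's actual first to third compression processes.

```python
# Hedged sketch of minimum-distortion compression method selection.

def distortion(original, reconstructed):
    """Sum of squared pixel differences (one possible measure)."""
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed))

def select_method(block, candidates):
    """candidates: dict of identifier -> (compress, decompress) pair.
    Returns (identifier, compressed_data) with minimum distortion."""
    best = None
    for ident, (compress, decompress) in candidates.items():
        data = compress(block)
        d = distortion(block, decompress(data))
        if best is None or d < best[0]:
            best = (d, ident, data)
    return best[1], best[2]
```

The chosen identifier would then be attached to the block compressed data as the compression method identifier, so the receiver knows which decompression process to apply.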

  As shown in FIG. 13, the output unit 603 generates, as the transmission data 606, video data including a group of a plurality of block compressed data corresponding to the predetermined unit image 604 and the predetermined size information, together with the total size information of the video data. The predetermined size information is information indicating the predetermined size 605. Note that the predetermined size information may be handled as part of the size information of the video data rather than as part of the video data. The total size information is information relating to the total size of the video data. The transmission data 606 is transmitted to the image receiving apparatus via, for example, a wireless transmission path. The image receiving apparatus can access (random access) arbitrary block compressed data based on the predetermined size information and the total size information. Specifically, the image receiving apparatus can extract the group of block compressed data based on the total size information, and can extract individual block compressed data based on the predetermined size information.

  As described above, the image transmission apparatus according to the present embodiment compresses a plurality of image blocks obtained by dividing a predetermined unit of image according to a predetermined size, and transmits the compressed image blocks together with the predetermined size information and total size information. Therefore, according to the image transmitting apparatus according to the present embodiment, random access to individual block compressed data is possible, so that the error propagation range can be suppressed to the block level. That is, even if the transmission error rate temporarily increases, a high quality image can be stably transmitted. Further, the image transmission apparatus according to the present embodiment can switch a plurality of compression methods in units of blocks. Therefore, according to the image transmission apparatus according to the present embodiment, it is possible to apply compression processing suitable for the properties of individual image blocks.

  As illustrated in FIG. 7, the image receiving apparatus 700 according to the present embodiment includes an input unit 701, a separation unit 702, and a decompression unit 710. The image receiving apparatus 700 generates an output image 704 from the transmission data 606. The transmission data 606 is transmitted from the image transmission apparatus 600 in FIG. 6 or the like via, for example, a wireless transmission path. Note that the image receiving apparatus 700 may be appropriately incorporated as part of the decompression unit 303 in the image receiving apparatus 300 of FIG. 3.

  The input unit 701 inputs the transmission data 606 to the separation unit 702. When an error is detected in the transmission data 606, the input unit 701 may request the transmission side (image transmission apparatus 600) to retransmit part or all of the transmission data 606.

  The separation unit 702 separates the predetermined size information and the block compressed data group based on the total size information in the transmission data 606. The separation unit 702 inputs the predetermined size information and the block compressed data group to the decompression unit 710.

  The decompression unit 710 determines the position (for example, the start position) of each block compressed data in the block compressed data group based on the predetermined size information. Since a predetermined size is assigned to each block compressed data, the decompression unit 710 can uniquely identify the position of each block compressed data. The decompression unit 710 decompresses individual block compressed data according to the compression method identifier, and generates the output image 704. The decompression process performed by the decompression unit 710 corresponds to the compression process applied on the transmission side (the image transmission apparatus 600 or the like). As illustrated in FIG. 7, the decompression unit 710 includes, for example, a first decompression unit 711, a second decompression unit 712, and a third decompression unit 713.

  The first decompression unit 711 corresponds to the first compression unit 611 in FIG. 6. That is, the first decompression unit 711 applies the first decompression process corresponding to the first compression process to the block compressed data, and generates the output image 704. The second decompression unit 712 corresponds to the second compression unit 612 in FIG. 6. That is, the second decompression unit 712 applies the second decompression process corresponding to the second compression process to the block compressed data, and generates the output image 704. The third decompression unit 713 corresponds to the third compression unit 613 in FIG. 6. That is, the third decompression unit 713 applies the third decompression process corresponding to the third compression process to the block compressed data, and generates the output image 704.

  If the decompression unit 710 detects an error in a part of the block compressed data, it may discard that block compressed data and interpolate the corresponding image block (spatially or temporally) based on normal image blocks in the vicinity. Alternatively, it may discard that block compressed data, request the transmission side to retransmit it, and decompress the retransmitted block compressed data. These two types of processing may be switched based on, for example, the transmission speed. That is, the decompression unit 710 may perform image block interpolation when the transmission rate is low, and may make a retransmission request when the transmission rate is high.

  As described above, the image receiving apparatus according to the present embodiment corresponds to the image transmitting apparatus according to the present embodiment. Therefore, according to the image receiving apparatus according to the present embodiment, random access to individual block compressed data is possible, so that the error propagation range can be suppressed to the block level. That is, even if the transmission error rate temporarily increases, a high-quality image can be output stably. Further, the image receiving apparatus according to the present embodiment switches and applies a plurality of decompression methods in units of blocks according to the compression method identifier. Therefore, according to the image receiving apparatus according to the present embodiment, it is possible to output a high-quality image suitable for the properties of individual image blocks.

(Fourth embodiment)
The image transmission apparatus 800 according to the fourth embodiment of the present invention corresponds to a configuration in which the compression unit 402 is replaced with another compression unit 810 in the image transmission apparatus 400 of FIG. In the following description, the description will focus on the differences between the compression unit 810 and the compression unit 402. The image transmission device 800 may be appropriately incorporated as a part of the second video processing unit 202 or the fourth video processing unit 204 in the image transmission device 100 of FIG.

  The compression unit 810 performs the same compression process as the compression unit 402. That is, it compresses the image block so as not to exceed the predetermined size 405. At this time, the size of the block compressed data does not necessarily match the predetermined size 405, and a remaining area (padding area) may occur. This remaining area is usually filled with meaningless padding bits. As shown in FIG. 14, the compression unit 810 instead stores a predetermined bit pattern in this remaining area. The predetermined bit pattern is arbitrary: it may be, for example, a run of "0"s or a run of "1"s, or a feature amount such as the frame number of the predetermined unit image or the coordinate information of the predetermined unit image (its higher bits, lower bits, or the like). However, the predetermined bit pattern must be generated according to a rule common to the transmission side (image transmission apparatus) and the reception side (image reception apparatus).
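Filling the padding area with a pattern derived from the frame number can be sketched as follows. Repeating the frame number's low byte is an assumed rule chosen for illustration; the embodiment only requires that the transmission and reception sides share the same rule.

```python
# Minimal sketch of storing a predetermined bit pattern in the
# remaining (padding) area of each block compressed data.

def make_padding(frame_number, pad_len):
    """Deterministic padding bytes both sides can regenerate."""
    return bytes([frame_number & 0xFF]) * pad_len

def pad_block(compressed, predetermined_size, frame_number):
    """Fill the remaining area up to the predetermined size."""
    pad_len = predetermined_size - len(compressed)
    assert pad_len >= 0, "compressed data must not exceed the predetermined size"
    return compressed + make_padding(frame_number, pad_len)
```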

  As described above, the image transmission apparatus according to the present embodiment stores the bit pattern generated according to the common rule in the remaining area that is less than the predetermined size in each block compressed data. Therefore, according to the image transmission apparatus according to the present embodiment, an error can be determined by matching / mismatching of bit patterns between transmission and reception. That is, even if the transmission error rate temporarily increases, a high quality image can be stably transmitted.

  The image receiving apparatus 900 according to the present embodiment corresponds to a configuration in which the decompression unit 503 in the image receiving apparatus 500 of FIG. 5 is replaced with another decompression unit 910. The following description focuses on the differences between the decompression unit 910 and the decompression unit 503.

  The decompression unit 910 performs the same decompression process as the decompression unit 503. In addition, if the compressed data of an image block is smaller than the predetermined size, the decompression unit 910 generates a bit pattern according to the rule shared with the image transmission apparatus 800. The decompression unit 910 collates the generated bit pattern with the bit pattern stored in the block compressed data. If the two do not match, the decompression unit 910 detects an error in that block compressed data. The decompression unit 910 also detects an error when the compressed data of the image block exceeds the predetermined size. Note that when the decompression unit 910 detects an error in block compressed data, it performs image block interpolation, a retransmission request, and the like in the same manner as the decompression unit 503.
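The receiver-side padding check can be sketched as follows: regenerate the expected bit pattern with the shared rule and compare it with the received remaining area. The rule used here (the frame number's low byte repeated) is the same illustrative assumption as on the transmission side.

```python
# Hedged sketch of detecting block errors via the padding bit pattern.

def make_padding(frame_number, pad_len):
    """Same shared rule as on the transmission side."""
    return bytes([frame_number & 0xFF]) * pad_len

def check_block(received, compressed_len, predetermined_size, frame_number):
    """Return True if the block's padding area matches the expected pattern."""
    if compressed_len > predetermined_size:
        return False  # data ran past the block boundary: error
    expected = make_padding(frame_number, predetermined_size - compressed_len)
    return received[compressed_len:predetermined_size] == expected
```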

  As described above, the image receiving apparatus according to the present embodiment corresponds to the image transmitting apparatus according to the present embodiment. Therefore, according to the image receiving apparatus according to the present embodiment, an error can be determined by matching / mismatching of bit patterns between transmission and reception. That is, even if the transmission error rate temporarily increases, a high-quality image can be output stably.

(Fifth embodiment)
FIG. 15 is a block diagram of an image transmission apparatus according to the fifth embodiment of the present invention. The input image 11 is input to the image input unit 12. The image input unit 12 inputs the input image 11 to the format conversion unit 13 for each predetermined unit image (for example, an image region of a predetermined number of continuous lines). The format conversion unit 13 performs horizontal reduction conversion for each signal component of the predetermined unit image. The format conversion unit 13 may further perform color space conversion (for example, conversion from RGB to YCbCr). The predetermined unit image that has undergone the horizontal reduction conversion is separated by the pixel separation unit 14 into pixels of a plurality of phases in the horizontal direction. The compression unit 15 performs compression processing (for example, compression using the DCT (discrete cosine transform)) on the predetermined unit image independently for each separated pixel phase. The compressed data is packetized by the packetizing unit 16. The packetized data is, for example, OFDM-modulated by the transmission unit 17 and transmitted as the transmission data 18 by millimeter-wave radio in the 60 GHz band. For example, the transmission unit 17 transmits the transmission data 18 using a transmission channel within the frequency range from 57 GHz to 66 GHz. Each of the format conversion unit 13, the pixel separation unit 14, and the compression unit 15 has a plurality of modes, including a bypass (unprocessed) mode, and can change its mode for each predetermined unit image. The transmission data 18 includes the reduction rate information 19 of the format conversion unit 13, the pixel separation type and pixel separation phase information 20 of the pixel separation unit 14, and the compression mode information of the compression unit 15.

  FIG. 16 is a block diagram of an image receiving apparatus according to this embodiment. The image receiving apparatus shown in FIG. 16 receives the transmission data 21 including the compressed image data compressed by the image transmitting apparatus shown in FIG. 15, and decompresses the compressed image data. The transmission data 21 is, for example, a 60 GHz band millimeter-wave radio signal modulated by OFDM. The transmission data 21 is transmitted using, for example, a transmission channel within the frequency range from 57 GHz to 66 GHz. The receiving unit 22 receives the transmission data 21 and performs demodulation processing on it. The depacketizing unit 23 extracts the compressed image data, the reduction rate information 29, and the pixel separation type and pixel separation phase information 30 from the demodulated transmission data 21. The reduction rate information 29 and the pixel separation type and pixel separation phase information 30 are designated for each predetermined unit of compressed image data. The decompression unit 24 performs decompression processing on the received compressed image data for each predetermined unit of compressed image data. In addition to the reduction rate information 29 described above, the decompression unit 24 uses information on the color space of the input image (described later), information on the width of the image frame, and information indicating the validity/invalidity of pixel separation, each designated per unit of compressed image data, to perform the decompression processing. The pixel combination unit 25 performs pixel combination processing of the decompressed image using the pixel separation type and pixel separation phase information 30. The format conversion unit 26 uses the reduction rate information 29 to perform horizontal enlargement processing for each signal component of the pixel-combined image. The format conversion unit 26 may further perform color space conversion (for example, conversion from YCbCr to RGB). The image output unit 27 outputs the format-converted image as a reproduced image 28 in line units.

  FIG. 17 is a diagram illustrating an example of a transmission data structure according to the fifth embodiment. An image of a predetermined unit is compressed independently for each separated pixel phase. Here, the compressed data compressed independently is called a slice. Each slice is divided into a plurality of blocks having a fixed block size (for example, 8 × 8 pixels) and compressed in units of blocks. A slice header having a fixed bit length is given to each slice, and a combination of the slice header and the slice is called a compressed slice data unit. The compressed slice data unit is divided into a plurality of data by the packetizing unit 16, and a header is added to each of the divided data and packetized.

  FIG. 18 is a diagram illustrating an example of a slice header according to the fifth embodiment. The slice header includes a 2-bit partition_type indicating the pixel separation type and a 2-bit h_subsampling data field indicating the horizontal reduction ratio.

  FIG. 19 is a diagram for explaining the pixel separation type information partition_type according to the fifth embodiment. partition_type = 0 indicates a pixel separation type in which the image lines are divided into left and right parts in the horizontal direction. In addition, when PartitionEnable (described later) is 0 (pixel separation off), partition_type is always set to 0, meaning that there is no pixel separation. FIG. 21 shows an example of horizontal left/right division. In this horizontal left/right division, eight continuous lines form a predetermined unit image; the left half of the image frame is assigned to partition 0 (pixel separation phase 1) and the right half to partition 1 (pixel separation phase 2), and compression is performed independently for each separated partition (pixel phase). partition_type = 1 indicates a pixel separation type in which pixels are separated into horizontally even-numbered and odd-numbered pixels. FIG. 22 shows an example of horizontal even/odd pixel separation. In this horizontal even/odd pixel separation, eight continuous lines form a predetermined unit image; the even-numbered pixels in the horizontal direction are separated as partition 0 (pixel separation phase 1) and the odd-numbered pixels as partition 1 (pixel separation phase 2), and compression is performed independently for each separated partition (pixel phase). partition_type = 2 indicates that pixels are separated in a checkerboard pattern. FIG. 23 shows an example of checkerboard pixel separation. In this checkerboard pixel separation, eight continuous lines form a predetermined unit image; the even-numbered pixels of even lines and the odd-numbered pixels of odd lines are separated as partition 0 (pixel separation phase 1), while the odd-numbered pixels of even lines and the even-numbered pixels of odd lines are separated as partition 1 (pixel separation phase 2), and compression is performed independently for each separated partition (pixel phase).
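The three pixel separation types above can be sketched as a single function that assigns each pixel at (line, column) to partition 0 or 1; the parameter names are illustrative, and frame_width is needed only for the left/right split.

```python
# Hedged sketch of partition assignment for each partition_type.

def partition_of(partition_type, line, column, frame_width):
    """Return 0 or 1: the partition (pixel separation phase) of a pixel."""
    if partition_type == 0:            # horizontal left/right division
        return 0 if column < frame_width // 2 else 1
    if partition_type == 1:            # horizontal even/odd pixel separation
        return column % 2
    if partition_type == 2:            # checkerboard pixel separation
        return (line + column) % 2
    raise ValueError("unknown partition_type")
```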

  In image compression in units of a predetermined number of lines (here, 8 lines), performing pixel separation in the line direction (vertical direction) would double the number of vertical lines of the predetermined unit image before separation (here, to 16 lines). Doubling the number of vertical lines increases the compression delay and the amount of image memory. In image decompression, it likewise increases the delay and the amount of image memory needed to scan-convert the predetermined unit image into a line-unit output image. In contrast, if pixel separation is limited to the horizontal direction, the pixels can be separated into two phases while maintaining the number of lines of the predetermined unit image, so the increases in delay and image memory associated with compression and decompression processing can be suppressed.

  In addition, when PartitionEnable (described later) is 0, that is, when pixel separation is off, overhead such as slice headers and packetization can be reduced by compressing eight continuous lines as one slice of a predetermined unit. When PartitionEnable is 1, that is, when pixel separation is on, one of the pixel separation types is selected for every eight continuous lines: horizontal left/right division (which splits left and right without pixel-unit separation), horizontal even/odd pixel separation in pixel units, or checkerboard pixel separation. By selecting the pixel separation type so that the balance between error resilience and coding efficiency is optimal for the transmission rate and error rate, which change from moment to moment in cycles shorter than one frame, compressed transmission with little image quality degradation can be realized. Moreover, regardless of which pixel separation type is selected, the eight continuous lines are compressed as two slices, so the subsequent packetizing unit 16 and transmission unit 17 can operate without being aware of (regardless of) the pixel separation type.

  FIG. 20 is a diagram for explaining h_subsampling, which indicates the horizontal reduction ratio according to the fifth embodiment. h_subsampling = 0 indicates that compression was performed without horizontal reduction. h_subsampling = 1 indicates that only the color difference signals were reduced by 1/2 in the horizontal direction. h_subsampling = 2 indicates that all image signal components (for example, R, G, B or Y, Cb, Cr) were reduced by 1/2 in the horizontal direction before compression. On the receiving side, when h_subsampling = 1, a horizontal 2x enlargement process is applied to the color difference signals of the decompressed and pixel-combined image. When h_subsampling = 2, the horizontal 2x enlargement process is applied to all image signal components.

  By switching among no horizontal reduction, color-difference-only reduction, and all-component reduction according to the transmission rate, which changes from moment to moment in cycles shorter than one frame, and the nature (compression difficulty) of each predetermined unit image, high-quality image transmission that optimizes the balance between compression distortion and resolution loss due to reduction becomes possible.

FIG. 24 shows the identifier SliceIndex of a compressed slice. The slice identifier is included in the packet header or slice header in FIG. In the slice identifier, when PartitionEnable (to be described later) is 1 (pixel separation on), the lower 1 bit holds the partition (pixel phase) number (that is, 0 or 1) after pixel separation. The partition (pixel phase) number of each slice can therefore be extracted from this lower bit.
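Since the passage states that the lower 1 bit of SliceIndex carries the partition (pixel phase) number when PartitionEnable is 1, the extraction can be sketched as a single mask operation (the helper name is illustrative):

```python
# Extract the partition (pixel phase) number from a slice identifier,
# as described in the text: lower 1 bit, valid only when pixel
# separation is enabled.

def partition_of(slice_index, partition_enable):
    """Return the partition (phase) number of a slice, or None if
    pixel separation is off (PartitionEnable = 0)."""
    return (slice_index & 1) if partition_enable == 1 else None
```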

  FIG. 25 is a diagram illustrating an example of transmission image format information according to the fifth embodiment. The transmission image format information includes color space information ColorSpace of the input image, information PartitionEnable indicating the validity / invalidity of pixel separation, and video frame size information VideoFrameInfo. The transmission image format information is transmitted together with the compressed image data, or is transmitted from the transmission side to the reception side when the transmission side and the reception side are connected.

  FIG. 26 is a diagram for explaining the color space information ColorSpace of the input image. ColorSpace values 0, 1, and 2 indicate that the input image is an RGB, YCbCr422, or YCbCr444 image signal, respectively.

  FIG. 27 is a diagram for explaining the information PartitionEnable indicating the validity/invalidity of pixel separation. PartitionEnable values 0 and 1 indicate pixel separation off (invalid) and pixel separation on (valid), respectively.

  FIG. 28 is a diagram illustrating the color space of compressed data according to the fifth embodiment. The color space of the compressed data indicates the color space of the image input to the compression unit 15 and of the image output from the decompression unit 24, and is determined by the color space information ColorSpace of the input image (of the image transmission device) and by h_subsampling, which indicates the horizontal reduction ratio. When horizontal reduction is off (h_subsampling = 0) or applied to all components (h_subsampling = 2), the color space of the compressed data matches the color space of the input image. On the other hand, horizontal 1/2 reduction of only the color difference signal (h_subsampling = 1) is applicable only when the color space of the input image is YCbCr444, and the color space of the compressed data then becomes YCbCr422. According to FIG. 28, the color space of the compressed data is derived from the color space of the input image and the horizontal reduction ratio, and this color space information is set for each slice in the compression unit 15 of the transmission device and the decompression unit 24 of the reception device.
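The mapping just described (reduction off or all-component reduction preserves the input color space; color-difference-only reduction is valid only for YCbCr444 input and yields YCbCr422) can be sketched as follows; the function name and string labels are illustrative:

```python
# Derive the compressed-data color space from the input color space and
# h_subsampling, following the rules stated in the text for FIG. 28.

def compressed_color_space(input_color_space, h_subsampling):
    """input_color_space: 'RGB', 'YCbCr422' or 'YCbCr444'
    (corresponding to ColorSpace 0/1/2)."""
    if h_subsampling in (0, 2):
        # reduction off, or all components 1/2: color space unchanged
        return input_color_space
    if h_subsampling == 1:
        # color-difference-only 1/2 requires YCbCr444 input -> YCbCr422
        if input_color_space != "YCbCr444":
            raise ValueError("h_subsampling=1 requires YCbCr444 input")
        return "YCbCr422"
    raise ValueError(h_subsampling)
```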

The total number NB of blocks (in this case, 8 × 8 pixel blocks) of all components constituting a slice is calculated as follows.

  Here, NBF is a block number coefficient of the compressed data, determined by the color space information ColorSpace of the input image and by h_subsampling indicating the horizontal reduction ratio, as shown in FIG. frame_width is the width of the image frame and can be derived from the image frame size information VideoFrameSizeInfo. In the above equation, when pixel separation is not performed (PartitionEnable = 0), one slice is composed of 8 consecutive lines of the image, and when pixel separation is used (PartitionEnable = 1), 8 consecutive lines are composed into two slices. However, combinations of pixel separation and horizontal reduction in which a component does not form a whole number of 8 × 8 pixel blocks are prohibited. That is, when the color space of the compressed data is RGB or YCbCr444, NB must be a multiple of 3; when it is YCbCr422, NB must be a multiple of 4. According to the above equation, the number of blocks of compressed data for each slice is derived from the color space of the input image, the width of the image frame, the presence or absence of pixel separation, and the horizontal reduction ratio, and is set for each slice in the compression unit 15 of the transmission device and the decompression unit 24 of the reception device.
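The equation itself is not reproduced in this text, so the following is only a hedged reconstruction consistent with the stated constraints (NB a multiple of 3 for RGB or YCbCr444 compressed data, a multiple of 4 for YCbCr422, and halved when 8 lines are split into two slices). The NBF values and the divisor below are assumptions, not the patent's actual table:

```python
# Assumed blocks per 16 pixels of width in one 8-line band: three full-width
# components give 6 (RGB, YCbCr444); for YCbCr422 the half-width chroma gives
# 2 + 1 + 1 = 4. These NBF values are illustrative guesses.
NBF_ASSUMED = {"RGB": 6, "YCbCr444": 6, "YCbCr422": 4}

def block_count(compressed_color_space, frame_width, partition_enable):
    """Hedged reconstruction of the slice block count NB."""
    nbf = NBF_ASSUMED[compressed_color_space]
    # pixel separation splits the 8-line band into two slices, halving NB
    denom = 16 * (2 if partition_enable == 1 else 1)
    if (nbf * frame_width) % denom != 0:
        # combinations not yielding whole 8x8 blocks are prohibited
        raise ValueError("prohibited separation/reduction combination")
    return nbf * frame_width // denom
```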

  FIG. 30 shows an example of the pixel combination unit 25. The pixel combination unit 25 receives the image signal 33 decompressed by the decompression unit 24, information 30a on the pixel separation type and pixel separation phase from the depacketizing unit 23, and reception error information 30b on the compressed data of each phase. The reception error information 30b is based on the reception error detection results in the reception unit 22 and the depacketizing unit 23. The selection unit 34 temporarily stores the image signal 33 in the image buffer (phase 1) 35 or the image buffer (phase 2) 38 according to the information 30a. The interpolation unit 36 generates an interpolated image of phase 2 from the image signal in the image buffer (phase 1) 35, and the interpolation unit 37 generates an interpolated image of phase 1 from the image signal in the image buffer (phase 2) 38. The selection unit 39 selects the pixels to output to the combining unit 40 according to the reception error information 30b: for a phase in which an error has occurred, it outputs the output of the interpolation unit 36 or the interpolation unit 37; for a phase in which no error has occurred, it outputs the contents of the image buffer (phase 1) 35 or the image buffer (phase 2) 38. The combining unit 40 combines the pixels output from the selection unit 39 to generate and output an image 41. In this way, because the image of the predetermined unit is separated and compressed independently, and the pixels of a phase in which a reception error occurred are interpolated from the pixels of the normally received phase, the image can be reproduced without missing regions, realizing image transmission that is robust against transmission errors.
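The per-phase choice performed by the selection unit 39 can be sketched as follows. The function, its signature, and the use of a caller-supplied interpolator are illustrative; the actual interpolation filters of units 36 and 37 are not specified in this passage.

```python
# Sketch of the selection-unit-39 behaviour: a phase received in error is
# replaced by an interpolation generated from the other phase; a phase
# received without error passes through from its image buffer.

def select_phases(phase1, phase2, err1, err2, interpolate):
    """Return the (possibly concealed) pixel lists for phases 1 and 2.

    phase1/phase2: decoded pixels of each partition (image buffers 35/38)
    err1/err2:     reception-error flags for each partition (info 30b)
    interpolate:   function generating one phase from the other (units 36/37)
    """
    out1 = interpolate(phase2) if err1 else phase1
    out2 = interpolate(phase1) if err2 else phase2
    return out1, out2

# usage with a trivial copy "interpolator" standing in for the real filter
p1, p2 = select_phases([1, 2, 3], [9, 9, 9], err1=True, err2=False,
                       interpolate=lambda other: list(other))
```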

  FIG. 31 shows the effect of pixel separation according to this embodiment. Pixel separation generally reduces compression efficiency for compression methods, such as DCT, that exploit correlation between adjacent pixels. For example, horizontal even/odd pixel separation doubles the distance between pixels in the horizontal direction, reducing horizontal correlation while maintaining the correlation between vertically adjacent pixels. Checkerboard pixel separation, on the other hand, doubles both the vertical and horizontal pixel distances while maintaining the diagonal pixel distance. A natural image signal generally has high correlation between pixels in the vertical and horizontal directions. Horizontal left/right division maintains pixel continuity; although some reduction in compression efficiency is expected from dividing the lines horizontally into two slices, performance is not greatly degraded compared with no pixel separation.

  Therefore, compression efficiency decreases in the order: horizontal left/right division or no pixel separation, horizontal even/odd pixel separation, checkerboard pixel separation. When a reception error occurs in one of the two separated phases, interpolated pixels are generated from the normally received phase. As shown in FIG. 32A, an image signal separated into horizontal even and odd pixels retains only half the band in the horizontal direction, so the interpolated image is blurred horizontally. In contrast, as shown in FIG. 32B, an image signal separated in checkerboard fashion retains the horizontal and vertical bands that are important for subjective image quality. The subjective image quality of the interpolated image is therefore higher with checkerboard pixel separation than with horizontal even/odd pixel separation. In natural image signals, the diagonal frequency components are often small because of the effects of compression and filtering, so checkerboard pixel separation also gives higher image quality than horizontal even/odd pixel separation in terms of PSNR (Peak Signal-to-Noise Ratio). With horizontal left/right division or no pixel separation, however, an 8-line image area constituting a slice is lost when a reception error occurs, and interpolating the missing pixels from the slices above and below causes a significant drop in both subjective image quality and PSNR. That is, there is a trade-off between the loss of compression efficiency of each pixel separation type and its error tolerance, and transmission image quality can therefore be improved by selecting an appropriate pixel separation type according to the error rate. Specifically, as shown in FIG. 31, when the error rate is low, horizontal left/right division or no pixel separation gives the highest received image quality, but as the error rate (that is, the need for error concealment) increases, received image quality becomes higher with horizontal even/odd pixel separation and then with checkerboard pixel separation. By appropriately selecting the pixel separation type for each predetermined unit such as a slice according to the transmission error rate, which changes from moment to moment in cycles shorter than one frame, transmission image quality can be improved over any fixed pixel separation type, as shown by the dotted line in FIG.
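The selection rule described above (left/right division or no separation at low error rates, even/odd separation at intermediate rates, checkerboard separation at high rates) can be sketched as a threshold selector. The thresholds here are entirely hypothetical; the patent fixes only the ordering:

```python
# Illustrative selector for the error-rate trade-off: compression efficiency
# decreases and concealment quality increases along this ordering.

def choose_separation(error_rate, low=1e-5, high=1e-3):
    """Pick a pixel separation type for the current error rate.
    The thresholds `low` and `high` are hypothetical."""
    if error_rate < low:
        return "left_right_or_none"   # best compression efficiency
    if error_rate < high:
        return "even_odd"             # intermediate trade-off
    return "checkerboard"             # best concealment quality
```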

  FIG. 33 shows the effect of the reduction conversion according to the present embodiment. When the input image is an RGB signal and the transmission rate is sufficiently high, image quality degradation can be avoided by compressing the RGB signal directly, avoiding the conversion loss of a color space conversion from RGB to YCbCr. On the other hand, since the RGB signal components are generally strongly correlated, converting to YCbCr before compression improves compression efficiency and reduces compression distortion compared with compressing RGB as it is. As the transmission rate decreases, the loss from compressing in RGB exceeds the conversion loss of the RGB-to-YCbCr conversion, so compression in YCbCr becomes effective. As the transmission rate decreases further, a trade-off between resolution and compression distortion arises, so it becomes effective to reduce the number of samples, first of the color difference signal and then of the luminance signal, before compression. YCbCr222 in FIG. 33 shows the case where each signal component of YCbCr444 is reduced to 1/2 in the horizontal direction before compression; YCbCr211 shows the case where each signal component of YCbCr422 is reduced to 1/2 in the horizontal direction before compression. By adaptively selecting the color space and the reduction ratio of the color difference or luminance signal according to the transmission rate, which changes from moment to moment in cycles shorter than one frame, transmission image quality can be improved over compression with a fixed color space and number of samples, as shown by the dotted line in FIG.

  Note that the present invention is not limited to the above-described embodiments as they are; in the implementation stage, the components can be modified and embodied without departing from the scope of the invention. Various inventions can be formed by appropriately combining the plurality of constituent elements disclosed in the above embodiments. For example, some components may be deleted from all the components shown in each embodiment, and components described in different embodiments may be combined as appropriate.

  For example, a program that realizes the processing of each embodiment described above can be provided by storing it in a computer-readable storage medium. The storage medium may be of any form capable of storing a program and readable by a computer, such as a magnetic disk, an optical disk (CD-ROM, CD-R, DVD, etc.), a magneto-optical disk (MO, etc.), or a semiconductor memory, and the storage format may be any form.

  Moreover, the program for realizing the processing of each of the above embodiments may be stored on a computer (server) connected to a network such as the Internet and downloaded to the computer (client) via the network.

DESCRIPTION OF SYMBOLS 11... Input image 12... Image input unit 13... Format conversion unit 14... Pixel separation unit 15... Compression unit 16... Packetization unit 17... Transmission unit 18, 21... Transmission data 22... Reception unit 23... Depacketization unit 24... Decompression unit 25... Pixel combination unit 26... Format conversion unit 27... Image output unit 28... Reproduced image 19, 29... Reduction rate information 20, 30... Pixel separation information 30a... Information on pixel separation type and pixel separation phase 30b... Reception error information 33... Decompressed image signal 34... Selection unit 35... Image buffer (phase 1) 36, 37... Interpolation unit 38... Image buffer (phase 2) 39... Selection unit 40... Combining unit 41... Image 100... Image transmission device 101... Dividing unit 102... Video processing unit 103... Output unit 104... Input image 105... Transmission data 201... First video processing unit 202... Second video processing unit 203... Third video processing unit 204... Fourth video processing unit 205... Predetermined unit image 206... Transmission speed 207... Transmission error rate 208... Video data 300... Image receiving device 301... Input unit 302... Video data extraction unit 303... Decompression unit 304... Reverse subsample unit 305... Display area determination unit 306... Display order determination unit 307... Output video 400... Image transmission device 401... Block division unit 402... Compression unit 403... Output unit 404... Image in a predetermined unit 405... Predetermined size 406... Transmission data 500... Image receiving device 501... Input unit 502... Separation unit 503... Decompression unit 504... Output image 600... Image transmission device 601... Block division unit 603... Output unit 604... Image in a predetermined unit 605... Predetermined size 606... Transmission data 610... Compression unit 611... First compression unit 612... Second compression unit 613... Third compression unit 700... Image receiving device 701... Input unit 702... Separation unit 704... Output image 710... Decompression unit 711... First decompression unit 712... Second decompression unit 713... Third decompression unit 800... Image transmission device 810... Compression unit 900... Image receiving device 910... Decompression unit

Claims (8)

  1. A dividing unit for dividing the input image frame into slice units;
    A packetizing unit for generating a packetized data signal, the packetized data signal including:
    (1) data of the divided input image frame;
    (2) color space information indicating which of RGB, YCbCr422 and YCbCr444 is the color space of the input image frame;
    (3) separation information indicating whether or not the slice is one of images obtained by separating a predetermined unit image of the input image frame into two partitions;
    (4) frame number information indicating a frame number to which the slice belongs;
    (5) display position coordinate information indicating a spatial position where the slice is to be displayed in the frame; and
    (6) partition number information indicating which of the two partitions the slice is;
    A transmitter for OFDM-modulating the packetized data signal and transmitting the data signal through a wireless transmission channel in a frequency range from 57 GHz to 66 GHz;
    An image transmitting apparatus comprising:
  2. A compression unit that compresses the divided input image frame to generate compressed image data;
    wherein the packetizing unit packetizes the compressed image data as the data of the divided input image frame, and the packetized data signal further includes:
    (7) horizontal sub-sampling rate information indicating a horizontal sub-sampling rate of each signal component of the predetermined unit image, the horizontal sub-sampling rate information indicating one of:
    (A) that no horizontal reduction is performed;
    (B) that only the color difference signal of each signal component has been reduced to 1/2 in the horizontal direction; and
    (C) that all signal components have been reduced to 1/2 in the horizontal direction;
    The image transmission device according to claim 1, further comprising:
  3. The packetizer further comprises:
    (8) The slice is
    (A) a first separation mode for separating the image of the predetermined unit into even-numbered pixels and odd-numbered pixels;
    (B) a second separation mode for separating the image of the predetermined unit into a first phase pixel and a second phase pixel of the checkerboard pattern; and
    (C) a third separation mode for separating the predetermined unit image into a left half image and a right half image;
    separation type information indicating which separation mode the slice is based on,
    packetized as the packetized data signal. The image transmission device according to claim 1 or 2.
  4. The packetizer further comprises:
    (9) frame size information of the input image frame, packetized as the packetized data signal. The image transmission device according to any one of claims 1 to 3.
  5. The packetizer further comprises:
    (8) The slice is
    (A) a first separation mode for separating the image of the predetermined unit into even-numbered pixels and odd-numbered pixels;
    (B) a second separation mode for separating the image of the predetermined unit into a first phase pixel and a second phase pixel of the checkerboard pattern; and
    (C) a third separation mode for separating the predetermined unit image into a left half image and a right half image;
    separation type information indicating which separation mode the slice is based on,
    Packetized as the packetized data signal,
    The dividing unit divides the input image frame in any of the first to third separation modes,
    The image transmission device according to claim 2, wherein the compression unit generates the compressed image data by performing independent compression for each divided area.
  6. A receiver that receives a plurality of packet data by OFDM demodulating a signal received on a wireless transmission channel within a frequency range of 57 GHz to 66 GHz;
    A depacketizing unit for depacketizing the plurality of packet data and extracting, from the depacketized packet data:
    (1) image data compressed in units of slices;
    (2) color space information indicating which of RGB, YCbCr422 and YCbCr444 is the color space of the frame of the image data;
    (3) separation information indicating whether or not the slice is one of images obtained by separating an image of a predetermined unit of the frame into two partitions;
    (4) frame number information indicating the number of the frame to which the slice belongs;
    (5) display position coordinate information indicating a spatial position where the slice is to be displayed in the frame; and
    (6) horizontal sub-sampling rate information indicating a horizontal sub-sampling rate of each signal component of the image of the predetermined unit, the horizontal sub-sampling rate information indicating one of (A) that no horizontal reduction is performed, (B) that only the color difference signal of each signal component has been reduced to 1/2 in the horizontal direction, and (C) that all of the signal components have been reduced to 1/2 in the horizontal direction;
    A reconstruction unit that acquires the information of (1) to (6), arranges an image obtained by decoding the image data at the position indicated by the display position coordinate information in the frame indicated by the frame number information, and upsamples the signal components of the image based on the horizontal sub-sampling rate information;
    An image receiving apparatus comprising:
  7. The depacketizing unit further depacketizes and extracts, from the depacketized packet data, (7) frame size information of the frame;
    the reconstruction unit further acquires the information of (7), decompresses each piece of the compressed image data according to the information of (1) to (7), and reconstructs the image of the predetermined unit.
    The image receiving device according to claim 6.
  8. The depacketizing unit further extracts, from the depacketized packet data, (8) separation type information indicating which of the following separation modes the slice is based on:
    (A) a separation mode for separating the image of the predetermined unit into even-numbered pixels and odd-numbered pixels;
    (B) a separation mode for separating the image of the predetermined unit into first phase pixels and second phase pixels of a checkerboard pattern; and
    (C) a separation mode for separating the image of the predetermined unit into a left half image and a right half image;
    The image receiving device according to claim 6, wherein the reconstruction unit arranges the image based on the separation type information.
JP2011503294A 2009-09-02 2009-09-28 Image transmitting apparatus and image receiving apparatus Active JP5726724B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JPPCT/JP2009/065356 2009-09-02
PCT/JP2009/065356 WO2011027440A1 (en) 2009-09-02 2009-09-02 Image compression device and image decompression device
JP2011503294A JP5726724B2 (en) 2009-09-02 2009-09-28 Image transmitting apparatus and image receiving apparatus
PCT/JP2009/066832 WO2011027479A1 (en) 2009-09-02 2009-09-28 Image transmitting device and image receiving device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011503294A JP5726724B2 (en) 2009-09-02 2009-09-28 Image transmitting apparatus and image receiving apparatus

Publications (2)

Publication Number Publication Date
JPWO2011027479A1 JPWO2011027479A1 (en) 2013-01-31
JP5726724B2 true JP5726724B2 (en) 2015-06-03

Family

ID=53437969

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011503294A Active JP5726724B2 (en) 2009-09-02 2009-09-28 Image transmitting apparatus and image receiving apparatus

Country Status (1)

Country Link
JP (1) JP5726724B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101860430B1 (en) * 2017-01-31 2018-05-23 (주)포앤비 Video transmission system and divided transmission of video frame and restoration method using the same

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01245686A (en) * 1988-03-28 1989-09-29 Canon Inc Picture receiver
JPH02312476A (en) * 1989-05-29 1990-12-27 Mitsubishi Electric Corp Picture coding transmission system
JPH03143086A (en) * 1989-10-27 1991-06-18 Sony Corp Decoder
JPH05304663A (en) * 1992-04-27 1993-11-16 Mitsubishi Electric Corp Picture encoding device
JPH0622288A (en) * 1992-04-07 1994-01-28 Sony Broadcast & Commun Ltd Apparatus and method for processing of image data
JPH07123132A (en) * 1993-08-31 1995-05-12 Canon Inc Method and device for communication
JPH07274164A (en) * 1994-03-29 1995-10-20 Sony Corp Image compression encoding device and image compression decoding device
JPH08307699A (en) * 1995-05-12 1996-11-22 Kokusai Electric Co Ltd Image processing method
JPH0937241A (en) * 1995-07-14 1997-02-07 Nec Eng Ltd High efficiency image coding transmission system, its encoder and decoder
JPH09233467A (en) * 1996-02-21 1997-09-05 Fujitsu Ltd Image data communication equipment and communication data amount control method for image data communication system
JPH10257492A (en) * 1997-03-12 1998-09-25 Matsushita Electric Ind Co Ltd Image coding method and image decoding method
JPH1118086A (en) * 1997-06-20 1999-01-22 Nippon Telegr & Teleph Corp <Ntt> Image communication method and system
JP2001160969A (en) * 1999-12-01 2001-06-12 Matsushita Electric Ind Co Ltd Moving picture encoding device, moving picture transmitter and moving picture recorder
JP2001224044A (en) * 2000-02-14 2001-08-17 Sony Corp Information processing unit and its method
JP2002027458A (en) * 2000-07-03 2002-01-25 Matsushita Electric Ind Co Ltd Method and system for encoding/decoding image
JP2003047024A (en) * 2001-05-14 2003-02-14 Nikon Corp Image compression apparatus and image compression program
JP2004040517A (en) * 2002-07-04 2004-02-05 Hitachi Ltd Portable terminal and image distribution system
JP2007151134A (en) * 2006-12-01 2007-06-14 Sharp Corp Image compression apparatus and image decompression apparatus, and computer-readable recording medium recorded with program for making computer run image compression method and image decompression method


Also Published As

Publication number Publication date
JPWO2011027479A1 (en) 2013-01-31

Similar Documents

Publication Publication Date Title
EP2651129B1 (en) Image processing device and image processing method
US9467696B2 (en) Dynamic streaming plural lattice video coding representations of video
KR101913993B1 (en) Multi-view signal codec
EP0394332B1 (en) SYMBOL CODE GENERATION PROCEEDING FROM INTERFRAME DPCM OF TDM&#39;d SPATIAL-FREQUENCY ANALYSES OF VIDEO SIGNALS
US8977048B2 (en) Method medium system encoding and/or decoding an image using image slices
US6459454B1 (en) Systems for adaptively deinterlacing video on a per pixel basis
JP2791822B2 (en) Digital processing method and apparatus for high definition television signal
US7010044B2 (en) Intra 4×4 modes 3, 7 and 8 availability determination intra estimation and compensation
US6952500B2 (en) Method and apparatus for visual lossless image syntactic encoding
CA2044118C (en) Adaptive motion compensation for digital television
CN101889447B (en) Extension of the AVC standard to encode high resolution digital still pictures in series with video
JP2012170122A (en) Method for encoding image, and image coder
ES2546091T3 (en) Processing of a video program that has plural processed representations of a single video signal for reconstruction and broadcast
US20160100194A1 (en) Method and apparatus for encoding image data using wavelet signatures
JP4773966B2 (en) Video compression method and apparatus
US8923403B2 (en) Dual-layer frame-compatible full-resolution stereoscopic 3D video delivery
EP2156668B1 (en) Method and apparatus for generating block-based stereoscopic image format and method and apparatus for reconstructing stereoscopic images from block-based stereoscopic image format
US8130836B2 (en) Multi-view stereo imaging system and compression/decompression method applied thereto
US7295614B1 (en) Methods and apparatus for encoding a video signal
KR100513211B1 (en) Signal transmission apparatus, method therefor and signal receiving apparatus
CN1098598C (en) Method and apparatus for providing compressed non-interlaced scanned video signal
KR100913088B1 (en) Method and apparatus for encoding/decoding video signal using prediction information of intra-mode macro blocks of base layer
JP4991699B2 (en) Scalable encoding and decoding methods for video signals
US9369759B2 (en) Method and system for progressive rate adaptation for uncompressed video communication in wireless systems
US8254458B2 (en) Moving picture encoding apparatus and method, moving picture decoding apparatus and method

Legal Events

Date Code Title Description
RD02 Notification of acceptance of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7422

Effective date: 20130730

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20130910

RD07 Notification of extinguishment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7427

Effective date: 20140319

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20140527

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20140724

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150303

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150401

R151 Written notification of patent or utility model registration

Ref document number: 5726724

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R151