WO2015118664A1 - Image transmission device, image reception device, and surveillance camera system, teleconference system, and vehicle-mounted camera system using same - Google Patents

Image transmission device, image reception device, and surveillance camera system, teleconference system, and vehicle-mounted camera system using same Download PDF

Info

Publication number
WO2015118664A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image data
unit
processing unit
compression
Prior art date
Application number
PCT/JP2014/052925
Other languages
French (fr)
Japanese (ja)
Inventor
Keisuke Inada (稲田 圭介)
Nobuaki Kabuto (甲 展明)
Original Assignee
Hitachi Maxell, Ltd. (日立マクセル株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Maxell, Ltd.
Priority to PCT/JP2014/052925 priority Critical patent/WO2015118664A1/en
Publication of WO2015118664A1 publication Critical patent/WO2015118664A1/en

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/2365Multiplexing of several video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/103Selection of coding mode or of prediction mode
    • H04N19/109Selection of coding mode or of prediction mode among a plurality of temporal predictive coding modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/115Selection of the code volume for a coding unit prior to coding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/17Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle

Definitions

  • the technical field of the present invention relates to video information transmission / reception technology and a system using the technology.
  • DisplayPort VESA registered trademark
  • HDMI High-Definition Multimedia Interface (HDMI Licensing, LLC registered trademark)
  • VESA Video Electronics Standards Association
  • Ethernet registered trademark
  • AVB Audio Video Bridging
  • Patent Document 1 describes a video transmission system and a video transmission control method comprising: a video transmission apparatus 2 that compression-encodes a video signal from a camera 1 and sends it out;
  • an operation terminal 3 that decodes the compression-encoded video signal received from the video transmission apparatus 2, displays it on a video display unit 11, and controls the camera 1;
  • the video transmission apparatus 2 including a low-delay-mode encoding unit 7 and a high-quality-mode encoding unit 8 that compression-encode the video signal from the camera 1, a mode selection unit 6, and a control signal processing unit 9 that receives a camera control signal from the operation terminal 3;
  • in the low-delay mode, the control signal processing unit 9 controls the mode selection unit 6 to select the low-delay-mode encoding unit 7 and controls the imaging direction or imaging range of the camera 1, thereby reducing the display delay time on the video display unit 11 (see FIG. 1).
  • In this prior art, the delay amount can be set according to the required delay by choosing between two types of compression: compression that achieves high image quality and compression that achieves low delay.
  • However, both methods share the problem that when compression is performed with low delay, the image quality deteriorates compared with the case where a larger delay is allowed.
  • The present invention has been made in view of the above prior art, and an object thereof is to provide an image transmission apparatus and an image reception apparatus capable of transmitting image data with high image quality and low delay, and a surveillance camera system, a video conference system, and an in-vehicle camera system using them.
  • To achieve the above object, an image transmission device and an image reception device described in the following claims are provided. More specifically, the image transmission device includes: an image input unit to which image data is input; a compression processing unit that generates compressed image data from the image input to the image input unit; and a data transfer unit that outputs the compressed image data generated by the compression processing unit to a transmission line. The compression processing unit generates a plurality of compressed image data having different compression delay times, and the data transfer unit multiplexes and outputs the plurality of compressed image data.
  • The image reception device includes: a data receiving unit that receives the plurality of compressed image data having different compression delay times and extracts each compressed image data; a decompression processing unit that decompresses the received compressed image data to generate a plurality of image data; an additional image generation unit that generates additional image data;
  • and an image superimposing unit that superimposes the plurality of image data. The image superimposing unit superimposes the additional image data on part of the image data generated by the decompression processing unit,
  • and outputs the result together with the other image data generated by the decompression processing unit.
  • Further, to achieve the above object, a surveillance camera system, a video conference system, and an in-vehicle camera system using the image transmission device and the image reception device are provided.
  • According to the present invention, an image transmission apparatus and an image receiving apparatus capable of transmitting image data with high image quality and low delay are provided, and further, using these, a practically excellent surveillance camera system, video conference system, and in-vehicle camera system are provided.
  • FIG. 1 is a block diagram showing an image transmission system according to the present embodiment. As is apparent from the figure, the image transmission system has a configuration in which an image transmission apparatus 1 and an image reception apparatus 2 are connected by a cable 3.
  • The image transmission device 1 compresses and transmits image data. More specifically, it receives decoded image data of a digital broadcast for viewing, or image data captured by a camera, and outputs the image data to another device via an HDMI cable or a LAN; it is a so-called image recording/playback device.
  • Examples of the image transmission apparatus 1 include a recorder, a digital TV with a built-in recorder function, a personal computer with a built-in recorder function, a mobile phone with a camera function and a recorder function, a camcorder, an in-vehicle camera, and the like.
  • The image receiving device 2 is a display device that receives image data via an HDMI cable, a LAN, or the like, and outputs an image to a monitor.
  • Examples of the image receiving device 2 include a digital TV, a display, a projector, a mobile phone, a signage device, an in-vehicle peripheral monitoring device, and the like.
  • The cable 3 is a data transmission path for communicating data such as image data between the image transmission device 1 and the image reception device 2.
  • Examples of the cable 3 include wired cables compliant with the HDMI standard, the DisplayPort standard, or Ethernet, as well as data transmission paths for wireless data communication.
  • the image input unit 10 is an input unit for inputting image data to the image transmission apparatus 1.
  • Examples of image data input to the image input unit 10 include digital image data from a surveillance camera or an in-vehicle camera.
  • the image processing unit 11 performs digital image processing on the image data from the image input unit 10 and outputs the image data after the digital image processing to the compression processing unit 12.
  • Examples of digital image processing include rotation, enlargement or reduction processing, frame rate conversion processing, edge extraction, motion vector extraction, high frequency component removal, noise removal, and the like.
  • the compression processing unit 12 performs compression processing on the image data from the image input unit 10 and the image data from the image processing unit 11 and outputs the compressed data to the data transfer unit 13.
  • the data transfer unit 13 converts the two types of compressed image data (compressed image data A and compressed image data B) compressed by the compression processing unit 12 into signals in a format suitable for cable transmission and outputs the signals to the cable 3.
  • An example of a signal in a format suitable for cable transmission is described in the HDMI standard.
  • In HDMI, image data is carried using the TMDS (Transition-Minimized Differential Signaling) data transmission format.
  • Another example of a signal in a format suitable for cable transmission is described in the IEEE P1722 standard used in in-vehicle Ethernet.
  • the compressed image data is transmitted according to AVTP (Audio Video Transport Protocol) Video Protocol.
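  • To illustrate the multiplexing idea — two compressed streams with different delay characteristics carried over one transmission line — the following is a minimal sketch. The header layout (a 1-byte stream ID, a 2-byte frame counter, a 2-byte payload length) is a hypothetical simplification, not the actual AVTP Video PDU format defined in IEEE P1722.

```python
import struct

def packetize(stream_id: int, frame_count: int, payload: bytes) -> bytes:
    # Hypothetical header: stream id (A or B), frame counter, payload length.
    # The frame counter lets the receiver pair packets of the two streams
    # that originate from the same input frame.
    return struct.pack(">BHH", stream_id, frame_count & 0xFFFF, len(payload)) + payload

def depacketize(packet: bytes):
    # Inverse of packetize: recover stream id, frame counter, and payload.
    stream_id, frame_count, length = struct.unpack(">BHH", packet[:5])
    return stream_id, frame_count, packet[5:5 + length]
```

The data receiving unit 20 would use the stream ID to route each payload to the appropriate decompression unit; the actual standard carries considerably more header information.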
  • the user IF unit 14 is an input unit for inputting a signal for controlling the operation of the image transmission apparatus 1.
  • An example of the user IF unit 14 is a remote control receiving unit.
  • the control signal from the user IF unit 14 is output to the control unit 15.
  • the control unit 15 controls the entire image transmission apparatus 1 in accordance with the signal from the user IF unit 14.
  • An example of the control unit 15 is a microprocessor. Image data from the image transmission apparatus 1 is supplied to the image reception apparatus 2 via the cable 3.
  • Data receiving unit 20 receives a signal in a format suitable for cable transmission.
  • The data receiving unit 20 converts the received signal from the cable-transmission format into predetermined digital data, extracts the two types of compressed image data (compressed image data A and compressed image data B), and outputs them to the decompression processing unit 21.
  • The decompression processing unit 21 reverses the compression performed by the compression processing unit 12 in the image transmission apparatus 1 to generate two types of image data (decoded image data A and decoded image data B). The decoded image data A is output to the image superimposing unit 23, and the decoded image data B is output to the additional image generating unit 22.
  • the additional image generation unit 22 extracts image features from the input image data B, generates additional image data based on the result, and outputs the additional image data to the image superimposing unit 23.
  • Examples of image feature extraction include object detection, moving object detection, state detection, edge extraction, and extraction of pixels having luminance within a predetermined threshold range.
  • Examples of object detection in an in-vehicle periphery monitoring system include vehicle detection, detection of objects darting out into the road, lane detection, road surface detection, and detection of objects such as traffic lights.
  • Another example of object detection is suspicious-person detection in a monitoring system.
  • Examples of state detection include red-signal detection, road surface condition detection, weather detection, and wrong-way driving detection.
  • the additional image generation unit 22 may output the input image data B as it is or may output image feature extraction information.
  • The image superimposing unit 23 generates image data in which the two types of image data input from the decompression processing unit 21 and the additional image generating unit 22 are superimposed, and outputs the generated image data to the display unit 24.
  • The image superimposing unit 23 includes a memory unit and a memory control unit for temporarily storing images. During the period until the decoded image data A is input to the image superimposing unit 23, the additional image data is temporarily stored in the memory unit; at the timing of the superimposition processing, the additional image data is read from the memory unit and superimposed on the decoded image data A. Further, the image superimposing unit 23 may perform image quality enhancement of the decoded image data A using the decoded image data B or the additional image data. As an example of such enhancement, edge enhancement of the decoded image data A can be performed using the decoded image data B or additional image data that includes edge detection information.
  • the display unit 24 converts the input image data into a signal suitable for the display method and displays it on the screen.
  • Examples of the display unit 24 include a display unit such as a liquid crystal display, a plasma display, an organic EL (Electro-Luminescence) display, and a projector projection display.
  • the user IF unit 25 is an input unit for inputting a signal for controlling the operation of the image receiving device 2.
  • An example of the user IF unit 25 is a remote control receiving unit.
  • a control signal from the user IF unit 25 is supplied to the control unit 26.
  • the control unit 26 is a control unit that controls the entire image receiving apparatus 2 in accordance with a signal from the user IF unit 25.
  • FIG. 2 is a diagram showing an effective area in which image data in one frame period is transmitted and a blanking period in which image data is not transmitted.
  • a region indicated by reference numeral 400 is a vertical period, and the vertical period 400 includes a vertical blanking period 401 and a vertical effective period 402.
  • The VSYNC signal is a 1-bit signal that is 1 during a prescribed number of lines from the start of the vertical blanking period 401, and 0 during the rest of the vertical blanking period and the vertical effective period 402.
  • An example of the prescribed number of lines is 4 lines.
  • An area indicated by a reference numeral 403 is a horizontal period, and the horizontal period 403 includes a horizontal blanking period 404 and a horizontal effective period 405.
  • The HSYNC signal is a 1-bit signal that is 1 during a prescribed number of pixels from the start of the horizontal blanking period 404, and 0 during the rest of the horizontal blanking period and the horizontal effective period 405.
  • An example of the prescribed number of pixels is 40 pixels.
  • the effective period 406 is an area surrounded by a vertical effective period 402 and a horizontal effective period 405, and image data is allocated to this period.
  • the blanking period 407 is an area surrounded by a vertical blanking period 401 and a horizontal blanking period 404.
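  • The sync-signal and period definitions above can be sketched as follows. This is a simplified illustration only: it assumes the blanking period precedes the effective period in each dimension (as the figure description suggests), and the parameter names are not taken from the patent.

```python
def make_sync_signals(total_lines, blank_lines, total_px, blank_px,
                      vsync_lines=4, hsync_px=40):
    # VSYNC: 1 for the prescribed number of lines (example: 4) at the start
    # of the vertical blanking period 401, 0 elsewhere; one value per line.
    vsync = [1 if line < vsync_lines else 0 for line in range(total_lines)]
    # HSYNC: 1 for the prescribed number of pixels (example: 40) at the start
    # of the horizontal blanking period 404, 0 elsewhere; one value per pixel.
    hsync = [1 if px < hsync_px else 0 for px in range(total_px)]
    # A position carries image data only inside the effective period 406,
    # i.e. inside both the vertical and the horizontal effective periods.
    active = [[line >= blank_lines and px >= blank_px
               for px in range(total_px)] for line in range(total_lines)]
    return vsync, hsync, active
```

The count of `True` entries in `active` equals (vertical effective lines) x (horizontal effective pixels), matching the area of the effective period 406.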
  • the compressed image data and the sub-compression code information are transmitted during the effective period 406, and the main compression code information is transmitted during the blanking period 407.
  • the blanking period 407 data obtained by packetizing audio data and other attached data is transmitted.
  • a method of sending a packet of voice data or the like as a reliable packet in the blanking period 407 is disclosed in, for example, Japanese translations of PCT publication No. 2005-514873.
  • Since an error correction code is included in the packet data in the blanking period, errors occurring on the transmission path can be corrected, increasing error tolerance. Further, the packet data in the blanking period is transmitted on two physically different channels, and the transmitting channel is switched at fixed intervals. An error occurring in a burst on one channel therefore does not affect the other channel, so data errors can be corrected.
  • As a result, the corrected error rate improves to on the order of 10^-14 in the horizontal blanking period, versus 10^-9 in the horizontal effective period.
  • FIG. 3 is a diagram illustrating a detailed example of the image processing unit 11 described above.
  • the frame thinning unit 110 is a block for thinning out frames of input image data supplied from the image input unit 10.
  • For example, 15 fps image data is output by thinning out every other frame of input image data supplied at 30 fps (frames per second).
  • Alternatively, the input image data may be output at its original frame rate without thinning.
  • the image feature extraction image generation unit 112 extracts features of the input image from the input image data, and generates image data obtained by simplifying or omitting image data other than the features.
  • Examples of feature extraction processing for image data include first-order derivative filters (edge detection, line detection) and second-order derivative filters (e.g., the Laplacian filter).
  • An example of edge detection is a Sobel filter.
  • Although the frame thinning unit 110, the reduction unit 111, and the image feature extraction image generation unit 112 have been described as components of the image processing unit 11, not all of these components need be included.
  • the image processing unit 11 may be configured by only the reduction unit 111 or may be configured by only the image feature extraction image generation unit 112.
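  • The three optional stages of the image processing unit 11 can be sketched in a few lines. This is a minimal pure-Python illustration on grayscale frames represented as nested lists; the function names and parameters are assumptions for the example, and the Sobel kernels follow the standard definition mentioned as an example of edge detection above.

```python
def thin_frames(frames, keep_every=2):
    # Frame thinning unit 110: e.g. 30 fps -> 15 fps by keeping every other frame.
    return frames[::keep_every]

def reduce_frame(frame, factor=2):
    # Reduction unit 111: naive subsampling in both dimensions.
    return [row[::factor] for row in frame[::factor]]

def sobel_edges(frame, threshold=128):
    # Image feature extraction image generation unit 112: Sobel edge detection,
    # keeping only edge pixels and discarding (zeroing) everything else.
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * frame[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = 255 if abs(gx) + abs(gy) >= threshold else 0
    return out
```

Removing everything but the edges in this way is what allows the low-delay stream to compress to a much smaller code amount.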
  • FIG. 4 is a diagram illustrating a detailed example of the compression processing unit 12.
  • a compression unit A that generates compressed image data A used for viewing and a compression unit B that generates compressed image data B used for recognition are provided.
  • the compression processing delay in the compression unit B is short.
  • the compression unit A performs high-quality compression processing on the image data supplied from the image input unit 10.
  • An example is compression processing using H.264 B-pictures.
  • A B-picture is compressed using both past and future frames. Because a future frame is used, a delay of one frame time or more occurs during compression, so this method is not suited to low-delay transmission; however, a high-quality decoded image is obtained, making the method suitable for viewing purposes.
  • the compression unit B performs low-delay compression processing on the image data supplied from the image processing unit 11.
  • An example is compression processing using H.264 P-pictures.
  • A P-picture is compressed using only past frames. Since no future frame is used, the compression delay associated with waiting for a future frame does not occur, making this method suitable for recognition applications such as object detection.
  • The image data supplied from the image processing unit 11 has already undergone digital image processing that reduces the amount of image information, so the amount of code generated after compression can be greatly reduced. In other words, the code amount saved on the feature-extracted image data can be assigned to the compression unit A, which can then compress with higher image quality.
  • the memory control unit 122 performs control for temporarily storing the compressed image data B supplied from the compression unit B in the memory unit 123.
  • As an example of this control, the compressed image data B processed during the effective period 406 is temporarily stored in the memory unit 123 and output during the following blanking period 407.
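  • The buffering role of the memory control unit 122 and memory unit 123 can be sketched as follows. This is a hedged illustration of the scheduling idea only — compressed image data B produced during the effective period is held back and drained during the blanking period, leaving the effective period free for compressed image data A; the class and method names are not from the patent.

```python
from collections import deque

class BlankingScheduler:
    """Illustrative stand-in for memory control unit 122 + memory unit 123."""

    def __init__(self):
        self.buffer = deque()  # plays the role of memory unit 123

    def on_effective_period(self, compressed_b: bytes):
        # During the effective period 406: store compressed image data B,
        # do not transmit it yet.
        self.buffer.append(compressed_b)

    def on_blanking_period(self):
        # During the blanking period 407: drain everything buffered so far
        # onto the transmission line.
        sent = list(self.buffer)
        self.buffer.clear()
        return sent
```
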
  • FIGS. 5A to 5D show examples of input image data and output image data of the additional image generating unit 22 and the image superimposing unit 23.
  • An image 50 shown in FIG. 5A shows an example in which image data (decoded image of the compressed image data A) supplied from the decompression processing unit 21 to the image superimposing unit 23 is displayed as an image. That is, in the image 50, two vehicles, a traveling vehicle 501 and a traveling vehicle 502, are drawn on the traveling road surface 500.
  • The traveling vehicle 501 is a vehicle that has cut into the host vehicle's lane from the side, and the traveling vehicle 502 is a vehicle far ahead of the host vehicle.
  • a background 503 is also drawn.
  • An image 51 shown in FIG. 5B shows an example in which image data (decoded image of the compressed image data B) supplied from the decompression processing unit 21 to the additional image generation unit 22 is displayed as an image. That is, the compressed image data B is image data after edge detection by the image processing unit 11, and image information other than the edge information is simplified or removed.
  • a drawing 511 is an image after the edge of the traveling vehicle 501 is extracted.
  • a drawing 512 is an image after edge extraction of the traveling vehicle 502.
  • An image 52 shown in FIG. 5C is an example in which the image data supplied from the additional image generating unit 22 to the image superimposing unit 23 is displayed as an image.
  • The additional image generation unit 22 generates, based on the input image 51, a rectangular figure surrounding each detected vehicle.
  • The color of the rectangular figure may be changed according to the distance and direction from the host vehicle. The figure may also change over time, for example by blinking.
  • A rectangular figure 521 surrounds the traveling vehicle 501.
  • A rectangular figure 522 surrounds the traveling vehicle 502.
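  • The additional-image step — drawing a hollow rectangle around each detected vehicle, colored by distance — can be sketched like this. The detection itself is out of scope here; the bounding-box tuple layout, the distance threshold, and the color choices are all assumptions for the example.

```python
def draw_rectangles(width, height, boxes):
    """boxes: list of (x0, y0, x1, y1, distance_m). Returns an RGB frame
    (nested lists) that is transparent (None) except for rectangle outlines,
    ready to be superimposed on the decoded viewing image."""
    frame = [[None] * width for _ in range(height)]
    for x0, y0, x1, y1, dist in boxes:
        # Color by distance from the host vehicle: near vehicles in red,
        # far vehicles in yellow (illustrative choice).
        color = (255, 0, 0) if dist < 10 else (255, 255, 0)
        for x in range(x0, x1 + 1):        # top and bottom edges
            frame[y0][x] = frame[y1][x] = color
        for y in range(y0, y1 + 1):        # left and right edges
            frame[y][x0] = frame[y][x1] = color
    return frame
```

The image superimposing unit 23 would then overwrite the decoded image pixels wherever this frame is not transparent.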
  • An image 53 shown in FIG. 5D is an example in which image data supplied from the image superimposing unit 23 to the display unit 24 is displayed as an image.
  • FIG. 6 is a diagram illustrating a detailed example of the decompression processing unit 21.
  • The decompression unit A 210 performs decompression processing on the compressed image data A to generate decoded image data A.
  • The decompression unit B 211 performs decompression processing on the compressed image data B to generate decoded image data B.
  • FIG. 7 is a diagram showing an example of an image input and output timing chart in the present embodiment.
  • a signal 600 indicates a vertical blanking signal for the input images 604 and 605 of the image transmission apparatus 1.
  • a signal 610 indicates a vertical blanking signal for the output images 614, 615, and 617 of the image transmission apparatus 1.
  • a signal 620 indicates a vertical blanking signal for the input images 624, 625, and 627 of the image superimposing unit 23 of the image receiving device 2.
  • a signal 630 indicates a vertical blanking signal for the output image 631 of the image superimposing unit 23 of the image receiving device 2.
  • the vertical blanking signals 600, 610, 620, and 630 are composed of an effective period 601 and a blanking period 602.
  • the output image 614 is compressed image data B obtained by compressing the input image 604 by the compression unit B.
  • the output image 617 is compressed image data A obtained by compressing the input image 604 by the compression unit A.
  • the compression delay time 618 for the compressed image data B is shorter than the compression delay time 619 for the compressed image data A.
  • This shows an example in which the compression unit A performs compression processing using a future frame one frame ahead.
  • the input image 624 is decoded image data B obtained by decompressing the compressed image data 614 by the decompression unit B.
  • the image data 626 is an additional image generated after the recognition processing time 628 for the compressed image data 624 in the additional image generation unit 22.
  • The input image 627 is decoded image data A obtained by decompressing the compressed image data 617 by the decompression unit A.
  • the output image 631 is image data after the image data 626 and the input image 627 are superimposed.
  • FIG. 8 is a diagram showing another example of an image input and output timing diagram in the present embodiment.
  • Signals having the same reference numerals as in FIG. 7 are the same as those in FIG. 7.
  • This example is characterized in that the compressed image data B 614, compressed with low delay, is transmitted in both the effective period and the blanking period, making the compressed image data B redundant.
  • In the additional image generation unit 22 of the image receiving device 2, when there is an abnormality in the decoded image data B 624, transmission error resistance is improved by using the decoded image data B 726 instead.
  • FIG. 9 is a diagram showing another example of an image input and output timing diagram in the present embodiment.
  • This example is characterized in that, in the image transmission apparatus 1 shown in FIG. 1, transmission of the compressed image data B (see reference numeral 614 in FIG. 7) during the effective period is eliminated. With this method, only the compressed image data A is transmitted as compressed image data during the effective period, so the image quality of the decoded image data A can be further improved.
  • FIG. 10 is a diagram showing still another example of the image input and output timing chart in the present embodiment.
  • This is an example in which the compression unit A (see reference numeral 120 in FIG. 4) of the image transmission apparatus 1 can perform compression processing using frames up to two frames ahead. With this method, the image quality of the decoded image data A can be further improved.
  • the compression unit A may be configured to perform compression processing using frames up to N frames ahead. Also, in FIGS. 7, 8, and 9, the compression processing may be performed using frames up to N frames ahead.
  • FIG. 11 shows an example of a timing chart for each frame of the input image data, the compressed image data A, the compressed image data B, the additional image, and the finally displayed display image in the present embodiment.
  • Also shown is the APVF Header of the APVF Video PDU format defined by the AVTP Video Protocol in the IEEE P1722 standard.
  • Frames 1402 and 1404 are compressed image data for the input image data 1401.
  • a person 1409 exists in the frame 1401.
  • the compressed image data A (frame 1402) is compressed with a small delay.
  • The compressed image data B (frame 1404) is compressed with less compression deterioration than the compressed image data A, by using more time and a wider range of frame information. As a result, the compressed image data B achieves higher image quality than the compressed image data A.
  • The additional image 1406 is a frame-shaped image drawn around the person, obtained by performing a decompression process and a person recognition process on the frame 1402.
  • the frame 1405 is an image displayed on the monitor.
  • Reference numeral 1407 denotes an image obtained by superimposing the decoded image (person) obtained by decompressing the compressed image data B (frame 1404) and the additional image 1406 (frame image).
  • compressed image data A and compressed image data B are generated from the input image and output from the image transmission apparatus.
  • the compressed image data A and the compressed image data B are decompressed with a time difference of one frame or more.
  • The image receiving apparatus generates an additional image 1406 based on the compressed image data B, and displays the additional image 1406 by superimposing it on the decoded image obtained by decompressing the compressed image data B.
  • The frcount values of the compressed image data A (frame 1402) and the compressed image data B (frame 1404) generated based on the same frame (frame 1401) of the input image are the same.
  • Thus, the image receiving apparatus can identify the compressed image data A (frame 1402) and the compressed image data B (frame 1404) that correspond to the same frame (frame 1401) of the input image.
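The pairing just described can be sketched as follows, under the assumption that each received PDU exposes its frame-count field (here called `frcount`, as in the AVTP Video PDU header); the dictionary-based matching is illustrative, not the patent's implementation.

```python
def pair_by_frcount(stream_a, stream_b):
    """Pair compressed-A and compressed-B PDUs that were generated
    from the same input frame, using their shared frcount value."""
    b_by_count = {pdu["frcount"]: pdu for pdu in stream_b}
    pairs = []
    for pdu_a in stream_a:
        pdu_b = b_by_count.get(pdu_a["frcount"])
        if pdu_b is not None:  # B for this frame has already arrived
            pairs.append((pdu_a, pdu_b))
    return pairs

stream_a = [{"frcount": 1, "data": "A1"}, {"frcount": 2, "data": "A2"}]
stream_b = [{"frcount": 1, "data": "B1"}]  # B arrives a frame or more later
pairs = pair_by_frcount(stream_a, stream_b)
# only frcount 1 can be paired so far; A2 waits for its B counterpart
```

Because the two streams are decompressed with a time difference of one frame or more, unmatched A frames simply remain unpaired until the corresponding B frame arrives.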
  • the AVTP Video Protocol has been described as an example.
  • the present invention is not limited to this example.
  • For example, the frame count information may be stored in the user data area of an MPEG2-TS stream or of H.264/AVC NAL units.
  • As described above, the image data transmitted by the image transmission apparatus is compressed and transmitted by two types of compression methods having different delay times, which makes it possible to achieve both high image quality and high recognition performance. Furthermore, by making the compressed image data that is compressed with a low delay redundant, transmission error tolerance can be improved.
  • FIG. 12 shows an example of a surveillance camera system, FIG. 13 an example of a video conference system, and FIG. 14 an example of an in-vehicle camera system.
  • In FIG. 12, reference numerals 1101, 1102, and 1103 denote monitoring cameras installed at points A, B, and C, respectively; 1104 denotes a monitoring center that receives the images captured by the monitoring cameras 1101 to 1103; and 1105 denotes a wide area network (WAN) such as an Internet line. Images captured by the monitoring cameras 1101 to 1103 can be displayed on a monitor or the like in the monitoring center 1104 via the WAN 1105.
  • Although FIG. 12 shows an example in which there are three surveillance cameras, the number of surveillance cameras may be two or fewer, or four or more.
  • The image transmission apparatus is mounted on, for example, the monitoring cameras 1101 to 1103. That is, the image transmission apparatus performs the compression processing described later on the input image captured through the lenses of the monitoring cameras 1101 to 1103, and outputs the compressed input image to the WAN 1105.
  • the image receiving apparatus of the present embodiment is mounted on the monitoring center 1104, for example.
  • The monitoring center 1104 includes a display monitor. That is, the image receiving apparatus performs the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image, described later, on the compressed input image, and displays the superimposed image on the display monitor.
  • the image displayed on the monitor may be an image after decompression processing or an additional image.
  • reference numerals 1201, 1202, and 1203 are video conference apparatuses installed at points A, B, and C, respectively, and 1204 is a WAN such as an Internet line.
  • images captured by the cameras of the video conference apparatuses 1201 to 1203 can be displayed on the monitors of the video conference apparatuses 1201 to 1203 via the WAN 1204.
  • Although FIG. 13 shows an example in which there are three video conference apparatuses, the number of video conference apparatuses may be two, or four or more.
  • The image transmission apparatus is mounted on each camera of the video conference apparatuses 1201 to 1203, for example. That is, the image transmission apparatus performs the compression processing described later on the input image captured through the camera lens of each of the video conference apparatuses 1201 to 1203, and outputs the compressed input image to the WAN 1204.
  • The image receiving apparatus according to the present embodiment is also installed in each of the video conference apparatuses 1201, 1202, and 1203, for example. Each of the video conference apparatuses 1201, 1202, and 1203 may include a display monitor itself, or the monitor may be externally attached. The image receiving apparatus performs the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image, described later, on the compressed input image, and displays the superimposed image on the display monitor.
  • the image displayed on the monitor may be an image after expansion processing or an additional image.
  • In FIG. 14, reference numeral 1301 denotes an automobile; 1302 and 1303 denote in-vehicle cameras mounted on the automobile 1301; 1306 denotes an ECU (Electronic Control Unit) that decompresses the compressed image data compressed by the in-vehicle cameras; 1304 denotes a monitor that displays the images captured by the in-vehicle cameras 1302 and 1303, the video decompressed by the ECU 1306, an OSD (On Screen Display) reflecting the recognition processing results, and the like; and 1305 denotes a local area network (LAN) in the automobile 1301.
  • Images captured by the in-vehicle cameras 1302 and 1303 are sent via the LAN 1305 and can be displayed on the monitor 1304 after being subjected to image processing such as decompression processing, recognition processing, and OSD addition by the ECU 1306.
  • Although FIG. 14 shows an example in which two in-vehicle cameras are mounted, the number of in-vehicle cameras may be one, or three or more.
  • The image transmission apparatus of the present embodiment is mounted on, for example, the in-vehicle cameras 1302 and 1303. That is, the image transmission apparatus performs the compression processing described later on the input image captured through the lenses of the in-vehicle cameras 1302 and 1303, and outputs the compressed input image to the LAN 1305.
  • the image receiving apparatus of the present embodiment is mounted on the ECU 1306, for example. Note that the image receiving apparatus may have both functions of the ECU 1306 and the monitor 1304.
  • The image receiving apparatus performs the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image, described later, on the compressed input image, and outputs the processed image to the monitor 1304.
  • The present invention is not limited to the above-described embodiments, and various modifications are included.
  • The above-described embodiments are described in detail in order to explain the present invention in an easy-to-understand manner, and the present invention is not necessarily limited to embodiments having all of the described configurations.
  • a part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
  • Each of the above-described configurations, functions, processing units, processing means, and the like may be realized in hardware by designing part or all of them as, for example, an integrated circuit.
  • Each of the above-described configurations, functions, and the like may be realized in software by a processor interpreting and executing a program that realizes each function.
  • Information such as programs, tables, and files that realize each function can be stored in a memory, a hard disk, a recording device such as an SSD (Solid State Drive), or a recording medium such as an IC card, an SD card, or a DVD.
  • The control lines and information lines shown are those considered necessary for the explanation, and not all control lines and information lines in the product are necessarily shown. In practice, almost all components may be considered to be connected to one another.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

Provided are an image transmission device and an image reception device that can be used in a system that compresses and transmits image data, and that achieve both high image quality and high recognition performance. The image transmission device comprises: an image input unit into which image data is input; a compression processing unit that generates compressed image data from the image input into the image input unit; and a data transfer unit that outputs the compressed image data generated by the compression processing unit to a transmission path. The compression processing unit generates a plurality of pieces of compressed image data that differ in compression delay time, and the data transfer unit multiplexes and outputs the plurality of pieces of compressed image data that differ in compression delay time generated by the compression processing unit. The image reception device comprises: a data reception unit that receives the multiplexed plurality of pieces of compressed image data differing in compression delay time and extracts each piece of compressed image data; a decompression processing unit that decompresses each of the plurality of pieces of compressed image data differing in compression delay time received by the data reception unit and generates a plurality of pieces of image data; an additional image generation unit that generates additional image data; and an image superimposing unit that superimposes a plurality of pieces of image data. The image superimposing unit superimposes, on part of the image data generated by the decompression processing unit, the additional image data or other image data generated by the decompression processing unit, and outputs the result.

Description

Image transmission device and image reception device, and surveillance camera system, video conference system, and in-vehicle camera system using them
The technical field of the present invention relates to video information transmission/reception technology and systems using this technology.
In recent years, in various fields of digital image transmission and reception, demands for higher recognition performance, lower delay, and higher image quality have been increasing year by year.
Conventional methods of transmitting image data between devices include the HDMI (High-Definition Multimedia Interface; registered trademark of HDMI Licensing, LLC) standard and the DisplayPort (registered trademark or trademark of VESA) standard established by the Video Electronics Standards Association (VESA). As an in-vehicle network, Ethernet (registered trademark) AVB (Audio Video Bridge) has been standardized.
Regarding digital image transmission/reception systems that realize the aforementioned low delay and high image quality, Patent Document 1 below describes "a video transmission system and a video transmission control method including a video transmission device 2 that compression-encodes and sends out the video signal from a camera 1, and an operation terminal 3 that decodes the compression-encoded video signal received from the video transmission device 2, displays it on a video display unit 11, and controls the camera 1, in which the video transmission device 2 includes a low-delay-mode encoding unit 7 that compression-encodes the video signal from the camera 1, a high-quality-mode encoding unit 8, a mode selection unit 6, and a control signal processing unit 9 that, when a camera control signal is received from the operation terminal 3, controls the mode selection unit 6 to select the low-delay-mode encoding unit 7, so that when the imaging direction or imaging range of the camera 1 is controlled, the image is displayed on the video display unit 11 of the operation terminal with a reduced delay time as the low-delay mode" (see FIG. 1 of that document).
Patent Document 2 below describes "setting a delay time of the decoding process based on information on decoding processing capability, encoding the coefficient data rearranged by the rearranging means, and transmitting the encoded data generated by the encoding means encoding the coefficient data to the information processing apparatus on the receiving side via a transmission path" (see [0016] of that document).
Patent Document 1: JP 2003-284051 A. Patent Document 2: JP 2009-213116 A.
Conventionally, particularly in the transmission of digital image data, there has been the problem that communication systems using images, surveillance systems, in-vehicle surroundings monitoring systems, and the like require low-delay transmission of digital images.
To solve this problem, methods have been proposed that switch between two types of compression, namely compression that achieves high image quality and compression that achieves low delay, or that allow the delay amount to be set according to the required delay. However, both approaches have the problem that compression with a low delay degrades image quality compared with compression with a larger delay.
For example, consider a system in which the transmitted image data is used for recognition processing on the receiving side and is also viewed by a user. In such a recognition processing system, the time from the occurrence of an event to feedback to the user must be brought close to zero (0); in that case, however, accurate recognition processing cannot be performed because of the degraded image quality. Conversely, when high-quality viewing by the user is to be realized, low-delay transmission is not possible.
That is, the above-described prior art has not always been sufficient for transmitting image data with both high image quality and low delay.
The present invention has been achieved in view of the above-described prior art, and an object thereof is to provide an image transmission apparatus and an image transmission method capable of transmitting image data with high image quality and low delay, as well as a surveillance camera system, a video conference system, and an in-vehicle camera system using them.
According to the present invention, in order to achieve the above-described object, an image transmission device and an image reception device as described in the following claims are provided. More specifically, there is provided an image transmission device comprising: an image input unit to which image data is input; a compression processing unit that generates compressed image data from the image input to the image input unit; and a data transfer unit that outputs the compressed image data generated by the compression processing unit to a transmission path, wherein the compression processing unit generates a plurality of pieces of compressed image data having different compression delay times, and the data transfer unit multiplexes and outputs the plurality of pieces of compressed image data having different compression delay times generated by the compression processing unit. There is also provided an image reception device comprising: a data reception unit that receives the multiplexed plurality of pieces of compressed image data having different compression delay times and extracts each piece of compressed image data; a decompression processing unit that decompresses each of the plurality of pieces of compressed image data having different compression delay times received by the data reception unit and generates a plurality of pieces of image data; an additional image generation unit that generates additional image data; and an image superimposing unit that superimposes a plurality of pieces of image data, wherein the image superimposing unit superimposes, on part of the image data generated by the decompression processing unit, the additional image data or other image data generated by the decompression processing unit, and outputs the result.
In addition, according to the present invention, in order to achieve the above-described object, a surveillance camera system, a video conference system, and an in-vehicle camera system are provided as systems using the image transmission device and the image reception device.
According to the present invention described above, there are provided an image transmission apparatus and an image reception apparatus capable of transmitting image data with high image quality and low delay, and furthermore, a practically excellent surveillance camera system, video conference system, and in-vehicle camera system using them.
FIG. 1 is a block diagram showing an example of the image transmission apparatus and the image reception apparatus in the first embodiment of the present invention.
FIG. 2 is a diagram showing an example of the effective/blanking periods of image data in the embodiment.
FIG. 3 is a block diagram showing an example of the detailed configuration of the image processing unit in the embodiment.
FIG. 4 is a block diagram showing an example of the detailed configuration of the compression processing unit in the embodiment.
FIG. 5 is a diagram showing an example of the input image and the output images of the image processing unit, the additional image generation unit, and the image superimposing unit in the embodiment.
FIG. 6 is a block diagram showing an example of the detailed configuration of the decompression processing unit in the embodiment.
FIG. 7 is a timing chart showing an example of image input and output in the embodiment.
FIG. 8 is a timing chart showing another example of image input and output in the embodiment.
FIG. 9 is a timing chart showing still another example of image input and output in the embodiment.
FIG. 10 is a timing chart showing still another example of image input and output in the embodiment.
FIG. 11 is a timing chart showing an example of the per-frame timing of the display image finally displayed in the embodiment.
FIG. 12 is an overall block diagram showing an example of the configuration of a surveillance camera system employing the image transmission apparatus and image reception apparatus of the present invention.
FIG. 13 is an overall block diagram showing an example of the configuration of a video conference system employing the image transmission apparatus and image reception apparatus of the present invention.
FIG. 14 is an overall block diagram showing an example of the configuration of an in-vehicle camera system employing the image transmission apparatus and image reception apparatus of the present invention.
Hereinafter, details of some embodiments of the present invention will be described with reference to the accompanying drawings.
First, an image transmission apparatus and an image reception apparatus according to a first embodiment (Embodiment 1) of the present invention will be described.
FIG. 1 is a block diagram showing the image transmission system of the present embodiment. As is apparent from the figure, the image transmission system has a configuration in which an image transmission apparatus 1 and an image reception apparatus 2 are connected by a cable 3.
The image transmission apparatus 1 is a device that compresses and transmits image data. More specifically, it is a so-called image recording/playback device that outputs, via an HDMI cable, a LAN, or the like, image data decoded from a received digital broadcast for viewing, or image data captured by a camera or the like, to another device. Examples of the image transmission apparatus 1 include a recorder, a digital TV with a built-in recorder function, a personal computer with a built-in recorder function, a mobile phone equipped with a camera function and a recorder function, a camcorder, and an in-vehicle camera.
The image reception apparatus 2 is a display device that receives image data via an HDMI cable, a LAN, or the like, and outputs images to a monitor. Examples of the image reception apparatus 2 include a digital TV, a display, a projector, a mobile phone, a signage device, and an in-vehicle surroundings monitoring device.
The cable 3 is a data transmission path for performing data communication, such as image data communication, between the image transmission apparatus 1 and the image reception apparatus 2. Examples of the cable 3 include wired cables conforming to the HDMI standard, the DisplayPort standard, or Ethernet, as well as data transmission paths that perform wireless data communication.
Next, the configuration of the image transmission apparatus 1 will be described in detail.
The image input unit 10 is an input unit for inputting image data into the image transmission apparatus 1. Examples of the image data input to the input unit 10 include digital image data input from a surveillance camera or an in-vehicle camera.
The image processing unit 11 performs digital image processing on the image data from the image input unit 10 and outputs the processed image data to the compression processing unit 12. Examples of digital image processing include rotation, enlargement or reduction, frame rate conversion, edge extraction, motion vector extraction, high-frequency component removal, and noise removal.
The compression processing unit 12 compresses the image data from the image input unit 10 and the image data from the image processing unit 11, and outputs the result to the data transfer unit 13.
The data transfer unit 13 converts the two types of compressed image data compressed by the compression processing unit 12 (compressed image data A and compressed image data B) into signals in a format suitable for cable transmission and outputs them to the cable 3. One example of a signal format suitable for cable transmission is described in the HDMI standard; in HDMI, image data is transmitted using the TMDS data transmission format. Another example of a signal format suitable for cable transmission is described in the IEEE P1722 standard used for in-vehicle Ethernet; in the P1722 standard, compressed image data is transmitted according to the AVTP (Audio Video Transport Protocol) Video Protocol.
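The multiplexing of the two compressed streams can be pictured with the following minimal sketch. The packet structure is hypothetical and far simpler than real TMDS or AVTP framing; it only illustrates that each transmitted unit is tagged with its stream so the receiver can separate A from B again.

```python
def multiplex(packets_a, packets_b):
    """Interleave packets of compressed image data A and B into one
    transmission sequence, tagging each with its stream id."""
    muxed = []
    for pa, pb in zip(packets_a, packets_b):
        muxed.append({"stream": "A", "payload": pa})
        muxed.append({"stream": "B", "payload": pb})
    return muxed

def demultiplex(muxed):
    """Receiver side: split the multiplexed sequence back into
    the two compressed image data streams."""
    a = [p["payload"] for p in muxed if p["stream"] == "A"]
    b = [p["payload"] for p in muxed if p["stream"] == "B"]
    return a, b

muxed = multiplex(["a0", "a1"], ["b0", "b1"])
a, b = demultiplex(muxed)
```

The demultiplexing half corresponds to what the data reception unit 20 of the image reception apparatus does when it extracts compressed image data A and B from the received signal.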
The user IF unit 14 is an input unit for signals that control the operation of the image transmission apparatus 1; one example of the user IF unit 14 is the receiver of a remote control. Control signals from the user IF unit 14 are output to the control unit 15. The control unit 15 controls the entire image transmission apparatus 1 in accordance with the signals from the user IF unit 14; one example of the control unit 15 is a microprocessor. The image data from the image transmission apparatus 1 is supplied to the image reception apparatus 2 via the cable 3.
Next, the configuration of the image reception apparatus 2 will be described in detail.
The data reception unit 20 receives signals in a format suitable for cable transmission. The signals input to the data reception unit 20 are converted from the cable transmission format into predetermined digital data, from which the two types of compressed image data (compressed image data A and compressed image data B) are extracted and output to the decompression processing unit 21.
The decompression processing unit 21 reverses the compression applied by the compression processing unit 12 in the image transmission apparatus 1 and generates two types of image data (decoded image data A and decoded image data B). The decoded image data A is output to the image superimposing unit 23, and the decoded image data B to the additional image generation unit 22.
The additional image generation unit 22 extracts image features from the input decoded image data B, generates additional image data based on the result, and outputs the additional image data to the image superimposing unit 23. Examples of image feature extraction include object detection, moving object detection, state detection, edge extraction, and extraction of pixels whose luminance lies within a predefined threshold range. Examples of object detection include vehicle detection, detection of objects running into the road, lane detection, road surface detection, and detection of objects such as traffic lights in an in-vehicle surroundings monitoring system. An example of moving object detection is suspicious person detection in a surveillance system. Examples of state detection include red light detection, road surface condition detection, weather detection, and wrong-way travel detection. The additional image generation unit 22 may also output the input decoded image data B as it is, or may output the image feature extraction information.
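Of the feature extraction examples above, the luminance-threshold extraction is simple enough to sketch directly. The following is a toy grayscale version; the function name and parameters are illustrative, not part of the patent.

```python
def extract_by_luminance(image, lo, hi):
    """Return a binary mask marking pixels whose luminance lies
    within the predefined threshold range [lo, hi]."""
    return [[1 if lo <= px <= hi else 0 for px in row] for row in image]

image = [
    [ 10, 120, 250],
    [130,  40, 140],
]
mask = extract_by_luminance(image, 100, 200)
# pixels with values 120, 130, and 140 fall inside the range
```

Such a mask is one possible basis for the additional image data: the marked pixels can be rendered as an overlay, or fed into the object or state detection stages described above.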
The image superimposing unit 23 generates image data in which the two types of image data input from the decompression processing unit 21 and the additional image generation unit 22 are superimposed, and outputs it to the display unit 24.
The image superimposing unit 23 has a memory unit for temporarily storing images and a memory control unit. Until the decoded image data A is input to the image superimposing unit 23, the additional image data is temporarily stored in the memory unit; at the timing when the decoded image data A and the additional image data are to be superimposed, the additional image data is read out from the memory unit and the superimposition is performed. The image superimposing unit 23 may also use the decoded image data B or the additional image data to enhance the image quality of the decoded image data A. One example of such enhancement is edge-enhancement processing of the decoded image data A using decoded image data B or additional image data that contains edge detection information.
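The buffering behavior just described can be sketched as follows. This is an illustrative model, not the patent's memory controller: additional images are held, keyed by a hypothetical frame number, until the decoded image A for the same frame arrives, and the string concatenation stands in for actual pixel blending.

```python
class Superimposer:
    """Buffers additional images until the decoded image A for the
    same frame number arrives, then superimposes and emits them."""
    def __init__(self):
        self.buffered = {}  # frame number -> additional image (the memory unit)

    def on_additional(self, frame_no, additional):
        # Additional image arrives first; hold it in the memory unit.
        self.buffered[frame_no] = additional

    def on_decoded_a(self, frame_no, decoded_a):
        # Decoded image A arrives; read out and superimpose if buffered.
        additional = self.buffered.pop(frame_no, None)
        if additional is None:
            return decoded_a  # nothing to overlay for this frame
        return f"{decoded_a}+{additional}"  # stand-in for pixel blending

s = Superimposer()
s.on_additional(1, "box")
out = s.on_decoded_a(1, "imgA")
```

The memory unit thus absorbs the timing difference between the low-delay and high-quality paths: the overlay is applied exactly when the matching decoded image A becomes available.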
The display unit 24 converts the input image data into a signal suited to its display method and shows it on the screen. Examples of the display unit 24 include a liquid crystal display, a plasma display, an organic EL (Electro-Luminescence) display, and a projector.
The user IF unit 25 is an input unit for receiving signals that control the operation of the image receiving device 2; one example is the receiver of a remote control. Control signals from the user IF unit 25 are supplied to the control unit 26, which controls the entire image receiving device 2 in accordance with those signals.
FIG. 2 shows the effective area in which the image data of one frame period is transmitted and the blanking period in which no image data is transmitted.
The region denoted by reference numeral 400 is the vertical period, which consists of a vertical blanking period 401 and a vertical effective period 402. The VSYNC signal is a 1-bit signal that is 1 for a specified number of lines from the start of the vertical blanking period 401, and 0 for the remainder of the vertical blanking period and throughout the vertical effective period 402. An example of the specified number of lines is 4.
The region denoted by reference numeral 403 is the horizontal period, which consists of a horizontal blanking period 404 and a horizontal effective period 405. The HSYNC signal is a 1-bit signal that is 1 for a specified number of pixels from the start of the horizontal blanking period 404, and 0 for the remainder of the horizontal blanking period and throughout the horizontal effective period 405. An example of the specified number of pixels is 40.
The effective period 406 is the region enclosed by the vertical effective period 402 and the horizontal effective period 405, and image data is assigned to this period. The blanking period 407 is the region enclosed by the vertical blanking period 401 and the horizontal blanking period 404.
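The relationship among the periods 400 to 407 can be modeled as below. The counts used (30 lines, 6 blanking lines, 40 pixels, 10 blanking pixels) are toy values chosen only to keep the example small; they are not real video timing and not taken from the embodiment.

```python
TOTAL_LINES, V_BLANK = 30, 6     # vertical period 400 = 401 + 402
TOTAL_PIXELS, H_BLANK = 40, 10   # horizontal period 403 = 404 + 405

def region(line, pixel):
    """Classify one (line, pixel) slot as effective area 406 or blanking 407."""
    in_v_effective = line >= V_BLANK    # inside vertical effective period 402
    in_h_effective = pixel >= H_BLANK   # inside horizontal effective period 405
    return "effective" if (in_v_effective and in_h_effective) else "blanking"

def vsync(line, width=4):
    """VSYNC: 1 for `width` lines from the start of vertical blanking, else 0."""
    return 1 if line < width else 0

# Only the intersection of the two effective periods carries image data:
effective_slots = sum(region(l, p) == "effective"
                      for l in range(TOTAL_LINES)
                      for p in range(TOTAL_PIXELS))
```

The count of effective slots equals (effective lines) x (effective pixels), matching the rectangular area 406 in FIG. 2.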
In this embodiment, with the above structure, the compressed image data and the sub-compression-code information are transmitted in the effective period 406, and the main compression-code information is transmitted in the blanking period 407. In the blanking period 407, packetized audio data and other auxiliary data are also transmitted.
A method of sending such packets of audio data and the like reliably in the blanking period 407 is disclosed, for example, in Japanese translation of PCT publication No. 2005-514873.
With this configuration, the packet data in the blanking period carries an error-correction code, so errors arising on the transmission path can be corrected and error tolerance is improved. Furthermore, the packet data in the blanking period is transmitted over two physically separate channels, and the channel in use is switched at fixed intervals. A burst error occurring on one channel therefore does not affect the other, so the data error can be corrected. The residual error rate improves from about 10⁻⁹ in the horizontal effective period to about 10⁻¹⁴ in the horizontal blanking period.
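The channel-alternation scheme described above can be sketched as follows; the slot length and packet counts are illustrative assumptions, not values from the embodiment.

```python
def assign_channels(n_packets, slot_len):
    """Channel (0 or 1) for each packet: switch channel every slot_len packets,
    so consecutive packets do not all share one physical channel."""
    return [(i // slot_len) % 2 for i in range(n_packets)]

chans = assign_channels(8, 2)   # alternating pattern of two-packet slots

# A burst error confined to channel 1 leaves every channel-0 packet intact,
# and those packets can still be recovered via their error-correction codes:
survivors = [i for i, c in enumerate(chans) if c == 0]
```

Because a burst is confined to one channel, at most half the packets in any interval are exposed to it.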
FIG. 3 shows a detailed example of the image processing unit 11 described above. The frame thinning unit 110 thins out frames of the input image data supplied from the image input unit 10; for example, dropping every other frame of input image data arriving at 30 fps (frames per second) yields 15 fps output. The thinning may also be skipped, with the input image data output at its original frame rate. The reduction unit 111 applies to the input image data an n-tap filtering process (n being a positive integer) using the pixel values of the n preceding and following lines, or a pixel thinning process, and outputs the resulting reduced image. One way to generate a reduced image is to output only the even lines of the input image, producing an image reduced to 1/2 vertically. The image-feature-extraction-image generation unit 112 extracts the features of the input image from the input image data and generates image data in which everything other than those features is simplified or omitted. Examples of feature extraction processing are first-order differential filters (edge detection, line detection) and second-order differential filters (Laplacian filter); one example of edge detection is the Sobel filter.
Although the frame thinning unit 110, the reduction unit 111, and the image-feature-extraction-image generation unit 112 have been described as components of the image processing unit 11, not all of them need be included. For example, the image processing unit 11 may consist of the reduction unit 111 alone, or of the image-feature-extraction-image generation unit 112 alone.
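The three processing blocks above can be sketched in pure Python as follows, representing a grayscale image as a list of rows of integers. The function names are illustrative, not from the embodiment.

```python
def thin_frames(frames):
    """Frame thinning unit 110: keep every other frame (e.g. 30 fps -> 15 fps)."""
    return frames[::2]

def reduce_vertical_half(img):
    """Reduction unit 111 (example): keep only the even lines -> vertical 1/2."""
    return img[::2]

def sobel_horizontal(img):
    """Feature-extraction unit 112: horizontal Sobel response (edge detection)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    kx = [(-1, 0, 1), (-2, 0, 2), (-1, 0, 1)]   # Sobel kernel, x direction
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kx[j][i] * img[y - 1 + j][x - 1 + i]
                            for j in range(3) for i in range(3))
    return out
```

On a vertical brightness step, the Sobel response is nonzero only around the step, illustrating how everything other than the feature is discarded.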
FIG. 4 shows a detailed example of the compression processing unit 12. In this embodiment it has a compression unit A, which generates the compressed image data A used for viewing, and a compression unit B, which generates the compressed image data B used for recognition. Its characteristic feature is that the compression delay of the compression unit B is shorter than that of the compression unit A.
The compression unit A performs high-quality compression on the image data supplied from the image input unit 10. One example is H.264 compression using B-pictures. A B-picture is compressed with reference to both past and future frames. Because future frames are used, a delay of one frame time or more arises during compression, making this scheme unsuited to low-delay transmission; however, it yields high-quality decoded images and is therefore well suited to viewing.
The compression unit B performs low-delay compression on the image data supplied from the image processing unit 11. One example is H.264 compression using P-pictures. A P-picture is compressed with reference to past frames only. Because no future frames are used, the compression unit B incurs no compression delay from waiting for future frames, making this scheme well suited to recognition applications such as object detection. Moreover, since the image data supplied from the image processing unit 11 has already undergone digital image processing that reduces its information content, the amount of code generated after compression can be reduced substantially compared with compressing the image data from the image input unit 10 directly. Put differently, by allocating a larger amount of code to the feature-extracted image data, it can be compressed at higher quality than in the compression unit A.
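The delay difference between the two encoders can be illustrated with a toy timing model, counting frame periods as integer ticks. This is a purely illustrative abstraction, not the embodiment's implementation.

```python
def emit_times(n_frames, future_refs):
    """Tick at which each of frames 0..n-1 can be emitted: an encoder that
    references `future_refs` future frames must first wait for them to arrive."""
    return [t + future_refs for t in range(n_frames)]

ticks_a = emit_times(4, future_refs=1)  # unit A: one future frame (B-picture style)
ticks_b = emit_times(4, future_refs=0)  # unit B: past frames only (P-picture style)
# Unit B's output leads unit A's by exactly one frame period for every frame.
```

This is why, in FIG. 7, the compression delay time 618 of the compressed image data B is shorter than the delay time 619 of the compressed image data A.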
The memory control unit 122 controls the temporary storage, in the memory unit 123, of the compressed image data B supplied from the compression unit B. One example of this control is to buffer in the memory unit 123 the compressed image data B processed during the effective period 406, and to output it during the subsequent blanking period 407.
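This store-then-drain control can be sketched minimally as follows; the class and method names are invented for the example.

```python
class BlankingBuffer:
    """Queues compressed data B during the effective period 406 and releases
    all of it when the blanking period 407 begins."""

    def __init__(self):
        self._queue = []

    def store(self, chunk):
        # Called while the effective period 406 is in progress.
        self._queue.append(chunk)

    def drain(self):
        # Called at the start of the blanking period 407; empties the queue.
        out, self._queue = self._queue, []
        return out
```

Chunks come out in arrival order, and the buffer is empty again for the next frame's effective period.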
Next, FIGS. 5(A) to 5(D) show examples of the input and output image data of the additional image generation unit 22 and the image superimposing unit 23.
The image 50 in FIG. 5(A) is a displayed example of the image data supplied from the decompression processing unit 21 to the image superimposing unit 23 (the decoded image of the compressed image data A). In the image 50, two vehicles, 501 and 502, are drawn on the road surface 500: the vehicle 501 has cut into the own vehicle's lane from the side, and the vehicle 502 is far ahead of the own vehicle. A background 503 is also drawn.
The image 51 in FIG. 5(B) is a displayed example of the image data supplied from the decompression processing unit 21 to the additional image generation unit 22 (the decoded image of the compressed image data B). The compressed image data B is the image data after edge detection in the image processing unit 11, so image information other than the edges has been simplified or removed. The drawing 511 is the edge-extracted image of the vehicle 501, and the drawing 512 is the edge-extracted image of the vehicle 502.
The image 52 in FIG. 5(C) is a displayed example of the image data supplied from the additional image generation unit 22 to the image superimposing unit 23. The additional image generation unit 22 performs vehicle detection on the input image 51 and generates a rectangular figure surrounding each detected vehicle. The color of the rectangle may be varied according to the distance and direction from the own vehicle, and the figure may also be varied over time, for example by blinking. The rectangle 521 indicates the vehicle 501, and the rectangle 522 indicates the vehicle 502.
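The rectangle generation described above might look like the following sketch. The distance thresholds, color names, and blink rule are invented for illustration; the embodiment does not specify them.

```python
def make_overlays(detections):
    """detections: list of (x, y, w, h, distance_m) for detected vehicles.
    Returns overlay figures whose color and blinking depend on distance."""
    overlays = []
    for x, y, w, h, dist in detections:
        # Illustrative rule: near objects red and blinking, far objects green.
        color = "red" if dist < 10 else "yellow" if dist < 30 else "green"
        overlays.append({"rect": (x, y, w, h),
                         "color": color,
                         "blink": dist < 10})
    return overlays

figs = make_overlays([(40, 60, 30, 20, 5.0),    # cutting-in vehicle like 501
                      (90, 50, 10, 8, 45.0)])   # distant vehicle like 502
```

The resulting figures correspond to the rectangles 521 and 522 that the image superimposing unit 23 then draws over the decoded image.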
The image 53 in FIG. 5(D) is a displayed example of the image data supplied from the image superimposing unit 23 to the display unit 24.
FIG. 6 shows a detailed example of the decompression processing unit 21.
In the figure, the decompression unit A 210 decompresses the compressed image data A to generate the decoded image data A, and the decompression unit B 211 decompresses the compressed image data B to generate the decoded image data B.
FIG. 7 shows an example of the image input and output timing in this embodiment.
In the figure, the signal 600 is the vertical blanking signal for the input images 604 and 605 of the image transmission device 1. The signal 610 is the vertical blanking signal for the output images 614, 615, and 617 of the image transmission device 1. The signal 620 is the vertical blanking signal for the input images 624, 625, and 627 of the image superimposing unit 23 of the image receiving device 2. The signal 630 is the vertical blanking signal for the output image 631 of the image superimposing unit 23 of the image receiving device 2. Each of the vertical blanking signals 600, 610, 620, and 630 consists of an effective period 601 and a blanking period 602.
The output image 614 is the compressed image data B obtained by compressing the input image 604 in the compression unit B. The output image 617 is the compressed image data A obtained by compressing the input image 604 in the compression unit A. The compression delay time 618 of the compressed image data B is shorter than the compression delay time 619 of the compressed image data A. This example shows the case in which the compression unit A uses a frame one frame into the future for its compression processing.
The input image 624 is the decoded image data B obtained by decompressing the compressed image data 614 in the decompression unit B.
The image data 626 is the additional image generated by the additional image generation unit 22 after the recognition processing time 628 applied to the image data 624. The input image 627 is the decoded image data A obtained by decompressing the compressed image data 617 in the decompression unit A.
The output image 631 is the image data obtained by superimposing the image data 626 and the input image 627.
FIG. 8 shows another example of the image input and output timing in this embodiment. Signals bearing the same numerals as in FIG. 7 are identical to those in FIG. 7, so their description is omitted.
The characteristic of this example is that the compressed image data B 614, compressed with low delay, is transmitted in both the effective period and the blanking period. This scheme gives the compressed image data B 614 redundancy, which improves error tolerance when a transmission error occurs. In the additional image generation unit 22 of the image receiving device 2, if the decoded image data B 624 is abnormal, the decoded image data B 726 is used instead, improving transmission error tolerance.
FIG. 9 shows another example of the image input and output timing in this embodiment.
As is clear from the figure, this example eliminates, in the image transmission device 1 of FIG. 1, the transmission of the compressed image data B in the effective period (see reference numeral 614 in FIG. 7 above). With this scheme, only the compressed image data A is transmitted during the effective period, which allows the image quality of the decoded image data A to be raised further.
FIG. 10 shows yet another example of the image input and output timing in this embodiment.
As is clear from the figure, this is an example in which the compression unit A of the image transmission device 1 of FIG. 1 (see reference numeral 120 in FIG. 4 above) can use frames up to two frames into the future for its compression processing. This scheme allows the image quality of the decoded image data A to be raised further. The compression unit A may also be configured to compress using frames up to N frames into the future; likewise, the configurations of FIGS. 7, 8, and 9 above may compress using frames up to N frames into the future.
FIG. 11 further shows an example of the frame-by-frame timing, in this embodiment, of the input image data, the compressed image data A, the compressed image data B, the additional image, and the finally displayed image.
The bottom row of the figure shows the APVF Header of the APVF Video PDU format defined by the AVTP Video Protocol in the P1722 standard.
The following describes, as an example, the case in which the compressed image data A and the compressed image data B are transmitted with the AVTP Video Protocol.
The frames 1402 and 1404 are compressed image data for the input image data 1401; in this example a person 1409 is present in the frame 1401. The compressed image data A (frame 1402) is compressed with a small delay. The compressed image data B (frame 1404), by contrast, is compressed using more time and a wider range of frame information than the compressed image data A, with less compression degradation. The compressed image data B therefore achieves higher image quality than the compressed image data A.
The additional image 1406 is the image of a border to be drawn around the person in the picture, obtained by applying decompression processing and person-recognition processing to the frame 1402.
The frame 1405 is the image displayed on the monitor. Reference numeral 1407 denotes the image obtained by superimposing the decoded image (the person) obtained by decompressing the compressed image data B (frame 1404) and the additional image 1406 (the border image).
As this example shows, two kinds of compressed image data (the compressed image data A and the compressed image data B) are generated from the input image and output from the image transmission device, with a delay difference of one frame or more between them. The image receiving device likewise decompresses the compressed image data A and the compressed image data B with a time difference of one frame or more. The image receiving device also generates the additional image 1406 based on the compressed image data A and displays it superimposed on the decoded image obtained by decompressing the compressed image data B.
That is, in this example the frcount values of the compressed image data A (frame 1402) and the compressed image data B (frame 1404), both generated from the same frame of the input image (frame 1401), are set to the same value. As a result, even when the decompression timings on the image receiving device side differ by one frame or more, or when the post-processing times (here, the recognition processing) for the several decoded images obtained by decompression differ by one frame or more, the image receiving device can still identify the compressed image data A and the compressed image data B that correspond to the same input frame (frame 1401).
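The matching enabled by the shared frcount value can be sketched as follows; the payload strings and the one-frame skew between the streams are illustrative.

```python
def pair_by_frcount(stream_a, stream_b):
    """Each stream is a list of (frcount, payload) tuples in arrival order.
    Returns (frcount, payload_a, payload_b) for every frcount present in both,
    regardless of how far the two streams are skewed in time."""
    b_index = {fc: payload for fc, payload in stream_b}
    return [(fc, pa, b_index[fc]) for fc, pa in stream_a if fc in b_index]

# Stream B arrives one frame later than stream A, but frcount still pairs them:
pairs = pair_by_frcount([(7, "A7"), (8, "A8")],
                        [(8, "B8"), (9, "B9")])
```

Only the units sharing a frame count are paired; units whose counterpart has not yet arrived are simply not matched in this pass.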
Although the AVTP Video Protocol has been used as the example here, the present invention is not limited to it; for example, the frame count information may instead be stored in the user data areas of MPEG2-TS or of the H.264/AVC NAL.
In short, according to the image transmission device and image receiving device of the embodiment described above, the image data transmitted by the image transmission device is compressed and transmitted with two compression schemes having different delay times, which makes it possible to achieve both high image quality and high recognition performance. Furthermore, by making the compressed image data that was compressed with low delay redundant, transmission error tolerance can be improved.
Next, image transmission systems to which the image transmission device and image receiving device of the present invention described above are applied will be explained with reference to FIGS. 12 to 14. As examples of such systems, FIG. 12 shows a surveillance camera system, FIG. 13 a videoconference system, and FIG. 14 an in-vehicle camera system.
In FIG. 12, reference numerals 1101, 1102, and 1103 denote surveillance cameras installed at points A, B, and C respectively, 1104 denotes a monitoring center that receives the images captured by the surveillance cameras 1101, 1102, and 1103, and 1105 denotes a wide area network (WAN) such as an Internet connection. The video captured by the surveillance cameras 1101 to 1103 can be displayed on a monitor or the like in the monitoring center 1104 via the WAN 1105. Although FIG. 12 shows the case of three surveillance cameras, the number of surveillance cameras may be two or fewer, or four or more.
In this system, the image transmission device of the embodiment is mounted, for example, in the surveillance cameras 1101 to 1103. That is, the image transmission device applies the compression processing described above to the input image captured through the lens of each surveillance camera 1101 to 1103, and the compressed input image is output to the WAN 1105. The image receiving device of the embodiment, for its part, is mounted, for example, in the monitoring center 1104, which includes a display monitor. That is, the image receiving device applies to the compressed input image the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image described above, and shows the superimposed image on the display monitor. The image shown on the monitor may instead be the decompressed image alone, or the additional image alone.
Next, in FIG. 13, reference numerals 1201, 1202, and 1203 denote videoconference devices installed at points A, B, and C respectively, and 1204 denotes a WAN such as an Internet connection. In this system, the video captured by the camera of each videoconference device 1201 to 1203 can be displayed on the monitors of the videoconference devices 1201 to 1203 via the WAN 1204. Although FIG. 13 shows the case of three videoconference devices, their number may be two, or four or more.
In this system, the image transmission device of the embodiment is mounted, for example, in the camera of each videoconference device 1201 to 1203. That is, the image transmission device applies the compression processing described above to the input image captured through the camera lens of each videoconference device 1201 to 1203, and the compressed input image is output to the WAN 1204. The image receiving device of the embodiment is likewise mounted, for example, in each videoconference device 1201, 1202, and 1203. Each videoconference device may incorporate its own display monitor or use an external one; in either case the image receiving device applies to the compressed input image the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image described above, and shows the superimposed image on the display monitor. The image shown on the monitor may instead be the decompressed image alone, or the additional image alone.
Further, in FIG. 14, which shows an example of an in-vehicle camera system, reference numeral 1301 denotes an automobile, 1302 and 1303 denote in-vehicle cameras mounted on the automobile 1301, 1306 denotes an ECU (Electronic Control Unit) that performs the decompression and recognition processing of the image data compressed by the in-vehicle cameras, 1304 denotes a monitor that displays the video captured by the in-vehicle cameras 1302 and 1303 and decompressed by the ECU 1306, together with an OSD (On Screen Display) reflecting the recognition results and the like, and 1305 denotes a local area network (LAN) within the automobile 1301. The video captured by the in-vehicle cameras 1302 and 1303 can be displayed on the monitor 1304 after undergoing, via the LAN 1305, image processing in the ECU 1306 such as decompression, recognition, and OSD addition. Although FIG. 14 shows the case of two in-vehicle cameras, the number of in-vehicle cameras may be one, or three or more.
In this system, the image transmission device of the embodiment is mounted, for example, in the in-vehicle cameras 1302 and 1303. That is, the image transmission device applies the compression processing described above to the input image captured through the lens of each in-vehicle camera 1302 and 1303, and the compressed input image is output to the LAN 1305. The image receiving device of the embodiment, for its part, is mounted, for example, in the ECU 1306; it may also combine the functions of the ECU 1306 and the monitor 1304. The image receiving device then applies to the compressed input image the decompression processing, recognition processing, additional image generation processing, and superimposition of the decompressed input image and the additional image described above, and the processed image is output to the monitor 1304.
 The present invention is not limited to the embodiments described above, and various modifications are included. For example, the above embodiments describe the entire system in detail in order to explain the present invention in an easy-to-understand manner, and the invention is not necessarily limited to configurations including all of the described elements. Part of the configuration of one embodiment may be replaced with the configuration of another embodiment, and the configuration of another embodiment may be added to the configuration of one embodiment. For part of the configuration of each embodiment, other configurations may be added, deleted, or substituted.
 Each of the above configurations, functions, processing units, processing means, and the like may be realized partly or entirely in hardware, for example by designing them as integrated circuits. Each of the above configurations, functions, and the like may also be realized in software, by a processor interpreting and executing a program that implements each function. Information such as the programs, tables, and files that implement each function can be stored in a memory, on a recording device such as a hard disk or SSD (Solid State Drive), or on a recording medium such as an IC card, SD card, or DVD.
 The control lines and information lines shown are those considered necessary for the explanation; not all of the control lines and information lines in a product are necessarily shown. In practice, almost all of the components may be regarded as interconnected.
 DESCRIPTION OF SYMBOLS: 1…image transmission apparatus, 2…image receiving apparatus, 3…cable unit, 10…image input unit, 11…image processing unit, 12…compression processing unit, 13…data transfer unit, 14…user IF unit, 15…control unit, 20…data receiving unit, 21…decompression unit, 22…additional image generation unit, 23…image superimposing unit, 24…display unit, 25…user IF unit, 26…control unit

Claims (11)

  1.  An image transmission apparatus comprising:
     an image input unit to which image data is input;
     a compression processing unit that generates compressed image data from the image input to the image input unit; and
     a data transfer unit that outputs the compressed image data generated by the compression processing unit to a transmission path,
     wherein the compression processing unit generates a plurality of compressed image data having different compression delay times, and
     the data transfer unit multiplexes and outputs the plurality of compressed image data having different compression delay times generated by the compression processing unit.
  2.  The image transmission apparatus according to claim 1, wherein the compression processing unit determines the compression delay time of the compressed image data to be generated on the basis of a requested delay time transmitted from another device connected via the transmission path.
  3.  The image transmission apparatus according to claim 2, further comprising an image processing unit, wherein the image processing unit performs image processing before the compression processing unit performs compression processing with a short compression delay time.
  4.  The image transmission apparatus according to claim 3, wherein the image processing unit comprises means for performing at least one of frame thinning processing, reduction processing, and edge filter processing.
  5.  The image transmission apparatus according to claim 2, wherein the compressed image data produced by compression processing with a short compression delay time is transmitted during a blanking period of the image data being transferred.
  6.  An image receiving apparatus comprising:
     a data receiving unit that receives multiplexed compressed image data having a plurality of different compression delay times and extracts each compressed image data;
     a decompression processing unit that decompresses each of the compressed image data having different compression delay times received by the data receiving unit to generate a plurality of image data;
     an additional image generation unit that generates additional image data; and
     an image superimposing unit that superimposes a plurality of image data,
     wherein the image superimposing unit superimposes, on part of the image data generated by the decompression processing unit, the additional image data from the additional image generation unit or other image data generated by the decompression processing unit, and outputs the result.
  7.  The image receiving apparatus according to claim 6, wherein the image superimposing unit superimposes, on part of the image data generated by the decompression processing unit, image data having a short compression delay time among the plurality of image data generated by the decompression processing unit, and outputs the result.
  8.  The image receiving apparatus according to claim 7, wherein the additional image generation unit performs image feature extraction and generates the additional image data on the basis of the result, the image feature extraction including object detection, moving object detection, state detection, edge extraction, and extraction of pixels having luminance within a predetermined threshold range.
  9.  A surveillance camera system comprising:
     a plurality of surveillance cameras installed at different locations;
     a network to which the plurality of surveillance cameras are connected; and
     a monitoring center that receives, via the network, the images captured by the plurality of surveillance cameras,
     wherein each of the surveillance cameras is equipped with the image transmission apparatus according to claim 1,
     the monitoring center is equipped with the image receiving apparatus according to claim 6, and
     the monitoring center includes a display monitor that displays the images generated by the image receiving apparatus.
  10.  A video conference system comprising:
     a plurality of video conference devices installed at different locations; and
     a network to which the plurality of video conference devices are connected,
     wherein each video conference device is equipped with the image transmission apparatus according to claim 1 and the image receiving apparatus according to claim 6, and
     each video conference device includes a display monitor that receives, with its image receiving apparatus, an image transferred from the image transmission apparatus of another video conference device and displays the generated image.
  11.  An in-vehicle camera system comprising:
     a plurality of in-vehicle cameras installed at different parts of an automobile;
     a network arranged in the automobile, to which the plurality of in-vehicle cameras are connected;
     an ECU having a function of receiving, via the network, the images captured by the plurality of in-vehicle cameras; and
     a display monitor arranged in the automobile that displays the images generated by the ECU,
     wherein each of the in-vehicle cameras is equipped with the image transmission apparatus according to claim 1, and
     the ECU is equipped with the image receiving apparatus according to claim 6.
PCT/JP2014/052925 2014-02-07 2014-02-07 Image transmission device, image reception device, and surveillance camera system, teleconference system, and vehicle-mounted camera system using same WO2015118664A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2014/052925 WO2015118664A1 (en) 2014-02-07 2014-02-07 Image transmission device, image reception device, and surveillance camera system, teleconference system, and vehicle-mounted camera system using same

Publications (1)

Publication Number Publication Date
WO2015118664A1 true WO2015118664A1 (en) 2015-08-13

Family

ID=53777494

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2014/052925 WO2015118664A1 (en) 2014-02-07 2014-02-07 Image transmission device, image reception device, and surveillance camera system, teleconference system, and vehicle-mounted camera system using same

Country Status (1)

Country Link
WO (1) WO2015118664A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007336260A (en) * 2006-06-15 2007-12-27 Matsushita Electric Ind Co Ltd Video monitoring device
JP2008181196A (en) * 2007-01-23 2008-08-07 Nikon Corp Image processing apparatus, electronic camera, and image processing program
JP2010288186A (en) * 2009-06-15 2010-12-24 Olympus Corp Image transmitting apparatus and image receiving apparatus
JP2013535921A (en) * 2010-08-06 2013-09-12 トムソン ライセンシング Apparatus and method for receiving signals
JP2013186834A (en) * 2012-03-09 2013-09-19 Canon Inc Image processing apparatus, control method for image processing apparatus, and program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110050462A (en) * 2016-12-22 2019-07-23 康奈可关精株式会社 Image display control apparatus
WO2018123078A1 (en) * 2016-12-27 2018-07-05 株式会社Nexpoint Monitoring camera system
JP2018107655A (en) * 2016-12-27 2018-07-05 株式会社Nexpoint Monitoring camera system
WO2023136211A1 (en) * 2022-01-14 2023-07-20 マクセル株式会社 Imaging lens system, camera module, vehicle-mounted system, and mobile object

Legal Events

- 121 — EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 14881722; Country of ref document: EP; Kind code of ref document: A1)
- NENP — Non-entry into the national phase (Ref country code: DE)
- 122 — EP: PCT application non-entry in European phase (Ref document number: 14881722; Country of ref document: EP; Kind code of ref document: A1)
- NENP — Non-entry into the national phase (Ref country code: JP)