US10796617B2 - Device, method and system for processing an image data stream - Google Patents

Device, method and system for processing an image data stream

Info

Publication number
US10796617B2
US10796617B2
Authority
US
United States
Prior art keywords
data stream
image data
processing unit
resolution
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US13/916,446
Other versions
US20140368514A1 (en)
Inventor
Andre' ROGER
Romain Ygnace
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Infineon Technologies AG
Original Assignee
Infineon Technologies AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Infineon Technologies AG
Priority to US13/916,446 (granted as US10796617B2)
Assigned to INFINEON TECHNOLOGIES AG. Assignors: ROGER, ANDRE'; YGNACE, ROMAIN
Priority to JP2014119183A (JP5902234B2)
Priority to KR1020140070922A (KR101642181B1)
Priority to DE102014008893.6A (DE102014008893A1)
Priority to CN201410262419.4A (CN104244014B)
Publication of US20140368514A1
Application granted
Publication of US10796617B2
Legal status: Active; expiration adjusted

Classifications

    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G 5/003 - Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G 5/005 - Adapting incoming signals to the display format of the display terminal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 15/00 - Digital computers in general; Data processing equipment in general
    • G06F 15/76 - Architectures of general purpose stored program computers
    • G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 - General purpose image data processing
    • G06T 1/60 - Memory management
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 - Aspects of display data processing
    • G09G 2340/02 - Handling of images in compressed format, e.g. JPEG, MPEG
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2340/00 - Aspects of display data processing
    • G09G 2340/04 - Changes in size, position or resolution of an image
    • G09G 2340/0407 - Resolution change, inclusive of the use of different resolutions for different screen areas
    • G09G 2340/0428 - Gradation resolution change
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 - Aspects of the architecture of display systems
    • G09G 2360/02 - Graphics controller able to handle multiple formats, e.g. input or output formats
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G - ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G 2360/00 - Aspects of the architecture of display systems
    • G09G 2360/06 - Use of more than one graphics processor to process data before displaying to one or more screens


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Image Processing (AREA)

Abstract

An implementation relates to a device for processing an image data stream. The device may include a first processing unit and a second processing unit for receiving the image data stream. The first processing unit may be arranged for providing a first data stream that has a reduced bandwidth compared to the image data stream. The second processing unit may be arranged for providing a second data stream that has a reduced bandwidth compared to the image data stream.

Description

BACKGROUND OF THE INVENTION
Embodiments relate to an improvement of (pre-)processing of image data.
In order to enhance road safety and reduce accidents, Euro-NCAP defined specific Advanced Driver Assistance System (ADAS) features and test scenarios to be used in cars produced from 2016 onward in order to obtain a five-star rating.
Different types of systems are used that are, e.g., camera-based or radar-based, which are intended to detect risk situations, notify a driver and in case the driver does not intervene in time, autonomously initiate an action, e.g., slow down a car to avoid a collision.
Camera-based systems in particular enable forward collision warning, lane departure warning and high beam assist.
In order to provide a desired degree of safety even under varying and different light conditions, high dynamic range sensors may be used, which in turn results in an increasing demand for memory and computing performance. On the other hand, camera-based systems are intended to become less expensive so that more cars can be equipped with such safety-enhancing features.
BRIEF DESCRIPTION OF THE DRAWINGS
Embodiments are shown and illustrated with reference to the drawings. The drawings serve to illustrate the basic principle, so that only aspects necessary for understanding the basic principle are illustrated. The drawings are not to scale. In the drawings the same reference characters denote like features.
FIG. 1 shows an exemplary schematic as it may be utilized in an ADAS application.
FIG. 2 shows an exemplary schematic of an image processing architecture.
FIG. 3 shows an exemplary architecture comprising several pixelpaths that are arranged in parallel.
FIG. 4 shows an exemplary architecture comprising the pixelpaths of FIG. 3, wherein the data streams are stored in a memory, in particular realized as an embedded RAM.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
The approach presented in particular provides an image preprocessor architecture with a low demand for memory. The solution can in particular be utilized in an Advanced Driver Assistance System (ADAS).
Image processing for ADAS may use several processing paths, wherein at least two of the processing paths may be arranged in parallel to each other. Image memory may comprise random access memory (RAM), which is regarded as a key factor for the cost of ADAS devices.
A first embodiment relates to a device for processing an image data stream, comprising a first processing unit and a second processing unit for receiving the image data stream; wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream; and wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream.
An image data stream may be any data provided by an image recording device, e.g., a camera. The image data stream comprises images or pictures taken at a particular rate. The data rate of the image data stream depends on the size of the images taken and on the image rate, i.e., how often images are recorded per time interval. The image data stream may be provided by at least one camera. The image data stream may have been pre-processed by a camera interface, wherein such pre-processing may in particular comprise de-compression.
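To make this dependency concrete: the data rate of an uncompressed image data stream is the frame size in bytes multiplied by the image rate. A minimal C sketch of this arithmetic follows; the frame geometry, pixel precision and frame rate are assumed values, chosen so that the result lands near the 120 MByte/s example used with FIG. 1 below.

    #include <stdio.h>

    /* Illustrative data-rate arithmetic: an uncompressed stream's rate
     * is bytes-per-frame times frames-per-second. The 1280x720 frame,
     * 16-bit pixel precision and 60 fps are assumptions. */
    int main(void) {
        const unsigned width = 1280, height = 720;
        const unsigned bytes_per_pixel = 2;  /* 16-bit pixel precision */
        const unsigned fps = 60;

        unsigned long frame_bytes = (unsigned long)width * height * bytes_per_pixel;
        unsigned long rate = frame_bytes * fps;

        printf("frame: %lu bytes, stream: %.1f MByte/s\n",
               frame_bytes, rate / 1e6);  /* ~110.6 MByte/s */
        return 0;
    }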
The first processing unit and the second processing unit may be physically separate processing entities that may in particular be deployed on a single chip or on several chips. Each processing unit forms a processing path, also referred to as a pixelpath. There may be two or more processing units processing the image data stream.
The compression rate of the first processing unit and the compression rate of the second processing unit may be adjusted or subject to configuration.
A second embodiment relates to a device comprising a first processing unit coupled to an input image data stream, wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the input image data stream; comprising a second processing unit coupled to the input image data stream, wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream; and comprising a memory for at least partially storing the first data stream and the second data stream.
A third embodiment relates to a method for processing an image data stream comprising the following steps:
    • providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream;
    • providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream;
    • wherein the first data stream is provided by a first processing unit and the second data stream is provided by a second processing unit.
A fourth embodiment is directed to a system for processing an image data stream comprising: means for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream; and means for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream.
FIG. 1 shows an exemplary schematic as it may be utilized in an ADAS application. A video stream 101 (also referred to as image stream or (image) data stream) with a data rate amounting to 120 MByte/s is fed to a camera interface 102, which may decompress this video stream 101 and feed the decompressed stream to a low precision image acquisition unit 103 and to a full precision image acquisition unit 104.
The low precision image acquisition unit 103 processes the input stream and provides a stream 109 comprising the full scenery (i.e. 100% of the image) at a reduced resolution. The stream 109 is stored in a memory 105. A next stage may utilize the data of this stream 109 stored in the memory 105 via a stream 107 at a data rate amounting to, e.g., up to 15 MByte/s.
The full precision image acquisition unit 104 processes the input stream and provides a stream 110 comprising a reduced scenery (e.g., 30% or 50% of the scenery of the image contained in the stream 101) at its full resolution. The stream 110 is stored in a memory 106. A next stage may utilize the data of this stream 110 stored in the memory 106 via a stream 108 at a data rate amounting to, e.g., up to 60 MByte/s.
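In software terms, the two acquisition paths of FIG. 1 may be pictured as follows. This is a minimal sketch under assumptions: the patent does not prescribe a particular reduction method, so the factor-of-2 averaging, the rectangular crop window and the function names are illustrative only.

    #include <stdint.h>

    /* Path of unit 103: full scenery at reduced resolution, here by
     * averaging 2x2 pixel blocks (an assumed reduction method). */
    void low_precision_path(const uint16_t *in, uint16_t *out,
                            unsigned w, unsigned h) {
        for (unsigned y = 0; y < h / 2; y++)
            for (unsigned x = 0; x < w / 2; x++) {
                uint32_t sum = in[(2*y)*w + 2*x]   + in[(2*y)*w + 2*x + 1]
                             + in[(2*y+1)*w + 2*x] + in[(2*y+1)*w + 2*x + 1];
                out[y*(w/2) + x] = (uint16_t)(sum / 4);
            }
    }

    /* Path of unit 104: a portion of the scenery at full resolution,
     * here a simple rectangular crop starting at row y0. */
    void full_precision_path(const uint16_t *in, uint16_t *out,
                             unsigned w, unsigned y0, unsigned crop_h) {
        for (unsigned y = 0; y < crop_h; y++)
            for (unsigned x = 0; x < w; x++)
                out[y*w + x] = in[(y0 + y)*w + x];
    }

Feeding the same decompressed input to both functions mirrors the camera interface 102 fanning out to the units 103 and 104.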
Hence, the example shown in FIG. 1 can be used in a single chip ADAS device comprising two acquisition paths: one of low resolution covering the whole image or scenery of the input stream 101 and another of high resolution covering a portion of the whole image or scenery of the input stream 101.
It is noted that the single chip can be a single substrate, die, wafer or silicon.
The approach presented herein in particular allows reducing the load of a processor (e.g., a CPU or signal processor) during image pre-processing while utilizing a reduced amount of memory (in particular RAM). This may be useful for camera-based ADAS applications. The solution is in particular beneficial as upcoming generations of cameras have even higher resolution sensors, thus providing image streams at even higher data rates.
Hence, the solution presented in particular addresses increasing RAM requirements in ADAS applications. Pursuant to the approach presented, the amount of memory required can be significantly reduced (considering the same data rate of video streams). In case of video streams with increasing data rates, the additional amount of memory required can be reduced. It is noted that reducing the amount of RAM required also implies that an area on the silicon for a single chip solution can be reduced and may either result in a smaller single chip or in a single chip of the same size with an increased performance. Both effects are beneficial and cost-efficient.
FIG. 2 shows an exemplary schematic of an image processing architecture. A (e.g., compressed) data stream 201 at a data rate of 120 MByte/s is fed to a camera interface 202, which decompresses the data stream 201 and supplies a (decompressed) data stream 206 at a data rate up to 200 MByte/s to a processing unit 203. It is noted that the data stream 201 may be a compressed or a non-compressed data stream.
The processing unit 203 may provide at least one of the following:
    • at least two compressed data streams utilizing different compression rates;
    • at least one of the compressed data streams is of high resolution;
    • at least one of the compressed data streams is of reduced (e.g., low) resolution;
    • at least one of the compressed data streams comprises the full image or scenery of the data stream 206;
    • at least one of the compressed data streams comprises a portion of the image or scenery of the data stream 206;
    • at least one filtered compressed data stream.
The portion of the image or scenery may refer, e.g., to a part of an image that is above or below the horizon. This may in particular be useful for pictures or videos taken of a road ahead, because the road may often be found in an area below the horizon. Extraction mechanisms and imaging processing algorithms could be used to determine said portion of the image or scenery.
Resolution in this regard may refer to the number of bits used per pixel (also referred to as pixel precision) and/or to the number of pixels used for the image (of a video). For example, a video stream having a resolution of 1280×720 pixels with each pixel having 16 bits may be reduced to, e.g.,
    • 1024×576 pixels with each pixel having 16 bits, i.e. a reduced number of pixels, and/or
    • 1280×720 pixels with each pixel having 8 bits, i.e. a reduced pixel precision.
Of course, the figures above are merely examples; other resolutions as well as aspect ratios can be used accordingly. Also, the reduction of the amount of pixels and the reduction of the pixel precision can be combined, depending on a particular use case scenario.
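The two reductions from the example above may be written out as a short sketch; the nearest-neighbor resampling and the truncation of the low-order byte are assumed implementations, since the text only fixes the input and output formats.

    #include <stdint.h>

    /* Fewer pixels: nearest-neighbor resampling of a 1280x720 frame to
     * 1024x576, keeping 16 bits per pixel. */
    void reduce_pixel_count(const uint16_t *in, uint16_t *out) {
        for (unsigned y = 0; y < 576; y++)
            for (unsigned x = 0; x < 1024; x++)
                out[y*1024 + x] = in[(y*720/576)*1280 + (x*1280/1024)];
    }

    /* Lower pixel precision: keep 1280x720 pixels, reduce each 16-bit
     * sample to 8 bits by discarding the low-order byte. */
    void reduce_pixel_precision(const uint16_t *in, uint8_t *out) {
        for (unsigned long i = 0; i < 1280UL * 720UL; i++)
            out[i] = (uint8_t)(in[i] >> 8);
    }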
As a result, the processing unit 203 supplies several data streams 204, 205 for further processing. The combined data rate (amounting to, e.g., 20 MByte/s to 30 MByte/s) of the data streams 204, 205 may be significantly reduced compared to the data rate of the stream 206. Each of the data streams 204, 205 may preferably be formed of a given type of compressed video according to the capabilities of the processing unit 203.
For example, high precision algorithms based on high precision video can be used so that the output provided by the processing unit 203 can be substantially reduced compared to the data rate of its input (i.e. the data stream 206).
Hence, the solution may in particular provide hardware that can be used for image processing in ADASs, with multiple data flows, each with high data compression. Also, high precision processing can be used for an improved data reduction.
Such hardware may be organized as physically separated pixelpaths, wherein each pixelpath enables preprocessing of video data. Based on such hardware architecture, subsequent processing stages may cope with a reduced amount of memory (e.g., embedded RAM).
FIG. 3 shows an exemplary architecture comprising several pixelpaths that are arranged in parallel. A compressed data stream 301 is fed to a camera interface 302, which decompresses the data stream 301 and supplies a (decompressed) data stream 306 (full resolution, high pixel precision) to processing units 303 to 305, wherein each of the processing units 303 to 305 is part of one pixelpath.
The processing units 303 to 305 are arranged for processing images, in particular by using hardware accelerators in a way that allows reducing RAM requirements in subsequent processing stages.
Each of the processing units 303 to 305 may be arranged to provide at least one of the following:
    • a data stream at a particular compression rate, wherein the compression rates among the processing units 303 to 305 may be at least partially identical or at least partially different;
    • a compressed data stream of high resolution;
    • a compressed data stream of reduced (e.g., low) resolution;
    • a compressed data stream comprising the full image or scenery of the data stream 306;
    • a compressed data stream comprising a portion (e.g., detail) of the image or scenery of the data stream 306;
    • a filtering operation.
Hence, each of the processing units 303 to 305 provides a data stream 307 to 309 for further processing. Because of their compression, the data streams 307 to 309 may together have a significantly lower data rate than the data stream 306. The data streams 307 to 309 may in particular be of different resolution (number of pixels and/or pixel resolution) and/or comprise different portions (details) of the input data stream 306. In addition, each of the data streams 307 to 309 may be subject to at least one filtering operation, e.g., in order to determine an infrared content in the image or in the portion of the image. These possibilities can be combined in order to supply appropriate data streams 307 to 309 that utilize memory in an efficient manner and allow subsequent processing stages to utilize the information obtained from the data stream 306 in a time- and memory-efficient way.
The solution presented can be implemented in or be part of an ADAS-related image processor. This image processor may be able to significantly reduce the output data rate and size so that computing requirements and memory requirements are optimized without or with limited compromise regarding output quality and precision of information passed to a general purpose image processor (e.g. the controller 402 of FIG. 4, see below). The image processor is able to provide several representations of a scene acquired by at least one sensor so that the memory required to store such representations can be significantly reduced.
Each of the processing units 303 to 305 can be part of such image processor providing data for the general purpose image processor. Multiple representations of a scenery based on the data stream 306 may be generated by the processing units 303 to 305. Each of the processing units 303 to 305 may use ADAS algorithms run by the general purpose image processor. An acquisition rate of each representation may optionally be subject to configuration.
The data streams 307 to 309 may be stored in a common shared memory. As an option, each of the data streams 307 to 309 may be stored in different portions of a (shared) memory.
It is another option that at least one data stream 307, 308 or 309 has an output rate that is different from at least one other data stream. In addition, a data stream 307, 308 or 309 may have multiple output rates, i.e., it may provide a compressed image at several moments in time, e.g., every n-th and every m-th clock cycle. In other words, the compressed image may be determined and/or stored once per time interval or several times within such time interval.
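Read in software terms, the multiple-output-rate option amounts to per-path frame decimation; the counter scheme below and the divisor values in its comments are assumptions.

    #include <stdbool.h>

    /* Per-pixelpath frame decimation: a path with divisor n emits its
     * compressed image only for every n-th input frame, so its output
     * rate is a configurable fraction of the input frame rate. */
    typedef struct {
        unsigned divisor;  /* emit every divisor-th frame; must be >= 1 */
        unsigned counter;  /* initialize to 0 */
    } path_rate;

    bool path_should_emit(path_rate *p) {
        bool emit = (p->counter == 0);
        p->counter = (p->counter + 1) % p->divisor;
        return emit;
    }

    /* Example (assumed): two paths with divisor 1 emit every frame,
     * a third path with divisor 2 emits every second frame, matching
     * the t vs. 2t intervals discussed further below. */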
An algorithm utilized by at least one of the processing units 303 to 305 may comprise at least one of the following:
    • a compression of the incoming data stream;
    • region extraction;
    • lossless down-sampling;
    • filtering, in particular convolution filtering; and
    • pixel size reduction.
These algorithms are merely examples; they may be subject to configuration, e.g., by a user or in an automated manner. Accordingly, other algorithms for image processing and/or filtering can be utilized. For example, known image compression algorithms like MPEG or H.26x could be utilized by at least one of the processing units 303 to 305.
According to an example, the processing unit 303 may generate a compressed image representing the entire scenery as supplied by the data stream 306, but with a reduced number of pixels and with a pixel precision of 16 bits per pixel, to extract a position of a horizon in the data stream 306 and to analyze an image brightness. The processing unit 304 may generate a compressed image using a blur filter (based on the entire scenery with a reduced number of pixels and a reduced pixel precision). The processing unit 305 may generate a compressed image based on the use of configurable convolution filter(s) for edge detection (e.g., for lane detection and/or collision warning purposes). The processing units 303 to 305 generate these images at the same or at different time intervals, resulting in data streams 307 to 309 with the same or different data rates (also depending on the size of each image of the respective data stream 307 to 309).
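To make the edge-detection path concrete, the following sketch applies a configurable 3x3 convolution kernel to an 8-bit image; the Sobel coefficients stand in for one possible configuration and are not mandated by the text.

    #include <stdint.h>

    /* 3x3 convolution over an 8-bit image; the one-pixel border is
     * left untouched. The absolute value of the accumulator serves as
     * a simple edge magnitude. */
    void convolve3x3(const uint8_t *in, uint8_t *out,
                     unsigned w, unsigned h, const int k[3][3]) {
        for (unsigned y = 1; y + 1 < h; y++)
            for (unsigned x = 1; x + 1 < w; x++) {
                int acc = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        acc += k[dy+1][dx+1] * in[(y+dy)*w + (x+dx)];
                if (acc < 0)   acc = -acc;
                if (acc > 255) acc = 255;
                out[y*w + x] = (uint8_t)acc;
            }
    }

    /* Horizontal Sobel kernel (an assumed configuration): responds to
     * vertical edges such as lane markings. */
    const int sobel_x[3][3] = {
        { -1, 0, 1 },
        { -2, 0, 2 },
        { -1, 0, 1 },
    };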
The solution suggested allows for easy software partitioning between generic image compression focused on pre-processing and image post-processing by the general purpose processor (next stage processing, see, e.g., controller 402 in FIG. 4).
It is an option that processing, in particular by hardware accelerators, can be done line-based or image-based. For example, line-based processing can be used to reduce the memory required for buffering.
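A small sketch shows why line-based processing reduces buffering: a 2:1 vertical downsampler, for instance, only needs to hold one previous line instead of a whole frame. The line width, the downsampling factor and the callback interface are assumptions.

    #include <stdint.h>
    #include <string.h>

    #define LINE_WIDTH 1280  /* assumed sensor line width */

    static uint16_t prev_line[LINE_WIDTH];  /* the only buffer needed */

    /* Called once per incoming camera line; emits one averaged output
     * line for every pair of input lines via the emit() callback. */
    void on_line(const uint16_t *line, unsigned line_no,
                 void (*emit)(const uint16_t *, unsigned)) {
        if ((line_no & 1) == 0) {
            memcpy(prev_line, line, sizeof prev_line);  /* even line: hold it */
        } else {
            uint16_t out[LINE_WIDTH];
            for (unsigned x = 0; x < LINE_WIDTH; x++)
                out[x] = (uint16_t)(((uint32_t)prev_line[x] + line[x]) / 2);
            emit(out, line_no / 2);  /* odd line: average and emit */
        }
    }

With a 1280-pixel line of 16-bit samples, this buffer amounts to 2.5 KByte, compared to roughly 1.8 MByte for buffering a full 1280×720 frame.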
It is another option that each of the processing units 303 to 305 (or at least some of them) comprises a lossless down-sampling stage.
It is a further option that at least one of the processing units 303 to 305 provides an output at a different image rate, which also has a significant impact on the data rate of each of the data streams 307 to 309. For example, the processing units 304 and 305 may process the data stream 306 and provide an output at a certain clock rate, i.e. after time intervals of a length t. The processing unit 303 may conduct a different operation, which may, e.g., be more complex or be required less often, and it may provide the result of its processing at a different clock rate, e.g., at a time interval amounting to 2t, in this example every second time interval t compared to the results provided by the processing units 304 and 305. Hence, based on the processing capability of each processing unit 303 to 305 and/or the use case scenario, the workload of processing the data stream 306 towards various objectives can be efficiently distributed among the pixelpaths, i.e. the processing units 303 to 305.
It may also be an option to enable processing of at least one of the processing units 303 to 305 by an external signal and/or configuration flag so that the external signal and/or configuration flag may define which processing unit(s) will preprocess the acquired image when a subsequent image is acquired. For example, a change in the light condition determined by a sensor can be used to start or re-start at least one of the processing units 303 to 305.
Beneficially, the examples presented use pixelpaths to provide efficient pre-processing according to use case requirements. The data rate of the input data stream 306 is substantially reduced by the combination of the data streams 307 to 309 supplied by the processing units 303 to 305. Hence, the amount of memory (e.g., surface on a chip used for RAM) can be significantly reduced or—as an alternative—higher data rates (e.g., more pixelpaths) can be processed, which results in either cheaper products or products of higher performance utilizing the same chip area. This is in particular useful for single-chip solutions utilizing embedded RAM.
Example: Conversion Via Interface
FIG. 4 shows an exemplary architecture comprising the pixelpaths of FIG. 3, wherein the data streams 307 to 309 are stored in a memory 401 (in particular an embedded RAM).
An interface 403 and/or an interface 404 are provided to obtain information of the data streams 307 to 309 from the memory 401 and to convert it in a format that is suitable and/or efficient for a controller 402 to process. The controller 402 and the interfaces 403, 404 are coupled via a bus 405. The controller 402 may be or it may comprise a central processing unit, a signal processor, a microcontroller or the like.
For example, the data streams 307 to 309 stored in the memory 401 may have a pixel precision amounting to 8 bits, but the controller 402 may operate efficiently on 16 bits rather than 8 bits. Hence, the interfaces 403 and 404 (respectively or in combination) may provide a conversion from 8 bits to 16 bits so that the controller 402 is able to process the data streams 307 to 309 (or portions thereof) in an efficient manner.
This conversion approach has the advantage that the memory 401 can still be used efficiently, because 8-bit image data is transformed to 16-bit image data by the interfaces 403, 404 and no such 16-bit image data needs to be stored in the memory 401.
Hence, the interfaces 403 and 404 can be designed such that pixel information is stored at the minimum required precision and transformed to a format suitable for the controller 402.
It is noted that at least one such interface can be provided for conversion purposes. Also, several interfaces 403 and 404 can be used. As an option, the interface 403 is arranged to read pixels in the format stored in the memory 401 and to pass these pixels to the controller 402 without any conversion. It is also an option that the interface 404 is arranged to read pixels stored in the memory 401, convert them into the format suitable for the controller 402 and forward the converted pixels to the controller 402.
The conversion may comprise an up-conversion, e.g., from 8 bits to 16 bits or from 4 bits to 16 bits. The conversion may also comprise a down-conversion, e.g., from 32 bits to 16 bits or the like.
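A minimal sketch of such a converting interface follows, under the assumption that replicating the high byte is an acceptable 8-to-16-bit mapping; the patent leaves the exact conversion open.

    #include <stdint.h>

    /* Sketch of the converting interface 404: pixels stay at 8-bit
     * precision in the memory 401 and are widened to 16 bits only on
     * the way to the controller 402, so no 16-bit copy is ever stored.
     * Replicating the high byte maps 0x00..0xFF onto the full 16-bit
     * range (an assumed mapping). */
    void read_and_upconvert(const uint8_t *ram, uint16_t *to_controller,
                            unsigned n) {
        for (unsigned i = 0; i < n; i++)
            to_controller[i] = (uint16_t)((ram[i] << 8) | ram[i]);
    }

    /* A down-conversion, e.g., 32 to 16 bits, keeps the significant
     * high-order bits instead. */
    void read_and_downconvert(const uint32_t *ram, uint16_t *to_controller,
                              unsigned n) {
        for (unsigned i = 0; i < n; i++)
            to_controller[i] = (uint16_t)(ram[i] >> 16);
    }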
Further Advantages and Examples
The solution presented enables a beneficial hardware configuration to cover, e.g., camera-based ADAS with reduced RAM requirements. For example, a sensor comprising 1280×720 pixels may require a 1.8 MByte RAM buffer. By reducing the data rate of the data stream, the RAM required may be reduced to 690 KByte, which saves more than 1.1 MByte of RAM. This corresponds to roughly 4 square millimeters of silicon area in a 40 nm technology and amounts to a reduction to about one third.
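The buffer arithmetic can be checked directly. The split between the two reduced paths below is an assumed configuration that lands near the stated 690 KByte; the text does not spell out the exact reductions.

    #include <stdio.h>

    /* Buffer sizes behind the RAM savings. The full-frame figure
     * follows from the sensor geometry; the reduced paths are an
     * assumed split: full scene at a quarter of the pixel count plus
     * a full-precision strip near the horizon. */
    int main(void) {
        unsigned long full  = 1280UL * 720 * 2;  /* 1,843,200 B, ~1.8 MByte */
        unsigned long path1 =  640UL * 360 * 2;  /*   460,800 B */
        unsigned long path2 = 1280UL *  90 * 2;  /*   230,400 B */
        printf("full frame buffer: %lu B\n", full);
        printf("reduced buffers  : %lu B (roughly one third)\n", path1 + path2);
        return 0;
    }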
Using hardware accelerators beneficially allows for high energy efficiency, which may also increase the lifetime of attached components like, e.g., (signal) processors.
The solution presented further enables using the same silicon to cover radar and image processing, as 1 MByte of RAM may suffice for processing radar and image data.
The solution presented may be applicable in the field of personal transportation or in industrial applications. It is in particular useful when implemented in moving entities, e.g., vehicles of any kind, in particular where it is of interest to detect and/or monitor their surroundings and/or determine particularities of their surroundings.
The architecture suggested could be used in combination with an ADAS preprocessor allowing a significant memory reduction. This can in particular be beneficial with regard to at least one of the following applications: high beam assist, lane departure warning, forward collision warning.
The approach can also be utilized for optimizing or reducing power consumption, because of its low memory requirements.
At least one of the following examples and/or embodiments may be considered innovative. They might be combined with other aspects or embodiments as described. Any embodiment or design described herein is not necessarily to be construed as preferred or advantageous over other embodiments or designs.
A device for processing an image data stream is suggested,
    • comprising a first processing unit and a second processing unit for receiving the image data stream;
    • wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream;
    • wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream.
In an embodiment, the first processing unit is arranged to perform at least one of the following:
    • generate the first data stream from the image data stream at a first compression rate;
    • generate the first data stream from the image data stream with a first resolution;
    • generate the first data stream from the image data stream based on a portion of the scenery of the image data stream;
    • generate the first data stream from the image data stream based on a filtering operation.
In an embodiment, the first compression rate results in a lossless compression or in a lossy compression of the image data stream.
In an embodiment, the first resolution comprises at least one of the following:
    • a reduced amount of pixels compared to the images of the image data stream;
    • a reduced amount of pixel precision compared to the images of the image data stream;
    • a reduced amount of images per time compared to the images per time of the image data stream.
In an embodiment, the second processing unit is arranged to perform at least one of the following:
    • generate the second data stream from the image data stream at a second compression rate;
    • generate the second data stream from the image data stream with a second resolution;
    • generate the second data stream from the image data stream based on a portion of the scenery of the image data stream;
    • generate the second data stream from the image data stream based on a filtering operation.
In an embodiment, the second compression rate results in a lossless compression or in a lossy compression of the image data stream.
In an embodiment, the second resolution comprises at least one of the following:
    • a reduced amount of pixels compared to the images of the image data stream;
    • a reduced amount of pixel precision compared to the images of the image data stream;
    • a reduced amount of images per time compared to the images per time of the image data stream.
In an embodiment, the portion of the scenery of the image data stream is determined via image processing, in particular region extraction.
In an embodiment, the filtering operation comprises at least one of the following:
    • a low-pass filtering;
    • a high-pass filtering;
    • a blur filtering;
    • a convolution filtering.
In an embodiment, the first data stream is determined based on the image data stream by processing images of the image data stream line by line.
In an embodiment, the second data stream is determined based on the image data stream by processing images of the image data stream line by line.
In an embodiment, an acquisition rate of the first processing unit and an acquisition rate of the second processing unit are each configurable.
In an embodiment, the first data stream has a different number of images per time interval compared to the image data stream.
In an embodiment, the second data stream has a different number of images per time interval compared to the image data stream.
In an embodiment, the device comprises a memory for at least partially storing the first data stream and the second data stream.
In an embodiment, said memory is a shared memory, wherein the first processing unit is associated with a first portion of the shared memory and the second processing unit is associated with a second portion of the shared memory.
In an embodiment, said memory comprises a random access memory.
In an embodiment, the first processing unit, the second processing unit and the random access memory are arranged on a single chip.
In an embodiment, the device comprises at least one interface that is arranged for accessing the memory and for converting data of the first data stream and of the second data stream into a predefined format.
In an embodiment, the predefined format comprises a number of bits used by a controller that is arranged for processing images from the first data stream and the second data stream.
A device is suggested
    • comprising a first processing unit coupled to an input image data stream, wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the input image data stream;
    • comprising a second processing unit coupled to the input image data stream, wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream;
    • comprising a memory for at least partially storing the first data stream and the second data stream.
In an embodiment, the components of the image pre-processing stage are arranged on a single chip, wherein the memory is a random access memory arranged on the single chip.
A method is suggested for processing an image data stream, comprising the following steps:
    • providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream;
    • providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream;
    • wherein the first data stream is provided by a first processing unit and the second data stream is provided by a second processing unit.
In an embodiment, the method comprises the step of:
    • at least partially storing the first data stream and the second data stream in a memory.
In an embodiment, the step of providing the first data stream comprises at least one of the following:
    • generating the first data stream from the image data stream at a first compression rate;
    • generating the first data stream from the image data stream with a first resolution;
    • generating the first data stream from the image data stream based on a portion of the scenery of the image data stream;
    • generating the first data stream from the image data stream based on a filtering operation.
In an embodiment, the first resolution comprises at least one of the following:
    • a reduced amount of pixels compared to the images of the image data stream;
    • a reduced amount of pixel precision compared to the images of the image data stream;
    • a reduced amount of images per time compared to the images per time of the image data stream.
In an embodiment, the step of providing the second data stream comprises at least one of the following:
    • generating the second data stream from the image data stream at a second compression rate;
    • generating the second data stream from the image data stream with a second resolution;
    • generating the second data stream from the image data stream based on a portion of the scenery of the image data stream;
    • generating the second data stream from the image data stream based on a filtering operation.
In an embodiment, the second resolution comprises at least one of the following:
    • a reduced amount of pixels compared to the images of the image data stream;
    • a reduced amount of pixel precision compared to the images of the image data stream;
    • a reduced amount of images per time compared to the images per time of the image data stream.
A system is provided for processing an image data stream, comprising:
    • means for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream;
    • means for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream.
Although various exemplary embodiments of the invention have been disclosed, it will be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from its spirit and scope. It will be obvious to those reasonably skilled in the art that other components performing the same functions may be suitably substituted. It should be mentioned that features explained with reference to a specific figure may be combined with features of other figures, even in those cases in which this has not been explicitly mentioned. Further, the methods of the invention may be achieved either in all-software implementations, using the appropriate processor instructions, or in hybrid implementations that utilize a combination of hardware logic and software logic to achieve the same results. Such modifications to the inventive concept are intended to be covered by the appended claims.

Claims (29)

What is claimed is:
1. A device for processing an image data stream, comprising:
a first processing unit and a second processing unit for receiving the image data stream;
wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream, the first data stream including a full or partial representation of a scene associated with the image data stream, the full or partial representation of the scene having a reduced resolution with respect to the image data stream;
wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream, the second data stream including a partial representation of the scene associated with the image data stream, the partial representation of the scene having a full resolution or a reduced resolution with respect to the image data stream.
2. The device according to claim 1, wherein the first processing unit is arranged to provide at least one of the following:
generate the first data stream from the image data stream at a first compression rate;
generate the first data stream from the image data stream with a first resolution;
generate the first data stream from the image data stream based on a portion of the scenery of the image data stream;
generate the first data stream from the image data stream based on a filtering operation.
3. The device according to claim 2, wherein the first compression rate results in a lossless compression or in a lossy compression of the image data stream.
4. The device according to claim 2, wherein the first resolution comprises at least one of the following:
a reduced amount of pixels compared to the images of the image data stream;
a reduced amount of pixel precision compared to the images of the image data stream;
a reduced amount of images per time compared to the images per time of the image data stream.
5. The device according to claim 2, wherein the filtering operation comprises at least one of the following:
a low-pass filtering;
a high-pass filtering;
a blur filtering;
a convolution filtering.
6. The device according to claim 1, wherein the second processing unit is arranged to provide at least one of the following:
generate the second data stream from the image data stream at a second compression rate;
generate the second data stream from the image data stream with a second resolution;
generate the second data stream from the image data stream based on a portion of the scenery of the image data stream;
generate the second data stream from the image data stream based on a filtering operation.
7. The device according to claim 6, wherein the second compression rate results in a lossless compression or in a lossy compression of the image data stream.
8. The device according to claim 6, wherein the second resolution comprises at least one of the following:
a reduced amount of pixels compared to the images of the image data stream;
a reduced amount of pixel precision compared to the images of the image data stream;
a reduced amount of images per time compared to the images per time of the image data stream.
9. The device according to claim 6, wherein the filtering operation comprises at least one of the following:
a low-pass filtering;
a high-pass filtering;
a blur filtering;
a convolution filtering.
10. The device according to claim 1, wherein the first data stream is determined based on the image data stream by processing images of the image data stream line by line.
11. The device according to claim 1, wherein the second data stream is determined based on the image data stream by processing images of the image data stream line by line.
12. The device according to claim 1, wherein an acquisition rate of the first processing unit and an acquisition rate of the second processing unit are each configurable.
13. The device according to claim 1, wherein the first data stream has a different number of images per time interval compared to the image data stream.
14. The device according to claim 1, wherein the second data stream has a different number of images per time interval compared to the image data stream.
15. The device according to claim 1, comprising a memory for at least partially storing the first data stream and the second data stream.
16. The device according to claim 15, wherein said memory is a shared memory, wherein the first processing unit is associated with a first portion of the shared memory and the second processing unit is associated with a second portion of the shared memory.
17. The device according to claim 15, wherein said memory comprises a random access memory.
18. The device according to claim 17, wherein the first processing unit, the second processing unit and the random access memory are arranged on a single chip.
19. The device according to claim 15, comprising at least one interface that is arranged for accessing the memory and for converting data of the first data stream and of the second data stream into a predefined format.
20. The device according to claim 19, wherein the predefined format comprises a number of bits used by a controller that is arranged for processing images from the first data stream and the second data stream.
21. A device, comprising:
a first processing unit coupled to an input image data stream, wherein the first processing unit is arranged for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the input image data stream, the first data stream including a full or partial representation of a scene associated with the input image data stream, the full or partial representation of the scene having a reduced resolution with respect to the image data stream;
a second processing unit coupled to the input image data stream, wherein the second processing unit is arranged for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream, the second data stream including a partial representation of the scene associated with the input image data stream, the partial representation of the scene having a full resolution or a reduced resolution with respect to the image data stream;
a memory for at least partially storing the first data stream and the second data stream.
22. The device according to claim 21, wherein the processing units are arranged on a single chip and wherein the memory is a random access memory arranged on the single chip.
23. A method for processing an image data stream, comprising:
providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream, the first data stream including a full or partial representation of a scene associated with the image data stream, the full or partial representation of the scene having a reduced resolution with respect to the image data stream;
providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream, the second data stream including a partial representation of the scene associated with the image data stream, the partial representation of the scene having a full resolution or a reduced resolution with respect to the image data stream;
wherein the first data stream is provided by a first processing unit and the second data stream is provided by a second processing unit.
24. The method according to claim 23, further comprising:
at least partially storing the first data stream and the second data stream in a memory.
25. The method according to claim 23, wherein providing the first data stream comprises at least one of the following:
generating the first data stream from the image data stream at a first compression rate;
generating the first data stream from the image data stream with a first resolution;
generating the first data stream from the image data stream based on a portion of the scenery of the image data stream;
generating the first data stream from the image data stream based on a filtering operation.
26. The method according to claim 25, wherein the first resolution comprises at least one of the following:
a reduced amount of pixels compared to the images of the image data stream;
a reduced amount of pixel precision compared to the images of the image data stream;
a reduced amount of images per time compared to the images per time of the image data stream.
27. The method according to claim 23, wherein providing the second data stream comprises at least one of the following:
generating the second data stream from the image data stream at a second compression rate;
generating the second data stream from the image data stream with a second resolution;
generating the second data stream from the image data stream based on a portion of the scenery of the image data stream;
generating the second data stream from the image data stream based on a filtering operation.
28. The method according to claim 27, wherein the second resolution comprises at least one of the following:
a reduced amount of pixels compared to the images of the image data stream;
a reduced amount of pixel precision compared to the images of the image data stream;
a reduced amount of images per time compared to the images per time of the image data stream.
29. A system for processing an image data stream, comprising:
means for providing a first data stream, wherein the first data stream has a reduced bandwidth compared to the image data stream, the first data stream including a full or partial representation of a scene associated with the image data stream, the full or partial representation of the scene having a reduced resolution with respect to the image data stream;
means for providing a second data stream, wherein the second data stream has a reduced bandwidth compared to the image data stream, the second data stream including a partial representation of the scene associated with the image data stream, the partial representation of the scene having a full resolution or a reduced resolution with respect to the image data stream.
US13/916,446 2013-06-12 2013-06-12 Device, method and system for processing an image data stream Active 2035-11-20 US10796617B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US13/916,446 US10796617B2 (en) 2013-06-12 2013-06-12 Device, method and system for processing an image data stream
JP2014119183A JP5902234B2 (en) 2013-06-12 2014-06-10 Apparatus, method and system for processing an image data stream
KR1020140070922A KR101642181B1 (en) 2013-06-12 2014-06-11 Device, method and system for processing an image data stream
DE102014008893.6A DE102014008893A1 (en) 2013-06-12 2014-06-12 Apparatus, method and system for processing an image data stream
CN201410262419.4A CN104244014B (en) 2013-06-12 2014-06-12 For handling the device, method and system of image data stream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/916,446 US10796617B2 (en) 2013-06-12 2013-06-12 Device, method and system for processing an image data stream

Publications (2)

Publication Number Publication Date
US20140368514A1 US20140368514A1 (en) 2014-12-18
US10796617B2 true US10796617B2 (en) 2020-10-06

Family

ID=52009860

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/916,446 Active 2035-11-20 US10796617B2 (en) 2013-06-12 2013-06-12 Device, method and system for processing an image data stream

Country Status (5)

Country Link
US (1) US10796617B2 (en)
JP (1) JP5902234B2 (en)
KR (1) KR101642181B1 (en)
CN (1) CN104244014B (en)
DE (1) DE102014008893A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11405565B2 (en) * 2014-05-08 2022-08-02 Sony Group Corporation Information processing device and information processing method

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9904595B1 (en) 2016-08-23 2018-02-27 Texas Instruments Incorporated Error correction hardware with fault detection
JP2018158545A (en) * 2017-03-23 2018-10-11 富士ゼロックス株式会社 Droplet discharge device
DE102017108016A1 (en) * 2017-04-13 2018-10-18 Carl Zeiss Microscopy Gmbh Microscope system and method for operating a microscope system
CN107333107A (en) * 2017-07-21 2017-11-07 广东美的制冷设备有限公司 Monitor image pickup method, device and its equipment
US11215999B2 (en) * 2018-06-20 2022-01-04 Tesla, Inc. Data pipeline and deep learning system for autonomous driving

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100866482B1 (en) 2004-01-29 2008-11-03 삼성전자주식회사 Monitoring system and method for using the same
JP2009048620A (en) * 2007-07-20 2009-03-05 Fujifilm Corp Image processor, image processing method, and program
KR101067599B1 (en) * 2010-01-18 2011-09-27 (주)나노포인트 Vehicle black box device that transmits low and high resolution video information to remote locations

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6097435A (en) * 1997-01-31 2000-08-01 Hughes Electronics Corporation Video system with selectable bit rate reduction
US20010040581A1 (en) * 2000-02-04 2001-11-15 Alliance Semiconductor Corporation Shared memory graphics accelerator system
JP2003348542A (en) 2002-05-27 2003-12-05 Canon Inc Digital video camcorder
US20040141067A1 (en) * 2002-11-29 2004-07-22 Fujitsu Limited Picture inputting apparatus
JP2004320502A (en) 2003-04-17 2004-11-11 Matsushita Electric Ind Co Ltd Photographing device
US20050190274A1 (en) 2004-02-27 2005-09-01 Kyocera Corporation Imaging device and image generation method of imaging device
US8725990B1 (en) * 2004-11-15 2014-05-13 Nvidia Corporation Configurable SIMD engine with high, low and mixed precision modes
US20070055803A1 (en) * 2005-09-02 2007-03-08 Fuchs Kenneth C Method and apparatus for enforcing independence of processors on a single IC
US20090022403A1 (en) 2007-07-20 2009-01-22 Fujifilm Corporation Image processing apparatus, image processing method, and computer readable medium
KR20100004546A (en) 2008-07-04 2010-01-13 대한민국(관리부서 : 농림수산식품부 국립수의과학검역원) Miniarray for differential diagnosis between the porcine circovirus type 1 and 2
US20100232497A1 (en) * 2009-03-10 2010-09-16 Macinnis Alexander G Lossless and near-lossless image compression
US20120032960A1 (en) 2009-04-20 2012-02-09 Fujifilm Corporation Image processing apparatus, image processing method, and computer readable medium
US20100278271A1 (en) * 2009-05-01 2010-11-04 Maclnnis Alexander G Method And System For Adaptive Rate Video Compression And Transmission
US8917778B2 (en) 2009-06-11 2014-12-23 Sony Corporation Image processing apparatus and image processing method
US20120257079A1 (en) * 2011-04-06 2012-10-11 Dolby Laboratories Licensing Corporation Multi-Field CCD Capture for HDR Imaging
US20120314948A1 (en) 2011-06-07 2012-12-13 Qualcomm Incorporated Multiple description coding with plural combined diversity
US20130021504A1 (en) * 2011-07-20 2013-01-24 Broadcom Corporation Multiple image processing
US20130088600A1 (en) 2011-10-05 2013-04-11 Xerox Corporation Multi-resolution video analysis and key feature preserving video reduction strategy for (real-time) vehicle tracking and speed enforcement systems
US20130265311A1 (en) * 2012-04-04 2013-10-10 Samsung Electronics Co., Ltd. Apparatus and method for improving quality of enlarged image

Also Published As

Publication number Publication date
KR101642181B1 (en) 2016-07-22
JP5902234B2 (en) 2016-04-13
CN104244014A (en) 2014-12-24
KR20140145090A (en) 2014-12-22
DE102014008893A1 (en) 2014-12-18
JP2015002558A (en) 2015-01-05
CN104244014B (en) 2019-04-30
US20140368514A1 (en) 2014-12-18

Similar Documents

Publication Publication Date Title
US10796617B2 (en) Device, method and system for processing an image data stream
US20220244388A1 (en) Imaging device and electronic device
US11910123B2 (en) System for processing image data for display using backward projection
US10445402B1 (en) Fast and energy-efficient region of interest pooling for object detection with convolutional neural network
US11042770B2 (en) Artificial intelligence based image data processing method and image sensor
US10148938B2 (en) Vehicle-mounted image recognition device to set a stereoscopic-vision and monocular-vision image areas
US10784892B1 (en) High throughput hardware unit providing efficient lossless data compression in convolution neural networks
US11433809B2 (en) Vehicle vision system with smart camera video output
US11508156B2 (en) Vehicular vision system with enhanced range for pedestrian detection
WO2018061740A1 (en) Image generation device, image generation method, program, recording medium, and image processing system
US20220294467A1 (en) Processing of lossy-compressed adas sensor data for driver assistance systems
US11620816B1 (en) Hardware efficient RoI align
JP5872171B2 (en) Camera system
US11302035B2 (en) Processing images using hybrid infinite impulse response (IIR) and finite impulse response (FIR) convolution block
KR101765290B1 (en) Image pre-processor for vehicle and image pre-process method using the same
US10917655B2 (en) Video data processing using an image signatures algorithm to reduce data for visually similar regions
JP6669062B2 (en) Image processing device
US20220165102A1 (en) Sensing apparatus and control system for automotive
US20240073390A1 (en) Image processing device and image processing method
JP2023061906A (en) Method for encoding video stream
JP2010219865A (en) Image processing system

Legal Events

Date Code Title Description
AS Assignment

Owner name: INFINEON TECHNOLOGIES AG, GERMANY

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROGER, ANDRE';YGNACE, ROMAIN;SIGNING DATES FROM 20130620 TO 20130702;REEL/FRAME:030908/0311

STCV Information on status: appeal procedure

Free format text: ON APPEAL -- AWAITING DECISION BY THE BOARD OF APPEALS

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4