GB2593940A - A method for low bandwidth automotive video streaming - Google Patents

A method for low bandwidth automotive video streaming

Info

Publication number
GB2593940A
GB2593940A (Application GB2008291.3A)
Authority
GB
United Kingdom
Prior art keywords
control unit
frames
video frames
pixel
differences
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2008291.3A
Other versions
GB202008291D0 (en)
Inventor
Mihai Bahnareanu Andrei
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Plastic Omnium Lighting Systems GmbH
Original Assignee
Osram Continental GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Osram Continental GmbH filed Critical Osram Continental GmbH
Publication of GB202008291D0
Publication of GB2593940A

Classifications

    • H04N 19/127 Prioritisation of hardware or computational resources
    • H04N 19/503 Predictive coding involving temporal prediction
    • B60Q 1/04 Optical signalling or lighting devices primarily intended to illuminate the way ahead, the devices being headlights
    • B60Q 1/1407 General lighting circuits comprising dimming circuits
    • H04N 19/132 Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
    • H04N 19/136 Incoming video signal characteristics or properties
    • H04N 19/156 Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
    • H04N 19/164 Feedback from the receiver or from the transmission channel
    • H04N 19/172 Adaptive coding where the coding unit is an image region, the region being a picture, frame or field
    • H04N 19/176 Adaptive coding where the coding unit is an image region, the region being a block, e.g. a macroblock

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

A method for controlling a lamp, particularly a head lamp (head light), of a vehicle, e.g. a car. The method comprises: receiving, by a first control unit, video frames destined for a lamp; encoding, by the first control unit of the vehicle, the video frames by computing respective differences between consecutive frames of the video frames; transmitting the encoded video frames via a bus to a second control unit; decoding, by the second control unit, the video frames transmitted by the first control unit; and controlling the lamp according to the decoded video frames. Video frame differences may be computed based on corresponding subdivided parts of consecutive frames, e.g. symmetric pixel squares in the form of subdivided pixel squares, each containing different value information of the respective differences; a reduced pixel range comprising only differences exceeding a pixel difference threshold may be used. The bit size per pixel may be reduced in the difference video frame during high data rate periods, and bit packing may be used. Partial frames may be sent based on the refresh rate, with the highest differences sent first. Fading, corresponding to the differences, may be applied to partial frames based on their number.

Description

A METHOD FOR LOW BANDWIDTH AUTOMOTIVE VIDEO STREAMING

DESCRIPTION
The invention relates to concepts for controlling light, in particular a head lamp of a car, and applications thereof and in particular to a method for controlling a lamp of a vehicle, for example a matrix lamp, an LED matrix lamp or a digital (micro-)mirror device (DMD).
For current implementations, a video stream is transferred in an uncompressed form using a high-speed custom interface. This is a proprietary interface that requires special serializer/de-serializer chips at both ends plus dedicated coaxial connectors and cables. This custom interface, while offering enough bandwidth and reliability, is uncommon and considerably expensive.
In a case where there is not enough bandwidth to transfer the video in an uncompressed manner at typical resolutions, a video codec is required to encode and decode the video at both ends of the line.
There are many video codecs available that could compress the video to small enough data rates. However, they are usually not suitable.
For example, the discrete cosine compression used by many codecs creates leaks between pixels in high-contrast edge areas. This may be unacceptable for homologation purposes.
Very low latency requirements prevent the usage of buffers. Most codecs rely on frame buffering to deal with high peaks of data rate.
Some codecs are too central processing unit (CPU) intensive or require hardware support.
There may be a demand to provide concepts for controlling light, in particular a head lamp of a car, which may reduce at least one of these disadvantages.
Such a demand may be satisfied by a method for controlling a lamp. The method comprises receiving, by a first control unit, video frames destined for a lamp of a vehicle. The lamp may be any one of a matrix lamp, an LED matrix lamp or a digital (micro-)mirror device (DMD). Further, the method comprises encoding, by the first control unit, the video frames by computing respective differences between consecutive frames of the video frames. Further, the method comprises transmitting, by the first control unit, the encoded video frames via a bus to a second control unit. Further, the method comprises decoding, by the second control unit, the video frames transmitted by the first control unit. Further, the method comprises controlling the lamp of the vehicle according to the decoded video frames. The matrix lamp as described herein may be also considered as a light source with pixel level control.
Thus, latency requirements can be complied with.
Particularly advantageous configurations can be found in the dependent claims.
For example, the bus may be a controller area network (CAN) bus. The first and the second control units may be dedicated electronic control units (ECUs) of the CAN bus connecting the first and second ECUs. Further, the method is not limited to CAN; it can also work with UART over CAN, Ethernet, FlexRay or other serial automotive buses.
Thus, the method may be easily incorporated into a standard car CAN bus system.
For each device, the data in a frame may be transmitted sequentially, but in such a way that if more than one device transmits at the same time, the highest-priority device is able to continue while the others back off.
Frames may be received by all devices, including by the transmitting device. In particular, layers 1 and 2 may be defined by the CAN bus. The CAN bus may operate according to a multi-master principle; for example, it connects several control units of equal status, in particular the first and second control units.
A Carrier Sense Multiple Access/Collision Detection (CSMA/CD) procedure may resolve collisions (simultaneous bus access) without damaging the winning, higher-priority message. For this purpose, the bits may be dominant or recessive, depending on their state (one dominant bit overwrites a recessive one). Logic 1 is recessive (wired-AND). The data may be non-return-to-zero (NRZ) coded, with bit stuffing for continuous synchronization even of bus stations with less stable oscillators. Cyclic redundancy checking is used to protect the data. The bus may be embodied either with copper lines or via fibre optics.
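By way of illustration, the net effect of this bitwise wired-AND arbitration can be sketched in C as follows; the identifiers and the simplification of comparing whole IDs at once are assumptions of this sketch (on a real bus, arbitration happens bit by bit while the frame is being sent):

```c
#include <stdint.h>
#include <stdio.h>

/* Sketch of CAN arbitration: a dominant bit (0) overwrites a
 * recessive bit (1), so the node sending the numerically lowest
 * identifier keeps the bus while all others back off. */
static uint32_t arbitrate(const uint32_t *ids, int n)
{
    uint32_t winner = ids[0];
    for (int i = 1; i < n; i++)
        if (ids[i] < winner)   /* lower ID = dominant bits earlier */
            winner = ids[i];
    return winner;
}

int main(void)
{
    uint32_t ids[] = { 0x1A4u, 0x0F0u, 0x2C1u };
    printf("winning CAN ID: 0x%03X\n", arbitrate(ids, 3)); /* 0x0F0 */
    return 0;
}
```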
In the case of copper lines, the CAN bus may operate with two twisted wires, CAN HIGH and CAN LOW (symmetrical signal transmission). CAN GND (ground) as a third wire is optional, but may be combined with a fourth wire for a 5 V power supply.
At higher data rates (high-speed CAN), the voltage swing between the two states may be relatively small: in the recessive quiescent state, the differential voltage may be zero (both lines about 2.5 V above ground); in the dominant state it may be at least 2 V (CAN HIGH > 3.5 V, CAN LOW < 1.5 V).
The low-speed CAN, which is suitable for longer distances, may use a voltage swing of 7 V by setting the recessive quiescent levels to 5 V (CAN LOW) and 0 V (CAN HIGH). If one of the two lines fails, the voltage of the other line can be evaluated against ground. For slower buses, a single-wire system with the body as ground may be sufficient. It may be embodied as a two-wire system, but in the event of a wire break, it may use single-wire operation as a fallback in order to continue operation.
Each LED of the LED matrix lamp of the vehicle may be in the form of at least one individually packaged LED or in the form of at least one LED chip comprising one or more light-emitting diodes. Several LED chips can be mounted on a common substrate ("submount") and form one LED, or be mounted individually or together, for example on a board (e.g. FR4, metal core board, etc.) ("CoB", chip on board). The at least one LED can be equipped with at least one separate and/or common optics for beam guidance, for example with at least one Fresnel lens or collimator. Instead of or in addition to inorganic LEDs, e.g. based on AlInGaN or InGaN or AlInGaP, organic LEDs (OLEDs, e.g. polymer OLEDs) can generally also be used. The LED chips can be direct emitting or have an upstream phosphor. Alternatively, the light-emitting component can be a laser diode or a laser diode array. It is also conceivable to provide an OLED light layer or several OLED light layers or an OLED light area. The emission wavelengths of the light-emitting components can be in the ultraviolet, visible or infrared spectral range. The light-emitting components can also be equipped with their own converter. Preferably, the LED chips emit white light in the standardized ECE white field of the automotive industry, for example realized by a blue emitter and a yellow/green converter.
The DMD as described with respect to the lamp may have a large number of mirrors (micro mirrors) which can be tilted at high frequency between two mirror positions and can each form a light pixel. The DMD is irradiated by a light source. Usually, in a first position of a mirror, light incident on the mirror may be emitted from the vehicle headlight, and in a second position it may be directed to an absorber surface.
Further, the differences may be computed by respectively comparing corresponding subdivided parts for each of the consecutive video frames.
Thus, latency issues can be mitigated.
In a further advantageous embodiment, the subdivided parts associated with consecutive video frames may construct a difference video frame.
The subdivided parts may also be symmetric pixel squares in the form of subdivided pixel squares.
For example, each subdivided pixel square of the subdivided pixel squares contains different value information of the respective difference. The different value information of the respective difference may be in a reduced pixel value range comprising only differences which exceed a pixel difference threshold.
In particular, during high data rate periods, a bit size per pixel may be reduced in the difference video frame.
In a further advantageous embodiment, bit packing is applied on the encoded video frames.
In an exemplary embodiment, the encoded video frames are sent as partial frames of a time frame established by a refresh rate. The partial frames may be arranged such that the partial frames with the highest differences are sent first.
The method may further comprise applying, by the second control unit, fading to every partial frame of the partial frames, in dependence on the number of partial frames. An amount of fading may thereby correspond to or be in line with the differences included along the partial frames.
Specifically, remaining partial frames may be dropped when indicated by the refresh rate.
The demand stated above may also be satisfied by a system for controlling a lamp of a vehicle. The system comprises a lamp of the vehicle. The system further comprises first and second control units. The first control unit is configured to receive video frames destined for the lamp of the vehicle. The first control unit is configured to encode the video frames by computing respective differences between consecutive frames of the video frames. Further, the first control unit is further configured to transmit the encoded video frames via a bus to the second control unit. The second control unit is configured to decode the video frames transmitted by the first control unit. Further, the second control unit is configured to control the lamp of the vehicle according to the decoded video frames.
The system may be implemented in a car. In particular, the lamp of the vehicle may be a head lamp of the car. The system may be included in a car, especially in an (already existing) CAN or CAN FD network, or at least a part of it.
According to an embodiment, a computer program product is provided which comprises program code portions for carrying out the method as described above when the computer program product is executed on one or more processing units.
Even if some of the aspects described above have been described in reference to the method, these aspects may also apply to the system. Likewise, the aspects described above in relation to the system may be applicable in a corresponding manner to the method.
It is clear to a person skilled in the art that the statements set forth herein may be implemented using hardware circuits, software means or a combination thereof. The software means can be related to programmed microprocessors or a general computer, an ASIC (Application Specific Integrated Circuit) and/or DSPs (Digital Signal Processors).
For example, the first and second control units, the CAN system and CAN bus as well as the system itself may be implemented partially as a computer, a logical circuit, an FPGA (Field Programmable Gate Array), a processor (for example, a microprocessor, microcontroller (µC) or an array processor), a core, a CPU (Central Processing Unit), an FPU (Floating Point Unit), an NPU (Numeric Processing Unit), an ALU (Arithmetic Logical Unit), a coprocessor (a further microprocessor for supporting a main processor (CPU)), a GPGPU (General Purpose Computation on Graphics Processing Unit), a multi-core processor (for parallel computing, such as simultaneously performing arithmetic operations on multiple main processor(s) and/or graphical processor(s)) or a DSP.
It is further clear to the person skilled in the art that even if the herein-described details will be described in terms of a method, these details may also be implemented or realized in a suitable device, a computer processor or a memory connected to a processor, wherein the memory can be provided with one or more programs that perform the method when executed by the processor. Therefore, methods like swapping and paging can be deployed.
It is also to be understood that the terms used herein are for the purpose of describing individual embodiments and are not intended to be limiting. Unless otherwise defined, all technical and scientific terms used herein have the meaning which corresponds to the general understanding of the skilled person in the relevant technical field of the present disclosure; they are to be understood neither too broadly nor too narrowly. If technical terms are used incorrectly in the present disclosure, and thus do not reflect the technical concept of the present disclosure, these should be replaced by technical terms which convey a correct understanding to the skilled person in the relevant technical field of the present disclosure. The general terms used herein are to be construed based on the definition in the lexicon or the context. A too narrow interpretation should be avoided.
Although terms like "first" or "second" etc. may be used to describe different components or features, these components or features are not to be limited to these terms.
With the above terms, only one component is to be distinguished from the other. For example, a first component may be referred to as a second component without departing from the scope of the present disclosure; and a second component may also be referred to as a first component. The term "and/or" includes both combinations of the plurality of related features as well as any individual feature of the described plurality of features.
In the present case, if a component is "connected to", "in communication with" or "accesses" another component, this may mean that it is directly connected to or directly accesses the other component; however, it should be noted that another component may be therebetween. If, on the other hand, a component is "directly connected" to another component or "directly accesses" the other component, it is to be understood that no further components are present therebetween.
Other objects, features, advantages and applications will become apparent from the following description of non-limiting embodiments regarding the accompanying drawings. The same or similar components are always provided with the same or similar reference symbols. In the description of the present disclosure, detailed explanations of known connected functions or constructions are omitted, insofar as they are unnecessarily distracting from the present disclosure. In the drawings, all described and/or illustrated features, alone or in any combination, form the subject matter disclosed therein, irrespective of their grouping in the claims or their relations/references. The dimensions and proportions of components or parts shown in the figures are not necessarily to scale; these dimensions and proportions may differ from illustrations in the figures and implemented embodiments. In particular, in the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.
In the following, the invention shall be explained in more detail on the basis of embodiments. The figures show:
Fig. 1a a schematic illustration of transition frames;
Fig. 1b a schematic illustration of different methods to encode a square;
Fig. 2 a schematic illustration of a packed video frame;
Fig. 3a a schematic illustration of data size reduction;
Fig. 3b a schematic illustration of bit packing; and
Fig. 4 a schematic illustration of a video frame divided in CAN frames.
The method and the system will now be described with respect to the embodiments. In particular, without being restricted thereto, specific details are set forth to provide a thorough understanding of the present disclosure. However, it is clear to the skilled person that the present disclosure may be used in other embodiments, which may differ from the details set out below.
The method and system described herein can be applied to head lamps that use a high-resolution matrix, such as an LED matrix or a digital mirror array (DMD). These head lamps project a monochrome image (255 intensity levels per pixel). The number of (LED) pixels can be in the range of tens of thousands, and the typical refresh rate of the image is 60 Hz. This results in a quite substantial amount of data that must be transmitted reliably and with low latency between the ECU that generates the image and the ECU that controls the (LED) pixels. Consequently, the head lamp may be considered as a video projector that projects a video stream.
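By way of illustration only (the exact figures depend on the matrix): assuming 40,000 pixels at 8 bits (1 byte) per pixel and a 60 Hz refresh rate, an uncompressed stream amounts to 40,000 × 1 byte × 60/s = 2.4 MB/s, i.e. roughly 19.2 Mbit/s, which is well beyond the usable payload bandwidth of a CAN FD link.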
The compression method described herein may decrease the data rate down to the bandwidth of a Controller Area Network Flexible Data-Rate (CAN FD) interface, UART over CAN, Ethernet or another serial interface. In particular, FlexRay may be used as an automotive network communications protocol for on-board automotive computing. In consequence, the serializer chips, coaxial cables and connectors can be replaced by a cheaper CAN interface.
Several methods are employed (successively) to achieve the desired compression rate: 1. Only differences between consecutive frames are sent. Evaluation of a delta (difference) between two frames is performed in small squares of, for example, 8x8 pixels. These squares also represent atomic objects that are transferred by the algorithm (see figure 1a; a sketch of the per-square evaluation follows this list).
2. During high data rate periods, a graceful degradation mechanism can be applied to maintain low latency (switching from 8 to 7 or 6 bits/pixel, temporarily decreasing the resolution, nonlinear grayscale depth reduction). These mechanisms are adjusted dynamically with the aim of maintaining low latency together with very good image rendering.
3. If the delta between two frames exceeds the available bandwidth, then the delta data (differences) is divided into a few transition frames, always maintaining a preset bus load.
4. The decision about which delta data to include in every transition frame is based on the magnitude of the delta, as shown in figure 1a. This may be colloquially understood as "how different are the new pixels compared with the old ones". A blending coefficient is applied to new data to ensure smooth integration of new pixels.
5. A further operation is data size reduction (see figure 3a). This takes place on a square level. The concept consists of sending one offset and a smaller value for every pixel. This additional value is less than 8 bits. Thus, by applying bit packing, the amount of data is reduced.
This concept is very effective because of the localized nature of data.
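The per-square delta evaluation of step 1 can be sketched as follows; this is a minimal illustration in C, and the function and parameter names, the sum-of-absolute-differences magnitude and the frame layout are assumptions of the sketch rather than the literal implementation:

```c
#include <stdint.h>
#include <stdlib.h>

#define SQ 8  /* square edge length, matching the 8x8 example above */

/* Magnitude of the delta of one 8x8 square between the previous and
 * the current frame; a magnitude of 0 means the square is unchanged
 * and need not be transmitted at all. */
static uint32_t square_delta(const uint8_t *prev, const uint8_t *cur,
                             int stride, int sx, int sy)
{
    uint32_t mag = 0;
    for (int y = 0; y < SQ; y++)
        for (int x = 0; x < SQ; x++) {
            int idx = (sy * SQ + y) * stride + (sx * SQ + x);
            mag += (uint32_t)abs((int)cur[idx] - (int)prev[idx]);
        }
    return mag;
}
```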
To further enhance the compression efficiency, four methods to encode a square are defined (see figure 1b).
Figure 1b illustrates four types A, B, C and D. These types are described as follows.
Type A (0x00): For this type of square only two additional bytes are sent: the offset and the pixel size.
Type B (0x01): This type is similar to type A, except that not all pixels are sent. An 8-byte pixel map keeps track of the positions of the pixels that are sent. If the bit is 1, then the corresponding pixel is present. The encoding of the pixel position is similar to the encoding of the square position within a video frame. The first bit of the first byte corresponds to the top left pixel, then the next bit corresponds to the next pixel to the right. The first byte stores the first top row of pixels.
Type C (0x02): For this type there are two offset values and two pixel sizes. All 64 pixels are sent. An 8-byte offset map is used to keep track of which offset corresponds to which pixel. If the bit is 0, offset 1 is used. The pixel location is encoded in a similar manner to the pixel map from type B.
Type D (0x03): For this type there are 4 offsets and 4 pixel sizes. All pixels are sent. The offset-to-pixel correspondence can be derived from the figure.
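The four header types could be represented, for example, by the following C declarations; the field names and the exact packing are assumptions for illustration (a real wire format would be serialized byte by byte rather than mapped onto C structs), and only the byte counts follow the description above:

```c
#include <stdint.h>

typedef struct {            /* Type A (0x00): two additional bytes    */
    uint8_t offset;         /* common offset for all 64 pixels        */
    uint8_t pixel_bits;     /* bit size of every packed pixel value   */
} square_a_t;

typedef struct {            /* Type B (0x01): like A plus a pixel map */
    uint8_t offset;
    uint8_t pixel_bits;
    uint8_t pixel_map[8];   /* 64 bits: 1 = pixel is present          */
} square_b_t;

typedef struct {            /* Type C (0x02): two offsets             */
    uint8_t offset[2];
    uint8_t pixel_bits[2];
    uint8_t offset_map[8];  /* 64 bits selecting the offset per pixel */
} square_c_t;

typedef struct {            /* Type D (0x03): four offsets            */
    uint8_t offset[4];
    uint8_t pixel_bits[4];  /* offset-to-pixel mapping per figure 1b  */
} square_d_t;
```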
The encoder, herein described as the first control unit, evaluates the best encoding method for every square.
Once the squares that make up the delta between two frames are encoded, they are packed to minimize the amount of memory used by applying bit packing (see figure 3b). For example, if the range of pixel values within a square is 0 to 15, then only 4 bits are enough to store a pixel. In this case bit packing packs two pixels in a byte. The same principle applies for any pixel bit size.
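A minimal bit-packing routine might look as follows; the buffer handling and all names are assumptions of this sketch:

```c
#include <stddef.h>
#include <stdint.h>

/* Append the 'bits' low-order bits of each value to a byte buffer.
 * With a pixel value range of 0..15 (bits = 4) this stores two
 * pixels per byte, as in the example above. Returns bytes used. */
static size_t pack_bits(const uint8_t *vals, size_t n, unsigned bits,
                        uint8_t *out)
{
    size_t bitpos = 0;
    for (size_t i = 0; i < n; i++)
        for (unsigned b = 0; b < bits; b++, bitpos++) {
            if (vals[i] & (1u << b))
                out[bitpos >> 3] |= (uint8_t)(1u << (bitpos & 7));
            else
                out[bitpos >> 3] &= (uint8_t)~(1u << (bitpos & 7));
        }
    return (bitpos + 7) / 8;
}
```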
It is possible that the total size of the delta data (delta squares) exceeds the amount of data that can be sent during a frame duration. In this case, a number of partial frames are built. Each partial frame contains a number of squares that together can be sent in the time frame established by the refresh rate.
This is shown in figure 2 as an example of a packed (partial) video frame. Herein, the "Number of squares" field indicates how many squares are packed in the current transition frame.
The frame position indicates the current frame position in the sequence of transition frames. This can be used to select which fading coefficient is applied to the pixels at decoding. It is possible to replace it with a square-level transition position.
Moreover, the square map is a 400-bit-long bit field, for example. Every bit corresponds to a square. The first bit corresponds to the top left square, and so on. If the bit is 1, the square is included in the current frame.
The "Square A,B,C,D" field is a list of square headers. Any combination of the four header types is indicated thereby ("Number of squares" in total).
The pixel data field contains the packed pixel data and can be parsed using information extracted from the previous fields.
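Taken together, the fields described above suggest a layout along the following lines; this struct is purely illustrative (the sizes assume a 400-square matrix, and an actual implementation would serialize the fields explicitly):

```c
#include <stdint.h>

typedef struct {
    uint16_t num_squares;    /* "Number of squares" in this frame      */
    uint8_t  frame_pos;      /* position within the transition
                                sequence, selects the fading
                                coefficient at decoding                */
    uint8_t  square_map[50]; /* 400-bit map: bit = square is included  */
    /* ...followed by num_squares square headers of type A/B/C/D and
       the bit-packed pixel data parsed from those headers.            */
} partial_frame_hdr_t;
```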
Furthermore, the squares that form a partial frame are not picked randomly. They are added in descending order based on their delta. For this reason, a variant of a counting-based sorting algorithm may be incorporated. This allows efficient sorting without comparisons or data movement.
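One possible counting-based ordering is sketched below; the quantization of the square delta to one byte, the square count and all names are assumptions of this illustration:

```c
#include <stdint.h>

#define NBUCKETS 256   /* assumed quantization of the delta magnitude */
#define NSQUARES 400

/* Produce square indices in descending delta order without any
 * comparisons or data movement: a classic counting sort. */
static void order_by_delta(const uint8_t delta[NSQUARES],
                           uint16_t order[NSQUARES])
{
    uint16_t count[NBUCKETS] = { 0 };
    uint16_t pos[NBUCKETS];
    uint16_t acc = 0;

    for (int i = 0; i < NSQUARES; i++)
        count[delta[i]]++;
    for (int b = NBUCKETS - 1; b >= 0; b--) {  /* highest bucket first */
        pos[b] = acc;
        acc += count[b];
    }
    for (int i = 0; i < NSQUARES; i++)
        order[pos[delta[i]]++] = (uint16_t)i;
}
```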
The reason for grouping the squares in this way is to ensure that the image areas with the highest visual impact are sent first. Since the receiver, herein the second control unit, does not get all the squares at once, it is possible that the parts of the image received in a partial frame do not blend into the surrounding area. For this reason, a certain level of fading is applied to every partial frame. The more partial frames are received, the less fading is applied. When the last partial frame is received, the fading is removed completely.
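Decoder-side fading might be realized as a simple blend; the linear weighting below is an assumption of this sketch, not the prescribed fading curve:

```c
#include <stdint.h>

/* Blend a newly received pixel towards the currently displayed one.
 * The later the partial frame in the sequence (frame_pos is 0-based),
 * the weaker the fading; the last partial frame is applied fully. */
static uint8_t fade_pixel(uint8_t shown, uint8_t received,
                          int frame_pos, int total_frames)
{
    int w = frame_pos + 1;  /* weight of the new pixel: 1..total */
    return (uint8_t)((received * w + shown * (total_frames - w))
                     / total_frames);
}
```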
The concept of partial frames relies on the assumption that the image content is not updated until all partial frames are sent. If this is not the case, and the encoder, namely the first control unit, detects that a new delta must be sent, the pending transmission of partial frames is dropped. The impact on image content is minimized since the remaining partial frames contain squares with low delta.
A partial frame is sent in chunks of 64 bytes in CAN FD messages with consecutive logic IDs. All CAN frames that belong to the same video frame have an incremental logic ID starting from 0. The logic ID is stored in the CAN ID field. Therefore, the CAN ID will be: CAN ID OFFSET + logic ID. The purpose of the logic ID is to allow the reassembling of the video frame at the destination, namely the second control unit, and to check for missing data. If the last CAN frame does not have all the 64 bytes used, then the data length code (DLC) can be adjusted.
The last CAN frame of a video frame must have the ID CAN ID EOF. This is exemplarily shown in figure 4 with the last CAN frame having the fixed ID:FF. This allows the receiver, namely the second control unit, to know that all data was transferred. A complete sequence of video frame transmission can look as follows:
CAN ID = CAN ID OFFSET, Logic CAN ID = 0, DLC=64
CAN ID = CAN ID OFFSET + 1, Logic CAN ID = 1, DLC=64
CAN ID = CAN ID OFFSET + 2, Logic CAN ID = 2, DLC=64
...
CAN ID = CAN ID OFFSET + n-1, Logic CAN ID = n-1, DLC=64
CAN ID = CAN ID EOF, Logic CAN ID = n, DLC=variable
The advantage of using this compression algorithm is that it allows the use of a cheaper type of bus, for example CAN or CAN FD. Also, unlike the most common MPEG-type video codecs, the method described here is tailored to automotive applications since it has a relatively low CPU requirement and does not use a hardware (HW) graphics accelerator, especially on the receiving side at the second control unit. The algorithm can be scalable to different resolutions or refresh rates.
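The chunking into CAN FD messages sketched in the sequence above could look as follows in C; canfd_send() stands in for a hypothetical platform CAN FD driver call, the two ID constants are assumed values, and the DLC handling is simplified to a raw byte count:

```c
#include <stddef.h>
#include <stdint.h>

#define CAN_ID_OFFSET 0x100u  /* assumed base ID for video chunks */
#define CAN_ID_EOF    0x1FFu  /* assumed end-of-frame ID          */

/* Hypothetical driver call -- not a real API. */
extern void canfd_send(uint32_t can_id, const uint8_t *data, uint8_t len);

/* Split one encoded video frame into 64-byte CAN FD messages with
 * incremental logic IDs; the last chunk carries the EOF ID and a
 * length matching the remaining bytes. */
static void send_video_frame(const uint8_t *buf, size_t len)
{
    uint32_t logic_id = 0;
    while (len > 64) {
        canfd_send(CAN_ID_OFFSET + logic_id, buf, 64);
        buf += 64;
        len -= 64;
        logic_id++;
    }
    canfd_send(CAN_ID_EOF, buf, (uint8_t)len);
}
```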
The most time-consuming operations can be moved to a graphical processing unit (GPU), so the total CPU load can be further decreased (a GPU exists anyway because it is needed to create the images). If a higher-speed interface is used, like FlexRay, the algorithm can be tailored to trade CPU load for bus load, for example implementing only delta transfer, without data size reduction. Further, the decoding requirements are much lower compared with encoding. This matches the general architecture where the sender has a more powerful CPU. Thus, the first control unit may be adapted for a higher load than the second control unit. Consequently, power can be saved as well.

Claims (15)

  1. A method for controlling a lamp of a vehicle, comprising: receiving, by a first control unit, video frames destined for a lamp; encoding, by the first control unit of the vehicle, the video frames by computing respective differences between consecutive frames of the video frames; transmitting, by the first control unit, the encoded video frames via a bus of the vehicle to a second control unit of the vehicle; decoding, by the second control unit, the video frames transmitted by the first control unit; and controlling the lamp according to the decoded video frames.
  2. The method according to claim 1, wherein the bus is selected from the group consisting of a controller area network (CAN) bus, UART over CAN, Ethernet and FlexRay.
  3. The method according to claim 1 or 2, wherein the differences are computed by respectively comparing corresponding subdivided parts for each of the consecutive video frames.
  4. The method according to claim 3, wherein the subdivided parts associated with consecutive video frames construct a difference video frame.
  5. The method according to claim 3 or 4, wherein the subdivided parts are symmetric pixel squares in the form of subdivided pixel squares.
  6. The method according to claim 5, wherein each subdivided pixel square of the subdivided pixel squares contains different value information of the respective difference.
  7. The method according to claim 6, wherein the different value information of the respective difference is in a reduced pixel value range comprising only differences which exceed a pixel difference threshold.
  8. The method according to claim 4, wherein, during high data rate periods, a bit size per pixel is reduced in the difference video frame.
  9. The method according to any one of the foregoing claims, wherein bit packing is applied on the encoded video frames.
  10. The method according to any one of the foregoing claims, wherein the encoded video frames are sent as partial frames of a time frame established by a refresh rate.
  11. The method according to claim 10, wherein the partial frames are arranged such that the partial frames with the highest differences are sent first.
  12. The method according to claim 10 or 11, wherein the method further comprises: applying, by the second control unit, fading to every partial frame of the partial frames, in dependence on the number of partial frames.
  13. The method according to claim 12, wherein an amount of fading corresponds to the differences included along the partial frames.
  14. A system for controlling a lamp of a vehicle, comprising: a first control unit and a second control unit; wherein the first control unit is configured to receive video frames destined for the lamp; the first control unit is configured to encode the video frames by computing respective differences between consecutive frames of the video frames, wherein the first control unit is further configured to transmit the encoded video frames via a bus of the vehicle to the second control unit; and the second control unit is configured to decode the video frames transmitted by the first control unit and to control the lamp according to the decoded video frames.
  15. A computer program product comprising program code portions for carrying out a method according to any one of claims 1 to 13, when the computer program product is executed on one or more processing units.
GB2008291.3A 2020-04-06 2020-06-02 A method for low bandwidth automotive video streaming Pending GB2593940A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP20465518 2020-04-06

Publications (2)

Publication Number Publication Date
GB202008291D0 GB202008291D0 (en) 2020-07-15
GB2593940A true GB2593940A (en) 2021-10-13

Family

ID=70289725

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2008291.3A Pending GB2593940A (en) 2020-04-06 2020-06-02 A method for low bandwidth automotive video streaming

Country Status (1)

Country Link
GB (1) GB2593940A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3136730A1 (en) * 2015-08-22 2017-03-01 Audi Ag Control of a light source and vehicle
WO2020053717A1 (en) * 2018-09-10 2020-03-19 Lumileds Holding B.V. Large led array with reduced data management


Also Published As

Publication number Publication date
GB202008291D0 (en) 2020-07-15

Similar Documents

Publication Publication Date Title
EP2634983B1 (en) Data transmission apparatus, data transmission system and data transmission method
US8903000B2 (en) Transmission circuit, reception circuit, transmission method, reception method, communication system and communication method therefor
US10593256B2 (en) LED display device and method for operating the same
US9747872B2 (en) LED display device and method for operating the same
US20150295678A1 (en) Coding apparatus, coding method, data communication apparatus, and data communication method
US6911922B2 (en) Method to overlay a secondary communication channel onto an encoded primary communication channel
KR20210044709A (en) Interfaces for cost effective video communication within advanced vehicle headlamp circuits
US9704430B2 (en) LED display device and method for operating the same
GB2593940A (en) A method for low bandwidth automotive video streaming
CN114285472A (en) UPSOOK modulation method with forward error correction based on mobile phone camera
CN108109577B (en) LED system and method for operating an LED system
CN106448545B (en) LED display screen control system and receiving card and monitoring board thereof
WO2015127105A9 (en) Method and apparatus for aggregating and encoding received symbols including generation of a pointer for a control code
US7516237B2 (en) Scalable device-to-device interconnection
CN114793198B (en) Method for data transmission between two digitally controllable vehicle components
Chang A visible light communication link protection mechanism for smart factory
US20220191072A1 (en) Automobile
JP7419518B2 (en) Method for managing image data and automotive lighting device
US20230188379A1 (en) Method for managing image data, and vehicle lighting system
CN115485169A (en) Method for managing image data and vehicle lighting system
WO2022001166A1 (en) Interface, electronic device, and communication system
WO2021079009A1 (en) Method for managing image data and automotive lighting device
JP2022534075A (en) Select mode signal transfer between serially chained devices
JP2023505599A (en) IMAGE DATA MANAGEMENT METHOD AND AUTOMOTIVE LIGHTING DEVICE
CN117439851A (en) Decoding module for transceiver