WO2018153915A1 - Determining an angular position of a trailer without target - Google Patents


Info

Publication number
WO2018153915A1
Authority
WO
WIPO (PCT)
Prior art keywords
trailer
blocks
determining
angular position
map
Prior art date
Application number
PCT/EP2018/054275
Other languages
French (fr)
Inventor
Swaroop Kaggere Shivamurthy
Michael Starr
Original Assignee
Connaught Electronics Ltd.
Priority date
Filing date
Publication date
Application filed by Connaught Electronics Ltd.
Publication of WO2018153915A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/40 Analysis of texture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60D VEHICLE CONNECTIONS
    • B60D1/00 Traction couplings; Hitches; Draw-gear; Towing devices
    • B60D1/01 Traction couplings or hitches characterised by their type
    • B60D1/06 Ball-and-socket hitches, e.g. constructional details, auxiliary devices, their arrangement on the vehicle
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60D VEHICLE CONNECTIONS
    • B60D1/00 Traction couplings; Hitches; Draw-gear; Towing devices
    • B60D1/24 Traction couplings; Hitches; Draw-gear; Towing devices characterised by arrangements for particular functions
    • B60D1/245 Traction couplings; Hitches; Draw-gear; Towing devices characterised by arrangements for particular functions for facilitating push back or parking of trailers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60D VEHICLE CONNECTIONS
    • B60D1/00 Traction couplings; Hitches; Draw-gear; Towing devices
    • B60D1/58 Auxiliary devices
    • B60D1/62 Auxiliary devices involving supply lines, electric circuits, or the like
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D13/00 Steering specially adapted for trailers
    • B62D13/06 Steering specially adapted for trailers for backing a normally drawn trailer
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20021 Dividing image into blocks, subimages or windows
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30172 Centreline of tubular or elongated structure
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30248 Vehicle exterior or interior
    • G06T2207/30252 Vehicle exterior; Vicinity of vehicle

Definitions

  • An edge detection may be performed on a boundary between labelled blocks and unlabelled blocks, wherein the angular position of the trailer is determined on the basis of one or more detected edges. If, for example, the region with the labelled blocks covers the region of the raw image which represents the towbar, the boundaries or edges of this region correspond to the edges of the towbar. Moreover, a pixel level edge refinement may be performed on the one or more detected edges. Edges obtained on block basis are very coarse; a refinement on pixel basis therefore improves their accuracy tremendously. Consequently, the angular position of the trailer can be determined more accurately when using edges refined on pixel basis.
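At block level, the boundary detection described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: the function name and the row-wise scan over a boolean label map are assumptions, and the pixel-level gradient refinement is only indicated in a comment since it would operate on the raw image.

```python
# Sketch: block-level towbar boundaries from a boolean label map.
# For each block row, the leftmost and rightmost labelled block give the
# coarse left/right towbar boundary; rows without labelled blocks yield None.

def block_level_boundaries(labels):
    """Per block row: (leftmost, rightmost) labelled column, or None."""
    bounds = []
    for row in labels:
        cols = [x for x, v in enumerate(row) if v]
        bounds.append((cols[0], cols[-1]) if cols else None)
        # A pixel-level refinement would now scan the raw image around
        # these block columns for the strongest intensity gradient.
    return bounds
```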
  • a centre of the labelled blocks may be identified for determining the angular position of the trailer.
  • The centre of the labelled blocks may, for example, be determined by using the detected edges. A centre line between the left edge and the right edge of the region with the labelled blocks, i.e. the towbar, is easy to identify if the middle of the left towbar edge boundary and the corresponding right towbar edge boundary is determined for a number of pixel lines towards the towball and a line fitting algorithm is applied.
  • the angle of the centre of the region of the labelled blocks with respect to the towball is measured versus a pregiven normal for determining the angular position of the trailer.
  • This pregiven normal may be the longitudinal axis of the towing vehicle.
  • the difference between this normal and the centre line of the region with the labelled blocks may represent the angle of the towbar or trailer.
  • The pixels of the centre of the region of labelled blocks may be transformed into polar coordinates for determining the angular position of the trailer.
  • The transformation into polar coordinates turns an angle into a distance, which is easier to detect.
  • Thus, a specific angle of the trailer is transformed into a certain horizontal position in a polar coordinate map.
  • the labelled blocks may represent the towbar of the trailer.
  • A towbar is characterized by a plurality of individual components which result in high texture and luminance differences. Therefore, the towbar can be detected easily with the above-described method.
  • Moreover, an evaluation device for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, is provided; it includes a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer.
  • Fig. 1 a software architecture for a targetless trailer assist system
  • Fig. 2 a flowchart of the functionality of determining the angle of the trailer
  • Fig. 3 a raw image divided into blocks
  • Fig. 4 the raw image divided into slices
  • Fig. 5 an example of labelling unusual texture in processed slices
  • Fig. 6 an example of labelling unusual luminance in processed slices
  • Fig. 7 an example of dividing the image into sectors to find a peak sector with an unusual texture and luminance
  • Fig. 8 the image of the trailer with a block level accurate towbar boundary
  • Fig. 9 an actual towbar centre line and a normal to determine the angle of the towbar.
  • the proposed method for determining an angular position of a trailer may be used for a trailer assistant system of a vehicle.
  • the method can be described as targetless, since no target sticker is necessary to be attached to the trailer.
  • the proposed method or trailer assist system is a vision based automated feedback solution.
  • the basic idea behind the solution is to segment the trailer towbar which has potential to provide features which are useful to determine the angle of the trailer with respect to the longitudinal axis of the towing vehicle.
  • the core idea is to use texture differences in an image to determine the position and/or angle of a trailer or component of it in the image.
  • any texture based segmentation can be used for the trailer towbar detection.
  • The proposed method is embedded-friendly and can be ported directly to any hardware without much effort.
  • A corresponding system will provide better control of the vehicle with the trailer, so that the number of accidents due to various blind spots and to instability created by the trailer can be reduced.
  • Fig. 1 shows a software architecture for an embodiment of a targetless trailer assist system.
  • The main block of the system is a trailer feedback module 1.
  • the trailer feedback module 1 receives input data 2, which may be a video frame or a sensor map.
  • The input data are fed to a block cost metric module 3 of the trailer feedback module 1.
  • the block cost metric module 3 extracts a texture feature of each block of the image, frame, map etc.
  • the extracted texture feature for each block is analyzed in an unusual texture analysis module 4.
  • the block cost metric module 3 extracts luminance data for each block of the image, frame, map etc.
  • the respective data are analyzed in an unusual luminance analysis module 5.
  • Unusual texture features and unusual luminance features obtained from the analysis modules 4 and 5 are used in a feature detection module 6 for detecting a unique feature on the towbar.
  • After this step, those blocks of the image which represent the towbar are known.
  • Based on these blocks, a detection of the edges of the towbar is performed in an edge detection module 7.
  • the resulting edges are used in an angle detector 8 to obtain the trailer yaw angle.
  • The output of the angle detector 8 of the trailer feedback module 1 is an output signal 9 which can be used for controlling the brake or the acceleration, or for providing safe steering direction change feedback.
  • Fig. 2 represents a possible flowchart of an exemplary method for determining the angular position of a trailer.
  • The method starts at step S1.
  • The functionality of the trailer feedback module 1 is defined through a set of configuration parameters.
  • the feedback module allocates and initializes necessary resources like memory buffers and data structures.
  • configuration parameters are initialized.
  • In step S4, the incoming video frame 2 is divided into NxM macro blocks (the block size may be 8x8 pixels). Furthermore, sets of macro blocks may be grouped in separate slices. Finally, a block cost metric may be calculated for each block in step S4. This step may involve the extraction of texture or luminance, wherein each pixel of a first map represents the texture of a respective block and each pixel of a second map represents the luminance of this block.
  • In step S5, an unusual texture analysis is performed on each slice. This step involves the extraction of less repeating texture patterns in each slice.
  • Step S6 performs an unusual luminance analysis on each slice. This step involves the extraction of less repeating luminance patterns in each slice.
  • Step S7 performs a texture feature detection of the towbar. This step involves dividing the region of interest (ROI) into sectors around the towball centre and detecting the sector or sectors with the peak unusual texture pattern. Additionally, in step S7 a luminance feature detection is performed on the towbar; similarly, this involves dividing the ROI into sectors around the towball centre and detecting the sector with the peak unusual luminance pattern. In step S8, towbar edge detection takes place. This step involves the detection of one or more edges, specifically to the left and right of the towbar. In the final step S9, detection of the trailer yaw angle is performed, based on the towbar centre, for example. The process, or the trailer feedback module 1, ends at step S10.
  • ROI region of interest
  • Fig. 3 shows an image of a rear camera of a vehicle towing a trailer 10.
  • The trailer has a towbar 11 which couples the trailer 10 to the vehicle (not shown).
  • the image is divided into equal sized blocks.
  • the number of blocks may be configurable (NxM).
  • N and M are set to multiples of 8 for better embedded performance.
  • a single block has the size of 8x8 pixels.
  • The blocks may be divided into equally sized configurable slices, i.e. the slices have essentially the same extension within a specified region.
  • six horizontal slices sl1 to sl6 are defined in the centre region of the image.
  • the blocks are not shown for the sake of clarity.
  • the slices may have any form.
  • the horizontal slices as shown in Fig. 4 are efficient for unusual pattern studies.
  • the slices may be used to mask regions outside a predefined ROI (in Figs. 3 to 9 the ROI is below the semicircle) which will not be processed.
  • each column of blocks of the ROI may be split into a number of slices. There may be a configuration parameter to select the number of actual slices to be processed.
  • a block cost metric may be calculated for each block in order to simplify the image. Ideally, any block cost metric can be used for high level block based texture study. The block cost metric may determine block differences with respect to neighbour blocks. This step will extract a texture value for each block and optionally also a peak repeating luminance value of each block. The resulting two smaller maps (first map and second map) are further analyzed for the trailer towbar detection.
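One possible block cost metric of the kind described above can be sketched as follows, under the assumption that block mean intensities have already been computed. The 4-neighbour difference metric and the mode-based "peak repeating luminance" are illustrative choices, since the text explicitly allows any block cost metric; all names are hypothetical.

```python
# Sketch: a block cost metric based on differences to neighbour blocks,
# plus the peak repeating luminance of a block for the second map.

from collections import Counter

def neighbour_cost_map(block_means):
    """Texture value per block: total absolute difference to 4-neighbours."""
    rows, cols = len(block_means), len(block_means[0])
    cost = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < rows and 0 <= nx < cols:
                    cost[y][x] += abs(block_means[y][x] - block_means[ny][nx])
    return cost

def peak_luminance(block_pixels):
    """Second-map entry: the most frequently occurring intensity in a block."""
    return Counter(block_pixels).most_common(1)[0][0]
```

A block belonging to the towbar, whose appearance differs from the surrounding road surface, then receives a high cost value relative to its neighbours.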
  • An unusual texture analysis (compare step S5) is performed on the first map with the texture values. This analysis mainly involves extracting the less repeating texture patterns in each slice.
  • In a similar way, an unusual luminance analysis may be performed as shown in Fig. 6, extracting the less repeating luminance patterns in each slice.
  • Fig. 6 shows blocks 13 of unusual luminance. Again, only slices sl1 to sl3 are analyzed. The non-repeating blocks 13 mainly correspond with the towbar 11.
  • A unique feature detection is performed on the towbar 11 as shown in Fig. 7.
  • This unique feature detection may involve the following steps:
  • Each sector size is equal to 180° divided by the number of sectors.
  • a towbar edge detection may be performed.
  • This edge detection may involve the detection of a towbar edge boundary 15 to the left of the towball and a towbar edge boundary 16 to the right of the towball as shown in Fig. 8.
  • a pixel level edge refinement may be done on edge blocks later to get an accurate pixel level boundary of the towbar on both sides.
  • a simple gradient check on the block level boundary and the neighbours can be performed.
  • the pixel level towbar boundaries can be used to determine the towbar centre. For example, an average of the left towbar edge boundary 15 and the corresponding right towbar edge boundary 16 may be used for each pixel line.
  • A line fitting algorithm can be used to find the exact centre line of the towbar 11.
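The midpoint averaging and line fitting described in the two preceding points can be sketched as follows. The least-squares fit of x over y and the angle convention (0° when the trailer is straight behind the vehicle) are illustrative assumptions; the function name is hypothetical.

```python
# Sketch: towbar centre line from left/right pixel boundaries.
# Midpoints of the two boundaries are fitted with a least-squares line
# x = a*y + b; the slope a gives the angle against the vertical normal.

import math

def towbar_angle(left_edge, right_edge):
    """left_edge/right_edge: x coordinate per pixel line (same length).
    Returns the angle in degrees between the fitted centre line and the
    vertical normal (0 degrees = trailer straight behind the vehicle)."""
    mids = [(l + r) / 2.0 for l, r in zip(left_edge, right_edge)]
    ys = list(range(len(mids)))
    n = len(ys)
    my, mx = sum(ys) / n, sum(mids) / n
    # Least-squares slope of x over y: horizontal drift per pixel line.
    a = sum((y - my) * (x - mx) for y, x in zip(ys, mids)) / \
        sum((y - my) ** 2 for y in ys)
    return math.degrees(math.atan(a))
```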
  • the trailer yaw angle detection has to be performed, i.e. the angular position of the trailer or towbar, respectively, has to be determined as shown in Fig. 9.
  • the angle detection can be performed by measuring the angle 19 of the trailer centre line 17 with respect to a normal 18.
  • The normal represents an angle of 0° of the trailer with respect to the towing vehicle.
  • Another method for determining the angular position of the towbar can include the conversion of the pixels of the ROI into polar coordinates with respect to the towball position 14. The average of all blocks classified as "non-repeating" can be used to get the actual towbar angle 19.
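The polar-coordinate alternative just described can be sketched as follows, assuming the (x, y) centres of the "non-repeating" blocks and the towball position are known. Averaging the block angles about the towball is the simple estimator assumed here; the coordinate convention (image y pointing down, 0° straight behind the vehicle) is illustrative.

```python
# Sketch: towbar angle from the average angular coordinate of labelled
# blocks about the towball position.

import math

def towbar_angle_polar(block_centres, towball):
    """Average angle of labelled block centres about the towball, in
    degrees, measured against the downward vertical (0 = straight)."""
    angles = []
    for x, y in block_centres:
        dx, dy = x - towball[0], y - towball[1]
        # atan2(dx, dy) is 0 when the block lies straight below the towball.
        angles.append(math.degrees(math.atan2(dx, dy)))
    return sum(angles) / len(angles)
```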
  • The described targetless trailer angle detection does not require a calibration drive, which would need to be done correctly in all iterations. Thus, there is no risk that inaccuracy in any of the calibration parameters leads to grossly inaccurate angle measurements. Further, the proposed solution is highly embedded-friendly and can be easily ported to any processor. Additionally, the inventive method is highly runtime efficient. It needs neither a large amount of memory nor any persistent memory.
  • the approach does not need a physical measurement of the towball position.
  • camera calibration data is not needed to detect the yaw angle of the trailer.
  • the inventive method does not require learning steps to determine the angular position of the trailer. In summary, it is a very efficient method for determining the angular position of the trailer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Image Analysis (AREA)

Abstract

The method for determining an angular position of a trailer shall be improved. For this purpose, a method is proposed containing the step of obtaining a raw image of at least a part of a trailer (10) by means of a rear camera of a towing vehicle. The raw image is divided into blocks. A texture value is determined for each of the blocks. Those blocks (12) whose texture values meet a pregiven repetition criterion are labelled. Finally, the angular position of the trailer is determined on the basis of the labelled blocks (12).

Description

Determining an angular position of a trailer without target
The present invention relates to a targetless method for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, wherein a raw image of at least a part of the trailer is obtained by means of a rear camera of the towing vehicle. Moreover, the present invention relates to an evaluation device for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, by a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer.
In a vehicle/trailer combination, there is the problem that the rear space and the trailer itself are hardly visible to the driver of the vehicle. The rear space around the vehicle/trailer combination is only partially visible via the interior mirror and the two exterior mirrors. This causes difficulties, in particular when reversing or performing other maneuvers. Especially with small or narrow trailers, which are covered by the towing vehicle and barely visible via the exterior mirrors, the driver has hardly any way of recognizing the current angular position of the trailer. Assistance systems for trailer operation also often require the determination of the angular position of the trailer.
Furthermore, other difficult scenarios may be encountered when driving the vehicle with a trailer. Specifically, when driving forward at speed, an assistance function that detects the trailer's path may be helpful, and when driving across a slope, trailer slip detection would be helpful. In such scenarios the trailer yaw angle should be detected and the respective status of the trailer with respect to the vehicle should be indicated to the user constantly.
Therefore, it is the aim of the present invention to obtain the angle of the towed trailer with respect to the longitudinal axis of the towing vehicle, to which the trailer is coupled. As soon as the angle is obtained, numerous functions that require this trailer angle become available to the user. Thus, a trailer parking assistance system can for example be provided when the reverse gear is engaged. In addition, the detected angle can be used for recognizing oscillation of the trailer when the vehicle/trailer combination travels forward at a certain speed. Moreover, the detected angle can be used to recognize slipping of the trailer when the vehicle/trailer combination travels on an inclination. Further, the detected angle can also be used to overlay a trajectory of the trailer on a screen for the user depending on the current angular position, for example when reversing. Numerous other applications of the detected angle are conceivable.
Heretofore, some methods are already available to determine the angle of a trailer with respect to the longitudinal axis of the towing vehicle. Thus, Volkswagen for example provides a camera-based trailer assistant to determine the trailer angle. Moreover, Jaguar Land Rover provides a trailer assistant in which a known target (three black circles on a white background) is attached to the trailer. The corresponding algorithm then detects the known target.
Ford also provides a trailer assistant which detects a known target. For this purpose, a checkerboard pattern is attached to the drawbar of the trailer. The algorithm again detects the known checkerboard pattern.
From the printed matter US 9,085,261 B2, a rearview system with trailer angle detection is known. A rearward-directed camera on the towing vehicle captures images of the area behind the vehicle. If a trailer is coupled to the towing vehicle, the trailer angle is calculated from the captured images by means of a processor. Therein, a known target pattern on the trailer is evaluated in particular.
Moreover, from the printed matter US 2015/0115571 A1, a method for visual assistance by a graphic overlay on an image of a reversing camera is known. By this visual assistance, the driver can for example be assisted in reversing towards a trailer in order to approach the drawbar with the tow coupling as exactly as possible. Therein, a camera model is provided to match the camera image in vehicle coordinates with world coordinates. The method predicts the path of the vehicle corresponding to the current steering angle.
Moreover, the printed matter US 2013/0158863 A1 discloses a prediction of a reversing path of a trailer with the aid of GPS and camera images. Therein, a current position of the vehicle and the trailer is obtained with the aid of a GPS system. The current location of the vehicle/trailer combination and a target position of the vehicle/trailer combination are presented on a screen. For this, the reversing path of the trailer is predicted depending on the steering angle of the vehicle and the angle between vehicle and trailer, and overlaid on the screen.

The object of the present invention is to reliably determine the angle that a trailer occupies with respect to the towing vehicle with as little effort as possible.
According to the invention, this object is solved by a method according to claim 1 as well as an evaluation device according to claim 11. Advantageous further developments of the invention are apparent from the dependent claims.
Accordingly, there is provided a method for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled. Preferably, the angular position is the angle that the trailer occupies relative to the longitudinal axis of the towing vehicle. For this method, a raw image of at least a part of the trailer is obtained by means of a rear camera of the towing vehicle. Thus, a raw image is captured by a camera of the towing vehicle directed rearwardly towards the trailer. It is not necessarily required that the entire trailer is depicted in the raw image. Rather, it is sufficient if, for example, the drawbar (also called towbar) of the trailer, or even only a part of the drawbar, is contained in the raw image.
The raw image is divided into blocks. These blocks preferably have equal size. For example, the raw image is divided into NxM blocks. Each block may have the size of 8x8 pixels. For each of the blocks a texture value is determined. Thus, a block may be reduced to one pixel representing the texture value. This means that the raw image is transformed to a texture value map (first map).
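The block division and texture-map step described above can be sketched as follows. This is a minimal illustration under assumptions not stated in the text: the image is a plain 2D list of grayscale intensities, the texture value is the intensity variance inside a block, and the function name is hypothetical.

```python
# Sketch: reduce a grayscale image to a first map with one texture value
# per 8x8 block. The variance inside a block serves as the texture value;
# any block cost metric could be substituted.

def block_texture_map(image, block=8):
    """Return a map of one texture value (intensity variance) per block."""
    rows, cols = len(image), len(image[0])
    tex_map = []
    for by in range(0, rows - rows % block, block):
        row = []
        for bx in range(0, cols - cols % block, block):
            pixels = [image[y][x]
                      for y in range(by, by + block)
                      for x in range(bx, bx + block)]
            mean = sum(pixels) / len(pixels)
            row.append(sum((p - mean) ** 2 for p in pixels) / len(pixels))
        tex_map.append(row)
    return tex_map
```

A uniform road-surface block yields a value near zero, while a block covering the towbar yields a high value.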
Those blocks of the raw image, or the corresponding pixels of the texture value map, which meet a pregiven repetition criterion are labelled in the first map. For example, if a texture occurs only rarely or not at all elsewhere, it is an unusual texture. Such a rare or unusual texture may represent the towbar of the trailer, since it appears just once in the raw image. After labelling the blocks, i.e. once the first map is generated, the angular position of the trailer may be determined on the basis of the labelled blocks of the first map. For instance, the centre of the region with the labelled blocks, with respect to the position of the towing hook or a respective normal of the vehicle, may represent the angle of the towbar or the trailer.
The method can be performed targetless, meaning that detection of the trailer can be performed without requiring the user to attach a target sticker. In contrast, target-based trailer assistant systems require the user to place an easy-to-detect target with a known design on the trailer, which helps the camera identify the trailer it is meant to track.
However, this means for a targetless method like the present method that there is one less step required to set-up the system at the beginning, and the targetless method also makes maintenance of the system much easier as there is no sticker there to suffer aging, dirt or damage.
Preferably, the blocks of the raw image are grouped into (horizontal) slices, and the labelling of the blocks is performed on the basis of a histogram of texture of each of the slices. The advantage of dividing the image into slices is that a different threshold can be applied to every cropped portion based on a localized histogram study. Under adverse circumstances, a global histogram threshold risks missing some portion of the trailer towbar. Slice-based texture analysis is also capable of handling changes in the surface (e.g. the road can have different types of surfaces).
Another embodiment includes the steps of determining a luminance value for each of the blocks and labelling, in a second map, blocks whose luminance value meets a pregiven repetition criterion, wherein the step of determining the angular position of the trailer is performed on the basis of the labelled blocks of the first map and the second map. In other words, besides the analysis of unusual texture features, an analysis of unusual luminance features is performed. In a similar way to the texture analysis, the luminance analysis is performed on each of the blocks. A low repetition rate of the luminance values may indicate the towbar of the trailer. Based on these luminance values, the second map is generated. The angular position of the towbar or trailer, respectively, can then be obtained from the information of the first map and the second map.
Additionally, the first map and/or the second map may be divided into sectors, and only those sectors which include the most labelled blocks are used for determining the angular position. Such sectorizing may reduce the number of blocks to be processed. Specifically, those one or more sectors can be selected for further processing which include the most labelled blocks, i.e. which show a very low repetition of texture values and/or luminance values, for example.
In a further embodiment, edge detection may be performed on a boundary between labelled blocks and unlabelled blocks, wherein the angular position of the trailer is determined on the basis of one or more detected edges. If, for example, the region of labelled blocks covers the region of the raw image which represents the towbar, the boundaries or edges of this region correspond to the edges of the towbar. Moreover, a pixel level edge refinement may be performed on the one or more detected edges. Edges obtained on a block basis are very coarse; refining them on a pixel basis therefore improves their accuracy tremendously. Consequently, the angular position of the trailer can be determined more accurately when using edges refined on a pixel basis.
In one embodiment, a centre of the labelled blocks may be identified for determining the angular position of the trailer. The centre of the labelled blocks may, for example, be determined by using the detected edges. A centre line between the left edge and the right edge of the region of labelled blocks, i.e. of the towbar, is easy to identify if the middle of the left towbar edge boundary and the corresponding right towbar edge boundary is determined for a couple of pixel rows towards the towball and a line fitting algorithm is applied.
In a further development, the angle of the centre line of the region of labelled blocks with respect to the towball is measured against a pregiven normal for determining the angular position of the trailer. This pregiven normal may be the longitudinal axis of the towing vehicle. The difference between this normal and the centre line of the region of labelled blocks may represent the angle of the towbar or trailer.
Alternatively, the pixels of the centre of the region of labelled blocks may be transformed into polar coordinates for determining the angular position of the trailer. Such a transformation into polar coordinates turns an angle into a distance, which is easier to detect. Thus, a specific angle of the trailer will be transformed into a certain horizontal position in a polar coordinate map.
As already indicated above, the labelled blocks may represent the towbar of the trailer. A towbar is characterized by a plurality of individual components which result in high texture differences and high luminance differences. Therefore, the towbar can be detected easily with the above-described method.
The above described object is also solved by an evaluation device for determining an angular position of a trailer with respect to a towing vehicle, to which the trailer is coupled, including
- a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer, and
- a data processing device for
o dividing the raw image into blocks,
o determining a texture value for each of the blocks,
o labelling blocks the texture value of which meets a pregiven repetition criterion in a first map and
o determining the angular position of the trailer on the basis of the labelled blocks of the first map.
The advantages and variations of the inventive method as described above also apply to the inventive evaluation device. In this case the method features correspond to respective functional features of the device.
Further features of the invention are apparent from the claims, the figures and the description of figures. The features and feature combinations mentioned above in the description as well as the features and feature combinations mentioned below in the description of figures and/or shown in the figures alone are usable not only in the respectively specified combination, but also in other combinations without departing from the scope of the invention. Thus, implementations are also to be considered as encompassed and disclosed by the invention, which are not explicitly shown in the figures and explained, but arise from and can be generated by separated feature combinations from the explained implementations. Implementations and feature combinations are also to be considered as disclosed, which thus do not have all of the features of an originally formulated independent claim. Moreover, implementations and feature combinations are to be considered as disclosed, in particular by the implementations set out above, which extend beyond or deviate from the feature combinations set out in the relations of the claims.
The attached drawings show in:
Fig. 1 a software architecture for a targetless trailer assist system;
Fig. 2 a flowchart of the functionality of determining the angle of the trailer;
Fig. 3 a raw image divided into blocks;
Fig. 4 the raw image divided into slices;
Fig. 5 an example of labelling unusual texture in processed slices;
Fig. 6 an example of labelling unusual luminance in processed slices;
Fig. 7 an example of dividing the image into sectors to find a peak sector with an unusual texture and luminance;
Fig. 8 the image of the trailer with a block level accurate towbar boundary; and
Fig. 9 an actual towbar centre line and a normal to determine the angle of the towbar.
The present invention will now be described in more detail with exemplary embodiments representing preferred examples of the invention.
The proposed method for determining an angular position of a trailer may be used for a trailer assist system of a vehicle. The method can be described as targetless, since no target sticker needs to be attached to the trailer. The proposed method or trailer assist system is a vision-based automated feedback solution. The basic idea behind the solution is to segment the trailer towbar, which has the potential to provide features useful for determining the angle of the trailer with respect to the longitudinal axis of the towing vehicle.
In one embodiment, the core idea is to use texture differences in an image to determine the position and/or angle of a trailer, or of a component of it, in the image. Ideally, any texture based segmentation can be used for the trailer towbar detection. The proposed method is embedded-friendly and can be ported directly to any hardware without effort. A corresponding system will provide better control of the vehicle with the trailer, so that the number of accidents due to various blind spots and instability created by the trailer can be reduced.
Fig. 1 shows a software architecture for an embodiment of a targetless trailer assist system. The main block of the system is a trailer feedback module 1. The trailer feedback module 1 receives input data 2, which may be a video frame or a sensor map. The input data are passed to a block cost metric module 3 of the trailer feedback module 1. The block cost metric module 3 extracts a texture feature for each block of the image, frame, map, etc. The extracted texture feature of each block is analyzed in an unusual texture analysis module 4. In parallel, the block cost metric module 3 extracts luminance data for each block of the image, frame, map, etc. The respective data are analyzed in an unusual luminance analysis module 5.
Unusual texture features and unusual luminance features obtained from the analysis modules 4 and 5 are used in a feature detection module 6 for detecting a unique feature on the towbar. Thus, those blocks of the image which represent the towbar are known. Based on these blocks, a detection of the edges of the towbar is performed in an edge detection module 7. The resulting edges are used in an angle detector 8 to obtain the trailer yaw angle. The output of the angle detector 8 of the trailer feedback module 1 is an output signal 9 which can be used for controlling the brake or the acceleration, or for providing safe steering direction change feedback.
Fig. 2 represents a possible flowchart of an exemplary method for determining the angular position of a trailer. The method starts at step S1. The functionality of the trailer feedback module 1 is defined through a set of configuration parameters. In step S2, the feedback module allocates and initializes necessary resources such as memory buffers and data structures. In step S3, the configuration parameters are initialized.
In step S4, the incoming video frame 2 is divided into NxM macro blocks (the block size may be 8x8 pixels). Furthermore, sets of macro blocks may be grouped into separate slices. Finally, a block cost metric may be calculated for each block in step S4. This step may involve the extraction of texture or luminance, wherein each pixel of a first map is a representation of the texture of a respective block and one pixel of a second map is a representation of the luminance of this block.
In step S5, an unusual texture analysis is performed on each slice. This step involves the extraction of less repeating texture patterns in each slice. The following or parallel step S6 performs an unusual luminance analysis on each slice. This step involves the extraction of less repeating luminance patterns in each slice.
Step S7 performs a texture feature detection of the towbar. This step involves dividing the region of interest (ROI) into sectors around the towball centre and detecting the sector or sectors with the peak unusual texture pattern. Additionally, in step S7 a luminance feature detection is performed on the towbar. Similarly, this step involves dividing the ROI into sectors around the towball centre and detecting the sector with the peak unusual luminance pattern. In step S8, towbar edge detection takes place. This step involves the detection of one or more edges, specifically to the left and right of the towbar. In the final step S9, the trailer yaw angle is detected based, for example, on the towbar centre. The process, or the trailer feedback module 1, ends at step S10.
Fig. 3 shows an image of a rear camera of a vehicle towing a trailer 10. The trailer has a towbar 11 which couples the trailer 10 to the vehicle (not shown). For further processing, the image is divided into equally sized blocks. The number of blocks may be configurable (NxM). Ideally, N and M are set to multiples of 8 for better embedded performance. Here, a single block has a size of 8x8 pixels.
In accordance with Fig. 4, the blocks may be divided into equally sized, configurable slices, i.e. the slices have essentially the same extension within a specified region. In the example of Fig. 4, six horizontal slices sl1 to sl6 are defined in the centre region of the image. The blocks are not shown for the sake of clarity. The slices may have any form; the horizontal slices shown in Fig. 4 are efficient for unusual pattern studies. The slices may be used to mask regions outside a predefined ROI (in Figs. 3 to 9 the ROI is below the semicircle) which will not be processed. Specifically, each column of blocks of the ROI may be split into a number of slices. There may be a configuration parameter to select the number of actual slices to be processed.
A block cost metric may be calculated for each block in order to simplify the image. Ideally, any block cost metric can be used for a high level block based texture study. The block cost metric may determine block differences with respect to neighbouring blocks. This step extracts a texture value for each block and optionally also a peak repeating luminance value of each block. The resulting two smaller maps (first map and second map) are further analyzed for the trailer towbar detection.
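One plausible block cost metric of the kind described here, sketched for illustration only (the method deliberately leaves the exact metric open), is the mean absolute difference of each block's texture value to its four direct neighbours:

```python
import numpy as np

def neighbour_difference_metric(tex_map):
    """Illustrative block cost metric: for each block, the mean absolute
    difference of its value to its 4-connected neighbours. Edge blocks
    are compared against replicated border values."""
    padded = np.pad(tex_map.astype(float), 1, mode="edge")
    centre = padded[1:-1, 1:-1]
    diffs = (np.abs(centre - padded[:-2, 1:-1]) +   # block above
             np.abs(centre - padded[2:, 1:-1]) +    # block below
             np.abs(centre - padded[1:-1, :-2]) +   # block to the left
             np.abs(centre - padded[1:-1, 2:])) / 4.0
    return diffs
```

A block that differs strongly from its surroundings, as the towbar does against the road, receives a high cost, while uniform regions receive zero.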
An unusual texture analysis (compare step S5) is performed on the first map with the texture values. This analysis may mainly involve the following steps:
Determining a histogram of texture for each slice.
Extracting the peak histogram bin index and value.
Applying a threshold defining unusual texture as x % (configurable parameter) of the peak histogram bin value.
Creating a binary map marking histogram bins lying below the threshold as repeating (0) or non-repeating (1). Labelling all blocks 12 with non-repeating patterns as unusual texture.
These blocks will be further analyzed and classified.
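The histogram-threshold labelling steps above can be sketched as follows for a single slice. The bin count and the x % threshold are assumed to be the configurable parameters mentioned in the text:

```python
import numpy as np

def label_unusual(slice_values, bins=16, x_percent=10.0):
    """Label the blocks of one slice whose texture (or luminance) value
    falls into a rarely occupied histogram bin.
    Returns a binary array: 1 = non-repeating ("unusual"), 0 = repeating."""
    values = np.asarray(slice_values, dtype=float)
    hist, edges = np.histogram(values, bins=bins)
    threshold = hist.max() * x_percent / 100.0  # x % of the peak bin count
    rare_bin = hist < threshold                 # bins lying below the threshold
    # Map every block back to its histogram bin and look up its label.
    idx = np.clip(np.digitize(values, edges[1:-1]), 0, bins - 1)
    return rare_bin[idx].astype(np.uint8)
```

With a slice of fifty identical road-texture values and a single towbar outlier, only the outlier's bin lies below the threshold, so only that block is labelled.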
As can be seen in Fig. 5, the unusual texture analysis was limited to slices sl1, sl2 and sl3. Processing the other slices would not be effective for determining the angular position of the towbar.
Similarly, an unusual luminance analysis may be performed as shown in Fig. 6. This analysis may involve the following steps:
Determining a histogram of luminance for each slice.
Extracting the peak histogram bin index and value.
Applying a threshold defining unusual luminance as x % (configurable parameter) of the peak histogram bin value.
Creating a binary map (second map) marking histogram bins lying below a pregiven threshold as repeating (0) or non-repeating (1).
Labelling all blocks with non-repeating patterns as unusual luminance.
These blocks will be further analyzed and classified.
Fig. 6 shows blocks 13 of unusual luminance. Again, only slices sl1 to sl3 are analyzed. The non-repeating blocks 13 mainly correspond with the towbar 11.
As a further optional step, a unique feature detection is performed on the towbar 11 as shown in Fig. 7. This unique feature detection may involve the following steps:
Splitting the ROI into a number of sectors sc1 to sc5 (configurable) with respect to the towball centre 14.
Each sector size is equal to 180° divided by the number of sectors.
Finding the sectors with peak unusual patterns for both texture (first map) and luminance (second map).
Labelling these unique blocks separately.
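A minimal sketch of this sector search, assuming the labelled map is a binary block grid in (row, column) order and the towball centre is given in the same block coordinates:

```python
import numpy as np

def peak_sector(labelled, towball, n_sectors=5):
    """Return the index of the sector (n_sectors spanning the 180 degrees
    above the towball centre) containing the most labelled blocks."""
    rows, cols = np.nonzero(labelled)
    # Polar angle of each labelled block around the towball:
    # 0 deg points right, 90 deg points straight up in the image.
    angles = np.degrees(np.arctan2(towball[0] - rows, cols - towball[1]))
    width = 180.0 / n_sectors
    sectors = np.clip((angles // width).astype(int), 0, n_sectors - 1)
    return int(np.bincount(sectors, minlength=n_sectors).argmax())
```

With five sectors of 36° each, a cluster of labelled blocks straight above the towball (around 90°) falls into the middle sector, index 2.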
As a next step, a towbar edge detection may be performed. This edge detection may involve the detection of a towbar edge boundary 15 to the left of the towball and a towbar edge boundary 16 to the right of the towball, as shown in Fig. 8. Optionally, a pixel level edge refinement may subsequently be done on the edge blocks to get an accurate pixel level boundary of the towbar on both sides. For obtaining a pixel level boundary, a simple gradient check on the block level boundary and its neighbours can be performed. The pixel level towbar boundaries can be used to determine the towbar centre. For example, an average of the left towbar edge boundary 15 and the corresponding right towbar edge boundary 16 may be used for each pixel line. Subsequently, a line fitting algorithm can be used to find the exact centre line of the towbar 11.
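The centre line construction from the two boundaries can be sketched as below; `left_edge` and `right_edge` are assumed to hold one x-coordinate per pixel row of the towbar region, and a least-squares fit stands in for the line fitting algorithm:

```python
import numpy as np

def centre_line(left_edge, right_edge):
    """Average the left and right towbar boundaries per pixel row and fit
    a straight line x = slope * row + intercept through the midpoints."""
    rows = np.arange(len(left_edge), dtype=float)
    mids = (np.asarray(left_edge, dtype=float) +
            np.asarray(right_edge, dtype=float)) / 2.0
    slope, intercept = np.polyfit(rows, mids, 1)  # least-squares line fit
    return slope, intercept
```

For parallel boundaries drifting one pixel per row, the fitted centre line drifts at the same rate through the midpoint of the first row.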
Finally, the trailer yaw angle detection has to be performed, i.e. the angular position of the trailer or towbar, respectively, has to be determined as shown in Fig. 9. The angle detection can be performed by measuring the angle 19 of the trailer centre line 17 with respect to a normal 18. The normal represents an angle of 0° of the trailer with respect to the towing vehicle.
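Measuring the yaw angle against the normal can be sketched as follows. It is assumed here that the normal is the vertical image axis through the towball, that points are (row, col) image coordinates with row 0 at the top, and that positive angles mean the towbar points to the right:

```python
import math

def yaw_angle(centre_point, towball):
    """Angle (degrees) between the towbar centre line, taken from the
    towball to a point on the centre line, and the 0-degree normal."""
    forward = towball[0] - centre_point[0]   # component along the normal
    lateral = centre_point[1] - towball[1]   # component across the normal
    return math.degrees(math.atan2(lateral, forward))
```

A centre point directly above the towball yields 0°, and a point displaced equally upward and to the right yields 45°.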
Another method for determining the angular position of the towbar can include the conversion of the pixels of the ROI into polar coordinates with respect to the towball position 14. The average of all blocks classified as "non-repeating" can then be used to obtain the actual towbar angle 19.
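This polar-coordinate alternative can be sketched as follows, averaging the polar angle of all non-repeating blocks around the towball position (block coordinates in (row, column) order are assumed):

```python
import numpy as np

def polar_towbar_angle(labelled, towball):
    """Average polar angle (degrees) of all blocks classified as
    non-repeating, measured around the towball; 90 degrees corresponds to
    the towbar pointing straight away from the vehicle."""
    rows, cols = np.nonzero(labelled)
    angles = np.degrees(np.arctan2(towball[0] - rows, cols - towball[1]))
    return float(angles.mean())
```

After this transform, the trailer angle corresponds to a horizontal position in the polar map, which is easier to locate than an oriented line in the original image.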
The described targetless trailer angle detection does not require a calibration drive, which would need to be performed correctly in every iteration. Thus, there is no risk that an inaccurate calibration parameter leads to highly inaccurate angle measurements. Further, the proposed solution is highly embedded-friendly and can easily be ported to any processor. Additionally, the inventive method is highly runtime efficient: it needs neither a large amount of memory nor any persistent memory.
Furthermore, the approach does not need a physical measurement of the towball position. Moreover, camera calibration data is not needed to detect the yaw angle of the trailer. Finally, the inventive method does not require learning steps to determine the angular position of the trailer. In summary, it is a very efficient method for determining the angular position of the trailer.

Claims
1. Method for determining an angular position of a trailer (10) with respect to a towing vehicle, to which the trailer (10) is coupled, by
- obtaining a raw image of at least a part of the trailer (10) by means of a rear camera of the towing vehicle,
characterized by
- dividing the raw image into blocks,
- determining a texture value for each of the blocks,
- labelling blocks (12) the texture value of which meets a pregiven repetition criterion in a first map and
- determining the angular position (19) of the trailer on the basis of the labelled blocks (12) of the first map.
2. Method according to claim 1,
characterized by
grouping the blocks of the raw image into horizontal slices (sl1 to sl6) and performing the labelling of the blocks on the basis of a histogram of texture of each of the slices (sl1 to sl6).
3. Method according to claim 1 or 2,
characterized by
determining a luminance value for each of the blocks and
labelling blocks (13) the luminance value of which meets a pregiven repetition criterion in a second map, wherein the step of determining the angular position (19) of the trailer (10) is performed on the basis of the labelled blocks of the first map and the second map.
4. Method according to one of the preceding claims,
characterized by
dividing the first map and/or the second map into sectors (sc1 to sc5) and using only those of the sectors (sc1 to sc5) for determining the angular position (19) which include the most labelled blocks.
5. Method according to one of the preceding claims,
characterized by
performing an edge detection on a boundary of a region of the labelled blocks and a region of unlabeled blocks, wherein the angular position (19) of the trailer (10) is determined on the basis of one or more detected edges (15, 16).
6. Method according to claim 5,
characterized by
performing a pixel level edge refinement on the one or more detected edges (15, 16).
7. Method according to one of the preceding claims,
characterized by
determining a centre of the labelled blocks for determining the angular position (19) of the trailer (10).
8. Method according to claim 7,
characterized by
measuring an angle of the centre (17) with respect to a pregiven normal (18) for determining the angular position (19) of the trailer (10).
9. Method according to claim 7,
characterized by
transforming the labelled blocks (12, 13) into polar coordinates for determining the angular position (19) of the trailer (10).
10. Method according to one of the preceding claims,
characterized in that
the labelled blocks (12, 13) represent the towbar (11) of the trailer (10).
11. Evaluation device for determining an angular position (19) of a trailer (10) with respect to a towing vehicle, to which the trailer (10) is coupled, including
- a rear camera for the towing vehicle for obtaining a raw image of at least a part of the trailer (10),
characterized by
- a data processing device for
o dividing the raw image into blocks,
o determining a texture value for each of the blocks,
o labelling blocks (12) the texture value of which meets a pregiven repetition criterion in a first map and
o determining the angular position (19) of the trailer on the basis of the labelled blocks (12) of the first map.
Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017103540.0 2017-02-21
DE102017103540.0A DE102017103540A1 (en) 2017-02-21 2017-02-21 Determine an angular position of a trailer without a target mark



