WO2022128559A1 - A monitoring arrangement, display enabled device and method of operating a monitoring arrangement - Google Patents


Info

Publication number
WO2022128559A1
WO2022128559A1 (application PCT/EP2021/084340)
Authority
WO
WIPO (PCT)
Prior art keywords
image
observation point
display
view
monitoring
Prior art date
Application number
PCT/EP2021/084340
Other languages
French (fr)
Inventor
Maarten Pennings
Original Assignee
Ams International Ag
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ams International Ag filed Critical Ams International Ag
Publication of WO2022128559A1 publication Critical patent/WO2022128559A1/en

Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B60: VEHICLES IN GENERAL
    • B60R: VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00: Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R2300/00: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used
    • B60R2300/202: Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used, displaying a blind spot scene on the vehicle part responsible for the blind spot

Definitions

  • This disclosure relates to a monitoring arrangement, to a display enabled device and to a method of operating a monitoring arrangement.
  • Obstructions which block parts of a scene from sight are a common problem in everyday life. While in some situations this may simply be inconvenient, in others it may have serious implications.
  • vehicles have blind spots, which are areas around the vehicle that cannot be directly observed by the driver while at the controls.
  • Blind spots, especially in large trucks, are an increasing cause of accidents.
  • a common approach to solving this problem is the use of side view mirrors, which can be adjusted in a particular way so the driver can observe the obstructed area.
  • this approach has limits and may cause other disadvantages, like reduced aerodynamics of the vehicle and additional cost.
  • Other approaches involve a dedicated design to reduce the obstructions in the vehicle. It is an object of the present disclosure to provide a monitoring arrangement, a display enabled device and a method of operating a monitoring arrangement with improved blind spot visibility.
  • the following relates to an improved concept in the field of display technology.
  • the following disclosure suggests a monitoring arrangement where a camera system, a structural member and a display are arranged relative to each other with respect to a first observation point, e.g. that of a user such as the driver of a car.
  • the field of view is obstructed by the structural member, such as an A-pillar in the car, and the driver cannot directly observe a corresponding area around the vehicle (blind spot).
  • the proposed concept suggests a monitoring system which may generate an obstruction corrected image of a field of view to be monitored with respect to at least one observation point and display an unobstructed image on a display. This way the obstructed area becomes directly visible, despite being obstructed by the structural member.
  • the monitoring system can be based on an active or on a passive concept, or a combination thereof.
  • the active concept may involve an angle monitoring system and/or imaging system which allow for active and dynamic monitoring of the observation point, e.g. by means of dedicated sensors.
  • the passive concept may involve an array of micro lenses which are arranged on pixels of the display.
  • the micro lenses can be shaped in such a way as to display the unobstructed image on the display.
  • an unobstructed image may be generated for more than a single observation point, e.g. for other passengers in a car.
  • the micro lenses can be shaped in such a way that, depending on the angle of viewing, a different sub-image of the unobstructed image is displayed on the display.
  • a monitoring arrangement comprises a monitoring system, a camera system and a display.
  • the camera system is operable to capture an image of a field of view to be monitored.
  • the display is attached to a structural member.
  • the camera system captures an image of the field of view to be monitored.
  • the monitoring system generates from the captured image an unobstructed image which is representative of the monitored field of view viewed from the observation point. Furthermore, the monitoring system displays the unobstructed image on the display.
  • the camera system may capture images of the field of view to be monitored as a video stream or footage.
  • the monitoring system is operable to generate from the captured images a video stream or footage of unobstructed images.
  • Each of the images of the video stream is an unobstructed image and, thus, representative of the monitored field of view viewed from the observation point.
  • the proposed monitoring arrangement allows an observer, e.g. the driver of a vehicle, to observe a field of view which is obstructed by a structural member when viewed from the observer's observation point, as if no obstruction were present.
  • the unobstructed image generated by the monitoring system represents the field of view obstructed by the structural member as if no obstruction were present.
  • in this way a blind spot is made visible and safety in a vehicle is improved, for example.
  • a blind spot can be considered an area around a vehicle that cannot be directly observed by the driver while at the controls, under existing circumstances.
  • the structural member can be in the form of an A-pillar, also called a windshield pillar, which obstructs, from the observation point of the driver, the driver's view of the road.
  • Other applications are possible as well, e.g. a building such as an event hall where a line of sight to the stage is blocked by a loudspeaker or pillar, for example.
  • buildings comprising at least one structural member include a hospital or an industrial factory, i.e. locations where there is a need to monitor otherwise obstructed machines or people.
  • the camera system is operable to generate an image of at least one further field of view to be monitored. When viewed from the further observation point, the structural member obstructs the further monitored field of view while the display is visible from said further observation point.
  • in operation, the monitoring system generates from the captured image the unobstructed image which is representative of the monitored field of view when viewed from the observation point and further is representative of the further monitored field of view viewed from the further observation point.
  • a first and a second observation point can be related to the positions of the driver and a passenger of the vehicle.
  • the structural member, for example the A-pillar of the vehicle, may obstruct a field of view from the driver's perspective (first observation point) and another field of view from the passenger's perspective (second observation point).
  • the monitoring system adjusts the unobstructed image such that it shows the field of view from the driver's perspective when viewed from the first observation point and the field of view of the passenger when viewed from the second observation point.
  • an unobstructed view can be provided individually for both the driver and one or more passengers.
  • This concept can be extended to a larger number of observation points.
  • the monitoring system displays a first sub-image of the unobstructed image on the display when viewed from the (first) observation point.
  • the first sub-image is representative of the monitored field of view.
  • the monitoring system displays a second sub-image of the unobstructed image on the display when viewed from the further (second) observation point.
  • the second sub-image is representative of the further monitored field of view.
  • the unobstructed image is separated into the first and second sub-images, and the display selectively displays only parts of the unobstructed image in the direction of the first observation point or in the direction of the second observation point.
  • the unobstructed image may comprise more than two sub-images according to the number of observation points it is supposed to represent, or depending on a desired angle of view around a given observation point, e.g. to allow for a larger movement of an observer.
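The separation into direction-dependent sub-images can be sketched as a column-wise interleave, where each lenticule covers one display column per sub-image. The following Python sketch is illustrative only (the function name and the numpy-based image representation are assumptions, not part of the disclosure):

```python
import numpy as np

def interleave_sub_images(sub_images):
    """Interleave n equally sized sub-images column-wise for a
    lenticular display: display column c shows column c // n of
    sub-image c % n, so each lenticule (covering n neighbouring
    columns) emits one column of every sub-image into a
    different direction."""
    n = len(sub_images)
    h, w = sub_images[0].shape[:2]
    out = np.empty((h, w * n) + sub_images[0].shape[2:],
                   dtype=sub_images[0].dtype)
    for i, img in enumerate(sub_images):
        out[:, i::n] = img  # every n-th column belongs to sub-image i
    return out
```

With two sub-images (driver and passenger) the display alternates their columns; the same routine extends to more observation points by passing more sub-images.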
  • the monitoring system comprises an array of micro lenses.
  • the micro lenses of the array are aligned with pixels of the display.
  • the micro lenses can also be arranged to compensate for possible optical aberrations, for example when the camera system is offset with respect to a line of sight from the observation point to the field of view to be monitored.
  • the micro lenses of the array are arranged so as to form a lenticular lens.
  • the lenticular lens is designed to provide different perspectives of the unobstructed image when viewed from different viewing angles.
  • the display shows the first sub-image or the second sub-image when viewed from different directions.
  • the lenticular lens may be arranged on the array so as to show more than two sub-images when viewed from different directions.
  • the micro lenses are arranged to generate the first sub-image within a first angle of view around the observation point.
  • the micro lenses are arranged to generate the second sub-image within a second angle of view around the further observation point.
  • the angle of view is defined as the range of angles within which the observer can see the entire image.
  • the first angle of view allows observing the unobstructed first sub-image from around the observation point and within the specified range.
  • the observer located at the observation point may move within the limits of the first angle of view and may still see the unobstructed first sub-image.
  • an observer located at the further observation point may observe the second sub-image from within the limits of the second angle of view.
  • the angle of view around the observation points may account for a certain amount of movement while still allowing the unobstructed sub-images to be observed.
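The role of the angle of view can be illustrated with a small helper that decides which sub-image, if any, an observer sees for a given viewing angle. This is a hypothetical sketch (names and parameters are not from the disclosure): each sub-image has a centre direction towards its observation point and remains visible while the observer stays within the angle of view around that centre.

```python
def visible_sub_image(observer_angle_deg, centers_deg, half_angle_deg):
    """Return the index of the sub-image seen from observer_angle_deg,
    or None when the observer lies outside every angle of view.

    centers_deg: centre viewing angle per sub-image (one per
    observation point); half_angle_deg: half the angle of view
    provided by the lenticular lens."""
    for i, center in enumerate(centers_deg):
        if abs(observer_angle_deg - center) <= half_angle_deg:
            return i
    return None
```

For example, with sub-image centres at -20° (driver) and +20° (passenger) and a 10° full angle of view, a driver moving the head between -25° and -15° keeps seeing the first sub-image.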
  • the micro lenses are arranged along pixel rows or pixel columns of the display, respectively.
  • a first group of micro lenses generates the first sub-image and a second group of micro lenses generates the second sub-image.
  • the micro lenses may have any optical shape, e.g. other than a lenticular lens.
  • Dedicated micro lenses can be used to generate the first sub-image and the second sub-image.
  • the display is bendable or flexible.
  • the display is attached to the structural member, e.g. along its surface profile. In this way the display clings to the structural member.
  • Examples of a flexible display include organic LCD or OLED technology.
  • the monitoring system comprises an image processing system.
  • the image processing system is operable to image process the unobstructed image to be displayed on the display depending on a position of at least one observation point.
  • the image processing may provide the unobstructed image such that when viewed from the observation point and optically altered by the micro lenses, e.g. the lenticular lens, the unobstructed image appears as if no obstruction by the structural element were present. This can be considered to be a passive concept.
  • the image processing may also account for different perspectives. Typically, the camera system captures images from a perspective different from the line of sight of the observation point.
  • the image processing generates the unobstructed image without an adaption to the optics arranged on the display.
  • the unobstructed image may still represent the field of view to be monitored but may include optical distortions or offsets.
  • the image processing system may receive additional data which indicate the position of the observation point (or changes thereof) and account for these additional data to adjust the unobstructed image accordingly. This can be supported by additional sensors which are included in the monitoring arrangement. This concept can be considered to be an active concept.
  • the image processing system is operable to correct the unobstructed image for an offset of the camera system with respect to the observation point and/or the further observation point, and the structural member.
  • a camera can be placed at various positions in a vehicle.
  • the camera system can be attached to the roof of a car or at one or more of the car's side mirrors.
  • the camera system then has a different orientation and perspective of the monitored field of view with respect to the observation points and the structural members.
  • the image processing system may account for these offsets and possible distortions in the unobstructed image.
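Under a pinhole-camera model and the simplifying assumption of a scene at a known distance, the offset between the camera and the observer's line of sight translates into a horizontal parallax shift that the image processing can compensate. The sketch below is illustrative only; the disclosure does not prescribe this model, and the names are assumptions:

```python
def parallax_shift_px(camera_offset_m, scene_distance_m, focal_px):
    """Pixel shift needed so that a scene point at scene_distance_m,
    captured by a camera displaced camera_offset_m sideways from the
    observer's line of sight, is rendered where the observer would
    see it. Follows from similar triangles in the pinhole model:
    shift = offset * focal_length / distance."""
    if scene_distance_m <= 0:
        raise ValueError("scene distance must be positive")
    return camera_offset_m * focal_px / scene_distance_m
```

A roof-mounted camera 0.5 m from the driver's eye point, a scene at 10 m and a focal length of 1000 px would call for a shift of 50 px; nearby objects need larger corrections, which is why a depth estimate or additional sensors improve the reconstruction.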
  • the image processing system is operable to image process the unobstructed image to be displayed on the display so as to generate the first and second sub-images.
  • the monitoring system comprises a position monitoring system.
  • the position monitoring system is operable to monitor the position of at least one observation point.
  • the position monitoring system comprises one or more sensors which allow monitoring of the positions of the observation points or changes thereof.
  • the monitoring system comprises a calibration input to input a position of at least one observation point.
  • the calibration input may be arranged to receive calibration data from a manual input interface and/or further sensors which detect position-dependent data.
  • a method of operating a monitoring arrangement uses a monitoring arrangement which comprises a monitoring system, a camera system, a structural member, as well as a display which is attached to the structural member.
  • the method comprises the following steps.
  • when viewed from at least one observation point, the structural member obstructs a field of view to be monitored while the display is visible from said observation point.
  • an image is captured of the field of view to be monitored.
  • an unobstructed image is generated from the captured image to be representative of the monitored field of view when viewed from the observation point.
  • the unobstructed image is displayed on the display.
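The three method steps (capture, generate, display) can be summarised as a minimal control loop. The class below is a structural sketch only; the callables stand in for the camera system, the image processing and the display driver, none of which are specified here.

```python
class MonitoringArrangement:
    """Sketch of the claimed method: capture an image of the obstructed
    field of view, generate an unobstructed image for the observation
    point, and show it on the display attached to the structural member."""

    def __init__(self, capture, generate_unobstructed, show):
        self.capture = capture                              # camera system
        self.generate_unobstructed = generate_unobstructed  # monitoring system
        self.show = show                                    # pillar display

    def step(self, observation_point):
        raw = self.capture()
        unobstructed = self.generate_unobstructed(raw, observation_point)
        self.show(unobstructed)
        return unobstructed
```

Running `step` once per video frame yields the unobstructed video stream described above; an active implementation would additionally update `observation_point` from the position monitoring system each frame.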
  • a monitoring arrangement comprises a position monitoring system, a camera system and a display.
  • the display is attached to a structural member.
  • when viewed from at least one observation point, the structural member obstructs the monitored field of view while the display is visible from said observation point.
  • the camera system captures an image of the field of view to be monitored. From the captured image the position monitoring system generates an unobstructed image which is representative of the monitored field of view viewed from the observation point. Furthermore, the position monitoring system displays the unobstructed image on the display. The position monitoring system actively monitors the monitored field of view and adjusts the unobstructed image accordingly. For example, a driver (representing the observation point) may move while being at the controls of a vehicle. The position monitoring system detects the actual position of the observation point and allows adjusting for changes thereof.
  • the position monitoring system comprises at least one sensor and an image processing system.
  • the at least one sensor generates a monitoring signal which is representative of a position of the observation point.
  • the image processing system image processes the unobstructed image to be displayed on the display depending on the monitoring signal. In this way an active concept is implemented and the unobstructed image is constantly adjusted using the monitoring signal.
  • the sensor comprises at least one of a further camera system, an optical sensor complemented with an illumination source, a light detection and ranging (LIDAR) sensor, a time-of-flight (ToF) sensor, a structured light sensor, an active stereo vision sensor, and/or an ultrasonic sensor.
  • the image processing system corrects the unobstructed image for an offset of the camera system with respect to the observation point and the structural member.
  • the image processing system corrects the unobstructed image for calibration data to be input via a calibration input.
  • the calibration data is indicative of the position of the observation point.
  • the calibration data determines an initial position of the observation point and can be used by the monitoring system to generate the unobstructed image.
  • Calibration data may include information about a driver or passenger, adjustments of a seat, etc.
  • an interface is connected to the calibration input to manually input the calibration data.
  • a calibration sensor is connected to the calibration input to measure and input the calibration data.
  • a driver or passenger may adjust their seat in a vehicle by using directional knobs or a joystick, or the like. These adjustments can be tapped and provided to the monitoring system, as they give an indication as to where an observation point is located.
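Seat adjustments can serve as coarse calibration data for the initial observation point. The mapping below is purely illustrative: the base eye position and the simple additive model are assumptions for the sake of the sketch, not taken from the disclosure.

```python
def observation_point_from_seat(seat_fore_aft_m, seat_height_m,
                                base_eye_point=(1.0, 0.4, 1.1)):
    """Estimate the driver's eye position in a vehicle frame
    (x fore-aft, y lateral, z up, in metres) from seat adjustments,
    relative to an assumed base eye point for the nominal seat
    setting. Both the base point and the additive model are
    hypothetical placeholders."""
    x0, y0, z0 = base_eye_point
    return (x0 + seat_fore_aft_m, y0, z0 + seat_height_m)
```

A real system would refine this initial estimate with the position monitoring sensors listed above.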
  • the image processing system is operable, by means of image processing, to generate the unobstructed image so as to reconstruct the monitored field of view as seen from the observation point without obstruction by the structural member.
  • a display enabled device comprises a host system and a monitoring arrangement according to one or more of the aspects discussed above.
  • the host system comprises a vehicle comprising at least one structural member, in particular an A-pillar.
  • Another example relates to an airplane comprising at least one structural member, in particular a pillar or wall.
  • Yet another example relates to an event hall comprising at least one structural member, in particular a pillar, a wall or a loudspeaker.
  • Further implementations of the method are readily derived from the various implementations and embodiments of the monitoring arrangement and the display enabled device, and vice versa.
  • Figures 1A and 1B show an example embodiment of a monitoring arrangement.
  • Figures 2A to 2C show an example embodiment of a lenticular lens for a monitoring arrangement.
  • Figure 3 shows another example embodiment of a monitoring arrangement.
  • FIG. 1A shows an example embodiment of a monitoring arrangement.
  • the monitoring arrangement comprises a monitoring system, a camera system 200, a structural member 300 and a display 400.
  • the drawing shows a car, which serves as an example platform for the monitoring system.
  • the proposed monitoring arrangement can be used with a variety of systems (denoted display enabled devices hereinafter).
  • the car provides an example framework to which parts of the monitoring arrangement are connected.
  • the camera system 200 comprises one or more cameras which can be used to capture an image or a video stream (of images).
  • the camera system comprises one or more image sensors and optics, such as a camera lens.
  • the camera system is mounted to the roof of the car.
  • the camera system can also be mounted elsewhere, e.g. to the side mirrors, the front or rear bumpers, or to pillars of a car's window area, to name but a few (see Figure 1B).
  • the camera system should be mounted at a location from where a desired region of interest can be monitored.
  • the camera system provides a field of view which is not only determined by its mounting location but also by its optics (such as a camera lens).
  • the camera system can be equipped with lenses to create wide panoramic or hemispherical images, including ultra-wide angle lenses, such as fish-eye lenses, which allow monitoring of a vast area around the car, ranging up to 180°.
  • the camera system, including its optics, may be secured in dedicated modules, with the modules attached or fitted into the car design.
  • the car, or any other system the monitoring arrangement is used with, comprises a number of structural members.
  • Examples include (A, B, C and D) pillars in a car, which are the vertical or near-vertical supports of the car's window area.
  • a structural member may be any structure which causes an obstruction, e.g. a blind spot, when viewed from a certain observation point.
  • a blind spot in a vehicle is an area around the vehicle that cannot be directly observed by the driver while at the controls.
  • an area that, due to obstruction by a structural member, cannot be directly observed from an observation point but can be monitored by the camera system is denoted a "field of view to be monitored" or "monitored field of view”.
  • An observation point can be any point in space, however, for practical purposes the observation point is considered a point in space from where an observer, such as a driver or passenger, views a structural member, which - from this observation point - obstructs the "field of view to be monitored" or "monitored field of view".
  • an observation point within the meaning of this disclosure may not be fixed. Rather an observation point may undergo changes, e.g. as driver or passenger move or change their position. Also different drivers or passengers of different size may sit in the car and, thus, define different observation points.
  • the monitoring system is arranged to account for the various observation points, including changes thereof.
  • the field of view captured with the camera system may not completely coincide with a "field of view to be monitored".
  • the camera system, i.e. one or more cameras, may have an offset with respect to a direct line of sight from the observation point to the structural member.
  • the camera system can be placed to any point from where its field of view comprises at least parts of the field of view to be monitored.
  • the "field of view to be monitored” or “monitored field of view” can be considered a region of interest of the camera system's field of view. This region of interest is determined by the structural member which causes an obstruction when viewed from a certain observation point.
  • the display 400 is attached to a structural member 300.
  • the display can be attached to any structure.
  • the display is attached to a structural member which, when viewed from at least one observation point obstructs the monitored field of view.
  • the display is visible from said observation point.
  • the display need not be in a direct line of sight but must be observable from the observation point.
  • a common rear view camera does not qualify as the proposed monitoring system, as the structural member (i.e. back of the car) obstructs a monitored field of view (i.e. area behind the car) while the display is not visible from said observation point (driver's position).
  • the display is not attached to the structural member but to the instrument panel facing towards the front of the car.
  • the camera system is mounted on top of the roof of a car.
  • the camera system is able to monitor a field of view around the car, indicated in the drawing by a circle.
  • Two observation points OP1, OP2 are defined, one associated with the driver and another one associated with a passenger.
  • the two observation points are spaced apart, e.g. by some 100 cm.
  • the right A-pillar of the car is considered an example to illustrate the general concept.
  • the A-pillar, or generally the "obstructing" structural member, can be considered a pivot point for image reconstruction around the obstruction (i.e., the A-pillar in this example).
  • the A-pillar obstructs a first "field of view to be monitored" FOV1 which is determined by the driver viewing from his position, i.e. a first observation point OP1.
  • This first field of view FOV1 is represented by a set of first lines of sight LS1, LS2 in the drawing. These "lines", i.e. the "driver's line of sight", are determined by the frame of the A-pillar as pivot points.
  • the A-pillar obstructs a second "field of view to be monitored" which is determined by the passenger viewing from his position, i.e. a second observation point OP2.
  • This second field of view FOV2 is represented by a set of second lines of sight LS3, LS4 in the drawing. These "lines", i.e. the "passenger's line of sight", are determined by the frame of the A-pillar as pivot points.
  • the driver cannot directly observe the first field of view FOV1 from the observation point OP1 due to obstruction by the structural member (e.g. the A-pillar).
  • the passenger cannot directly observe the second field of view FOV2 from the observation point OP2 due to obstruction by the structural member (A-pillar).
  • the first and second field of view may be different, as indicated in the drawing.
  • the camera system has a field of view of its own which allows for capturing images which comprise both the first and second fields of view FOV1, FOV2. This is indicated in the drawing by intersections of the "lines" LS1 to LS4 with a circle 210 representing the camera system's field of view.
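The lines of sight LS1 to LS4 follow from simple top-view geometry: each runs from an observation point through an edge of the A-pillar frame acting as pivot point. The sketch below (illustrative coordinates and names, not from the disclosure) computes the angular interval that the pillar hides from a given observation point:

```python
import math

def obstructed_interval_deg(observation_point, pillar_edge_a, pillar_edge_b):
    """Angular interval (degrees, top view) hidden behind a pillar whose
    near edges are at pillar_edge_a and pillar_edge_b, as seen from
    observation_point. All arguments are 2-D (x, y) points; the
    wrap-around case at +/-180 degrees is not handled in this sketch."""
    def angle_to(p):
        return math.degrees(math.atan2(p[1] - observation_point[1],
                                       p[0] - observation_point[0]))
    a, b = angle_to(pillar_edge_a), angle_to(pillar_edge_b)
    return min(a, b), max(a, b)
```

Evaluating this for the driver's point OP1 and the passenger's point OP2 yields two different intervals, which is why FOV1 and FOV2 differ and the camera's own field of view must cover both.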
  • the captured images can be used to generate unobstructed images of a field of view which, when observed from a defined observation point, would otherwise appear obstructed by a structural member.
  • the monitoring system provides the means to generate, from the captured image (or video stream of captured images) , an unobstructed image (unobstructed video stream) which is representative of a monitored field of view (e.g., FOV1 or FOV2 ) viewed from an observation point (e.g., OP1 or OP2) , such as viewed from the driver's or passenger's perspective.
  • the monitoring system displays the unobstructed image on the display.
  • the monitoring system comprises active and/or passive components, which may be implemented alone or be combined. Further details will be discussed with respect to Figures 2A to 2C and Figure 3.
  • the captured images could be displayed on the display as they are and would give some indication of what the field of view to be monitored looks like if unobstructed.
  • the images captured from the camera system are typically taken from a different perspective.
  • the monitoring system can be equipped with an image processing system.
  • the image processing system conducts image processing to yield unobstructed images to be displayed on the display depending on a position of at least one observation point, e.g. corrects for perspective and offset of the camera system with respect to one or more observations points.
  • Figures 2A and 2B show an example embodiment of a lenticular lens for a monitoring arrangement.
  • the display comprises an array 101 of micro lenses 102, which are aligned with pixels of the display.
  • the micro lenses 102 of the array 101 are arranged so as to form a lenticular lens 103.
  • the lenticular lens can be molded in a plastic substrate, for example.
  • Figures 2A and 2B show schematic representations from different viewpoints.
  • the lenticular lens comprises a series of lenticules, e.g. cylindrical lenses.
  • the cylindrical lenses are aligned along pixel rows or pixel columns of the display, respectively.
  • the cylindrical lenses are aligned along several pixel rows or pixel columns 401, 402 of the display.
  • Figure 2A also indicates a first sub-image S1 and a second sub-image S2 which are displayed using different rows or columns of display pixels.
  • the lenticular lens constitutes a passive component of the monitoring system.
  • the lenticular lens is designed so that, when viewed from different angles, different images are shown.
  • Figure 2B shows two observation points OP1 and OP2. Due to the lenticules the first sub-image S1 can be viewed from the first observation point OP1 and the second sub-image S2 can be viewed from the second observation point OP2.
  • the monitoring system in this embodiment generates or prepares the unobstructed image in a way to be displayed using the lenticular lens. This may account for one, two or more observation points.
  • the unobstructed image appears to change or move as the image is viewed from different angles around a given observation point. This is limited by the so-called angle of view of a lenticular lens, which determines the range of angles within which the observer can see the entire unobstructed image.
  • the angle of view is determined by the maximum angle at which a ray can leave the image through the correct lenticule.
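With a thin-lens, air-equivalent approximation, the maximum angle at which a ray can still leave through the correct lenticule is set by half the lenticule pitch over the sheet thickness. The formula below is a first-order sketch: refraction in the lens substrate, which changes the real angle, is ignored, and the parameter names are assumptions.

```python
import math

def lenticular_angle_of_view_deg(pitch_mm, thickness_mm):
    """Approximate full angle of view of a lenticular sheet: a ray
    from the image plane may deviate by at most
    atan((pitch / 2) / thickness) from the lenticule axis before it
    exits through a neighbouring lenticule, so the full angle is
    twice that (substrate refraction neglected)."""
    if pitch_mm <= 0 or thickness_mm <= 0:
        raise ValueError("pitch and thickness must be positive")
    return 2.0 * math.degrees(math.atan(0.5 * pitch_mm / thickness_mm))
```

Under this approximation a finer pitch or a thicker sheet narrows the angle of view, which is the design trade-off between directional separation of the sub-images and tolerance to observer movement.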
  • the lenticular lens is configured to be viewed from the two observation points OP1 and OP2 (or any other point within an angle of view around the observation points).
  • the monitoring system, namely the image processing system, generates corresponding sub-images of the unobstructed image.
  • a first sub-image is representative of the monitored field of view FOV1 and a second sub-image is representative of the second monitored field of view FOV2. Due to the lenticular lens the first sub-image appears when viewed from the first observation point OP1 and the second sub-image appears when viewed from the second observation point OP2.
  • the driver sees an unobstructed image of "his" obstructed field of view FOV1 whereas the passenger sees an unobstructed image of "his" obstructed field of view FOV2.
  • the displayed sub-images can be viewed within the angle of view of the lens.
  • the first sub-image is displayed within a first angle of view around the observation point OP1 and the second sub-image is displayed within a second angle of view around the further observation point OP2.
  • the driver and passenger may move within a range defined by the lenticular lens and yet still see the unobstructed image.
  • the display may not be flat but bendable or flexible to better fit a surface profile of a structural member, such as the A-pillar (see Figure 2C).
  • the design of the lenticular lens may account for this and follow the shape of the bendable display as well.
  • the embodiment is presented here with respect to two observation points. In general, the concept discussed herein applies to any number of observation points.
  • The embodiment of Figures 2A and 2B has been shown with two sub-images S1 and S2.
  • the cylindrical lenses of the lenticular lens are aligned along and cover two neighboring pixel rows or pixel columns 401 , 402 of the display, respectively.
  • some degree of changing or moving the observation points is accounted for by the angle of view of the lenses.
  • the degree of changing the observation points can be further extended by using more than two sub-images and aligning the lenticular lens (or lens elements thereof) along and covering several pixels (or pixel rows or columns) of the display, respectively.
  • each sub-image is generated by the monitoring system for a different angle of view around a given observation point (OP1, for example).
  • a first sub-image S1a appears when viewed from the first observation point OP1.
  • This first sub-image S1a is displayed within a first angle of view around the observation point OP1.
  • a second sub-image S1b appears when viewed from a position left of the first observation point OP1, for example when an observer, such as the driver, moved his head from the first observation point OP1 to the left.
  • This second sub-image S1b is displayed within a second angle of view centered to the left side around the observation point OP1.
  • sub-images S1c to S1e can be generated for the left and right sides (or even top and bottom sides as will be discussed below) of the observation point OP1.
  • Each sub-image is displayed by defined pixels (or rows/columns) of the display.
  • the lenses (or lens elements) are arranged on the respective pixels in order to display the sub-images from the respective positions and angles of view around the observation point. This way a larger movement (e.g. due to movement of the observer) can be accounted for.
  • the lenticular lens, due to its optical properties, provides a smooth transition between the sub-images. This is possible as a lens element covers a group of pixels.
  • a group of pixels comprises a minimum number of pixels which generate the sub-images. For example, in case of five sub-images a lens element may cover a group of five pixels (or rows/columns) showing the respective five sub-images.
  • This concept can be adapted as the application sees fit.
  • if the monitoring system is arranged only for one observer, such as the driver, then the sub-image corresponds to a full image.
  • Each sub-image is generated by the monitoring system for a different angle of view around a given observation point (OP1 and OP2, for example). This way, when the driver and/or passenger move their heads, further angles are covered.
  • the lenticules are typically organized in columns, which means that the five sub-images, discussed as an example above, are seen by moving the head from left to right. This matches well with the verticality of the A-pillar in a car, for example. If it is desired that the displayed unobstructed image changes when the head moves up and down, the lenticules can be arranged in rows, to match a horizontal obstruction. There are not many horizontal obstructions in a car, but this may be beneficial in other applications. In general, the optical design of the lenses, e.g. microlenses, together with a corresponding number of sub-images allows for extended directionality, e.g. left, right, top, down. Figure 3 shows another example embodiment of a monitoring arrangement.
  • a monitoring system equipped with only a passive component, e.g. the lenticular lens arranged on the display, may have limits.
  • Sub-images, with defined pixels per lens element, allow for an extended range of observation and account for a certain amount of movement.
  • the limits set by the passive concept may not suffice.
  • An active solution may involve a different approach than generating sub-images for different perspectives.
  • active and passive components can be combined.
  • the monitoring system comprises a position monitoring system 104 which monitors the position of observation points.
  • the position monitoring system provides a monitoring signal which is representative of a position of the observation point.
  • the image processing system receives the monitoring signal and uses this information for image processing the unobstructed image. For example, the image processing system accounts for position and movement of the observation point, e.g. corrects for image distortions relevant for the given observation point.
  • the position monitoring system allows the image processing system to determine the actual position of the observation point rather than relying on a fixed, predetermined position.
  • the position monitoring system can be considered an active component of the monitoring system.
  • the monitoring system comprises passive components, e.g. lenticular lens, and may be complemented with an active component, such as the position monitoring system.
  • the monitoring system may just as well only comprise an active component. In both cases the monitoring system is configured to generate the unobstructed image, by means of image processing, so as to reconstruct the monitored field of view as seen from the observation point without obstruction by the structural member.
  • the position monitoring system comprises one or more sensors.
  • the sensor can be implemented based on a variety of concepts, including a further camera system 105, an optical sensor complemented with an illumination source, a light detection and ranging, LiDAR, sensor, a time of flight, ToF, sensor, a structured light sensor, an active stereo vision sensor, and/or an ultrasonic sensor.
  • the drawing shows one possible implementation in a car.
  • the position monitoring system is implemented into the rearview mirror 106.
  • the position monitoring system comprises a further camera system 105.
  • the display is arranged on the A-pillar as structural member.
  • the camera system 105 captures images of the driver (or any other suitable monitoring signal) and provides these images to the image processing system.
  • the image processing system determines a position of the driver as observation point, e.g. the eyes of the driver.
  • the unobstructed image is then generated for the determined observation point.
  • the unobstructed image is adjusted for the current perspective and line of sight of the driver.
  • the further camera system may take continuous footage of the driver and constantly provide images to the image processing system.
  • the camera system essentially functions as an eye tracking device.
  • the image processing system receiving the continuous footage of the driver applies an eye tracking procedure to determine the current observation point.
  • An eye tracking procedure measures either the point of gaze or the motion of an eye relative to the head.
  • the camera system 105 can be arranged to only detect a single observation point, e.g. the driver.
  • the display may not necessarily be equipped with a passive component such as the lenticular lens. In fact, the display then only needs to display the unobstructed image without any sub-images.
  • the whole image may represent the first monitored field of view FOV1 viewed from the current observation point OP1.
  • the display may indeed be equipped with a passive component such as the lenticular lens, e.g. to allow for further observation points of one or more passengers.
  • the camera system 105 can be arranged to detect a number of observation points and provide images (or footage) of these to the image processing system. The number is only limited by the optics which provides respective sub-images of the observation points to the one or more passengers.
  • the image processing system may not generate sub-images but only one unobstructed image to be displayed at the display. Thus, there may be no lenticular lens.
  • the image is generated to create an unobstructed view of the field to be monitored (e.g. FOV1 ) .
  • the image processing system accounts for position and movement, e.g. including image distortions, for the observer based on eye position detection and eye tracking, for example.
  • the image processing system may generate two sub-images for each observer, which are adapted based on eye tracking of driver and passenger.
  • a lenticular lens is arranged on the display and covers two pixels (or rows/columns) per lens to display either one of the sub-images.
  • the monitoring system may include additional data, such as calibration data to alter the unobstructed image.
  • calibration data can be received from a seat position sensor, for example.
  • Data can be input via a calibration input and is indicative of at least an initial position of an observation point.
  • an interface is connected to the calibration input to manually input the calibration data.
  • the proposed monitoring arrangement can be used in a large variety of situations and environments. In general, it can be used everywhere where a structure of some sort obstructs a desired field of view.
  • Any device comprising the proposed monitoring arrangement will be denoted a display enabled device.
  • Such a device comprises a host system and a monitoring arrangement.
  • a vehicle may be any machine that transports people or cargo, which may have a blind spot due to one or more of its structural members.
  • Vehicles include wagons, motor vehicles (cars, trucks, and buses), railed vehicles (trains, trams), watercraft (ships, boats), amphibious vehicles (screw-propelled vehicles, hovercraft), aircraft (airplanes, helicopters) and spacecraft.
  • the host system may comprise a building, such as an event hall, hospital or an industrial factory, comprising at least one structural member (300), in particular a pillar, a wall or a loudspeaker.
  • the monitoring arrangement has been described above with respect to a car and two observation points.
  • the corresponding description and Figures have been used for illustration of the proposed concept.
  • the concepts can be applied to any number of observation points and are not restricted to a car but rather apply to any situation where a structural member may block a field of view to be monitored.
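The active, eye-tracking-based concept summarised in the points above can be sketched as a simple processing loop: capture a frame of the monitored field of view, locate the current observation point, re-project the frame for that viewpoint, and show the result. The sketch below is illustrative only; the `reproject` callable and the toy data types are hypothetical placeholders, not part of the disclosure:

```python
# Minimal sketch of the active monitoring loop, assuming a hypothetical
# reprojection function. Frames and positions are toy stand-ins.

def run_monitoring(frames, eye_positions, reproject):
    """For each captured frame, regenerate the unobstructed image for the
    currently detected observation point; if no observer is detected,
    keep showing the last generated image."""
    shown = []
    last = None
    for frame, op in zip(frames, eye_positions):
        if op is None:
            if last is not None:
                shown.append(last)   # no observer detected: keep last image
            continue
        last = reproject(frame, op)  # adapt image to current viewpoint
        shown.append(last)
    return shown

# Toy stand-ins: a "frame" is a string, reprojection just tags it
frames = ["f0", "f1", "f2"]
eyes = [(0.0, 1.2, 0.5), None, (0.1, 1.2, 0.5)]
result = run_monitoring(frames, eyes, lambda f, op: f"{f}@{op}")
```

In a real system the tracker and camera would run asynchronously; the sequential loop is only meant to show the data flow from monitoring signal to displayed image.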

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

A monitoring arrangement comprises a monitoring system and a camera system (200) operable to capture an image of a field of view (FOV1) to be monitored. A display (400) is attached to a structural member (300). When viewed from at least one observation point (OP1), the structural member (300) obstructs the monitored field of view (FOV1) while the display (400) is visible from said observation point (OP1). The monitoring system is operable to generate from the captured image an unobstructed image being representative of the monitored field of view (FOV1) viewed from the observation point (OP1) and to display the unobstructed image on the display (400).

Description

Description
A MONITORING ARRANGEMENT, DISPLAY ENABLED DEVICE AND METHOD OF OPERATING A MONITORING ARRANGEMENT
Field of disclosure
This disclosure relates to a monitoring arrangement, to a display enabled device and to a method of operating a monitoring arrangement.
This patent application claims the priority of German patent application No. 102020133485.0, the disclosure content of which is hereby incorporated by reference.
Background
Obstructions which block parts of a scene from sight are a common problem in everyday life. While in some situations this may simply be inconvenient, in others it may have serious implications. For example, vehicles have blind spots, which are areas around the vehicle that cannot be directly observed by the driver while at the controls. Blind spots, especially in large trucks, are an increasing cause of accidents. A common approach to solving this problem is side view mirrors, which can be adjusted in a particular way so the driver can observe the obstructed area. However, this approach has limits and may cause other disadvantages, like reduced aerodynamics of the vehicle and additional cost. Other approaches involve a dedicated design to reduce the obstructions in the vehicle. It is an object of the present disclosure to provide a monitoring arrangement, a display enabled device and a method of operating a monitoring arrangement with improved blind spot visibility.
These objectives are achieved by the subject matter of the independent claims. Further developments and embodiments are described in the dependent claims.
It is to be understood that any feature described in relation to any one embodiment may be used alone, or in combination with other features described herein, and may be used in combination with one or more features of any other of the embodiments, or any combination of any other of the embodiments unless described as an alternative. Furthermore, equivalents and modifications not described below may also be employed without departing from the scope of the monitoring arrangement, display enabled device and method of operating a monitoring arrangement which are defined in the accompanying claims.
Summary
The following relates to an improved concept in the field of display technology. The following disclosure suggests a monitoring arrangement where a camera system, a structural member and a display are arranged relative to each other with respect to a first observation point. For example, a user, such as the driver of a car, can be located at the observation point, such that, at least when viewed from said observation point, the structural member, such as the A-pillars in a car, obstructs an otherwise unobstructed field of view. In other words, the field of view is obstructed by the structural member, e.g. a pillar in the car, and the driver cannot directly observe a corresponding area around the vehicle (blind spot). The proposed concept suggests a monitoring system which may generate an obstruction corrected image of a field of view to be monitored with respect to at least one observation point and display an unobstructed image on a display. This way the obstructed area becomes directly visible, despite being obstructed by the structural member.
The monitoring system can be based on an active or on a passive concept, or a combination thereof. The active concept may involve an angle monitoring system and/or imaging system which allow for an active and dynamic monitoring of the observation point, e.g. by means of dedicated sensors. The passive concept may involve an array of micro lenses which are arranged on pixels of the display. For example, the micro lenses can be shaped in such a way as to display the unobstructed image on the display. Furthermore, an unobstructed image may be generated for more than a single observation point, e.g. for other passengers in a car. For example, the micro lenses can be shaped in such a way that, depending on the angle of viewing, a different sub-image of the unobstructed image is displayed on the display.
In at least one embodiment a monitoring arrangement comprises a monitoring system, a camera system and a display. The camera system is operable to capture an image of a field of view to be monitored. The display is attached to a structural member.
In operation, the camera system captures an image of the field of view to be monitored. The monitoring system generates from the captured image an unobstructed image which is representative of the monitored field of view viewed from the observation point. Furthermore, the monitoring system displays the unobstructed image on the display.
Typically, the camera system may capture images of the field of view to be monitored as a video stream or footage. Correspondingly, the monitoring system is operable to generate from the captured images a video stream or footage of unobstructed images. Each of the images of the video is an unobstructed image and, thus, representative of the monitored field of view viewed from the observation point.
The proposed monitoring arrangement allows an observer, e.g. the driver of a vehicle, to observe a field of view which is obstructed by a structural member when viewed from the observer's observation point as if no obstruction is present. Basically, the unobstructed image generated by the monitoring system represents the field of view obstructed by the structural member as if no obstruction were present. In this way a blind spot is made visible and security in a vehicle is improved, for example. A blind spot can be considered an area around a vehicle that cannot be directly observed by the driver while at the controls, under existing circumstances.
In a vehicle, the structural member can be in the form of an A-pillar, also called a windshield pillar, which obstructs, from the observation point of the driver, a driver's view of the road. Other applications are possible as well, e.g. a building such as an event hall where a line of sight to the stages is blocked by a loudspeaker or pillar, for example. Other examples of buildings comprising at least one structural member include a hospital or an industrial factory, i.e. locations where there is a need to monitor otherwise obstructed machines or people.
In at least one embodiment the camera system is operable to generate an image of at least one further field of view to be monitored. When viewed from the further observation point, the structural member obstructs the further monitored field of view while the display is visible from said further observation point.
In operation, the monitoring system generates from the captured image the unobstructed image which is representative of the monitored field of view when viewed from the observation point and is further representative of the further monitored field of view viewed from the further observation point.
This way further observation points, related to passengers, for example, can be included in the monitoring process. The proposed monitoring arrangement allows for generating unobstructed images not only from a single observation point but from further observation points. For example, a first and a second observation point can be related to the positions of the driver of the vehicle and a passenger. The structural member, for example the A-pillar of the vehicle, may obstruct a field of view from the driver's perspective (first observation point) and another field of view from the passenger's perspective (second observation point).
The monitoring system adjusts the unobstructed image such that it shows the field of view from the driver's perspective when viewed from the first observation point and the field of view of the passenger when viewed from the second observation point. In this way, an unobstructed view can be provided individually for both the driver and one or more passengers. This concept can be extended to a larger number of observation points.
In at least one embodiment the monitoring system displays a first sub-image of the unobstructed image on the display when viewed from the (first) observation point. The first sub-image is representative of the monitored field of view. In addition, or alternatively, the monitoring system displays a second sub-image of the unobstructed image on the display when viewed from the further (second) observation point. The second sub-image is representative of the further monitored field of view. In other words, the unobstructed image is separated into the first and second sub-image and the display selectively displays only parts of the unobstructed image into the direction of the first observation point or into the direction of the second observation point. The unobstructed image may comprise more than two sub-images according to the number of observation points it is supposed to represent or depending on a desired angle of view around a given observation point, e.g. to allow for a larger movement of an observer.
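The separation of the unobstructed image into direction-dependent sub-images can be illustrated with the generic column-wise lenticular interleave: each lenticule covers N adjacent pixel columns, one column per sub-image. The sketch below is a minimal illustration of that interleave; the exact pixel mapping in a real display depends on lens pitch and alignment, which the disclosure leaves to the optical design:

```python
# Generic lenticular interleave of N sub-images (illustrative sketch).

def interleave(sub_images):
    """sub_images: list of N images of equal size, each a list of rows.
    Returns an image whose columns cycle through the sub-images:
    s0-col0, s1-col0, ..., s0-col1, s1-col1, ..."""
    height = len(sub_images[0])
    width = len(sub_images[0][0])
    out = []
    for y in range(height):
        row = []
        for x in range(width):
            for s in sub_images:   # one pixel per sub-image under a lenticule
                row.append(s[y][x])
        out.append(row)
    return out

# Two 2x2 sub-images, e.g. the driver's view S1 and the passenger's view S2
s1 = [[1, 2], [3, 4]]
s2 = [[5, 6], [7, 8]]
panel = interleave([s1, s2])
# panel == [[1, 5, 2, 6], [3, 7, 4, 8]]
```

With two sub-images the panel needs twice the horizontal resolution of each sub-image, which matches the description of a lens element covering one pixel (or column) per sub-image.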
In at least one embodiment the monitoring system comprises an array of micro lenses. The micro lenses of the array are aligned with pixels of the display. By choosing optical properties of the micro lenses the displayed unobstructed image can be altered, for example so as to direct the image to the observation point, or further observation points. Furthermore, the micro lenses can also be arranged to compensate for possible optical aberrations, for example when the camera system is offset with a line of sight from the observation point to the field of view to be monitored.
In at least one embodiment the micro lenses of the array are arranged so as to form a lenticular lens. The lenticular lens is designed to provide different perspectives of the unobstructed image when viewed from different viewing angles. For example, by way of the lenticular lens the display shows the first sub-image or the second sub-image when viewed from different directions. Furthermore, the lenticular lens may be arranged on the array so as to show more than two sub-images when viewed from different directions.
In at least one embodiment the micro lenses are arranged to generate the first sub-image within a first angle of view around the observation point. In addition, or alternatively, the micro lenses are arranged to generate the second sub-image within a second angle of view around the further observation point.
The angle of view is defined as the range of angles within which the observer can see the entire image. Thus, the first angle of view allows the unobstructed first sub-image to be observed from around the observation point and within the specified range. In this way the observer located at the observation point may move within the limits of the first angle of view and still see the unobstructed first sub-image. Similarly, an observer located at the further observation point may observe the second sub-image from within the limits of the second angle of view. Thus, the angle of view around the observation points may account for a certain amount of movement while still allowing the unobstructed sub-images to be observed.
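As a rough illustration of these angle-of-view limits, a common thin-lens approximation gives a lenticule's full angle of view as 2·atan(p/2f) for lenticule pitch p and focal length f. Both the formula as applied here and the numeric values are illustrative assumptions, not specifications from the disclosure:

```python
import math

def angle_of_view(pitch_mm, focal_mm):
    """First-order full angle of view of a lenticule: the angular range
    over which rays from the pixel group under one lenticule can exit.
    (Common thin-lens approximation; assumed, not from the disclosure.)"""
    return 2.0 * math.atan(pitch_mm / (2.0 * focal_mm))

def observer_inside(observer_offset_rad, half_aov_rad):
    """True if the observer's angular offset from the nominal
    observation point still lies within the angle of view."""
    return abs(observer_offset_rad) <= half_aov_rad

# Illustrative lens: 0.5 mm pitch, 2 mm focal length -> about 14.3 degrees
aov = angle_of_view(pitch_mm=0.5, focal_mm=2.0)
ok = observer_inside(math.radians(5.0), aov / 2.0)  # 5 degrees off-axis: still visible
```

A wider pitch or shorter focal length widens the angle of view, i.e. allows more observer movement, at the cost of angular resolution between sub-images.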
In at least one embodiment the micro lenses are arranged along pixel rows or pixel columns of the display, respectively. A first group of micro lenses generates the first sub-image and a second group of micro lenses generates the second sub-image. The micro lenses may have any optical shape, e.g. other than a lenticular lens. Dedicated micro lenses can be used to generate the first sub-image and the second sub-image.
In at least one embodiment the display is bendable or flexible. For example, the display is attached to the structural member, e.g. along its surface profile. In this way the display clings to the structural member. Examples of a flexible display comprise organic LCD or OLED technology.
In at least one embodiment the monitoring system comprises an image processing system. The image processing system is operable to image process the unobstructed image to be displayed on the display depending on a position of at least one observation point.
For example, the image processing may provide the unobstructed image such that, when viewed from the observation point and optically altered by the micro lenses, e.g. the lenticular lens, the unobstructed image appears as if no obstruction by the structural element were present. This can be considered to be a passive concept. The image processing may also account for different perspectives. Typically, the camera system captures images from a different perspective than the observation point's line of sight.
In another example, the image processing generates the unobstructed image without an adaptation to the optics arranged on the display. In this way the unobstructed image may still represent the field of view to be monitored but may include optical distortions or offsets.
In yet another example, the image processing system may receive additional data which indicates the position of the observation point (or changes thereof) and accounts for these additional data to adjust the unobstructed image accordingly. This can be supported by additional sensors which are included in the monitoring arrangement. This concept can be considered to be an active concept.
In at least one embodiment the image processing system is operable to correct the unobstructed image for an offset of the camera system with respect to the observation point and/or the further observation point, and the structural member. For example, a camera can be placed at various positions in a vehicle. For example, the camera system can be attached to the roof of a car or at one or more of the car's side mirrors. In each case, the camera system has a different orientation and perspective of the monitored field of view with respect to the observation points and the structural members. The image processing system may account for these offsets and possible distortions in the unobstructed image.
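The scale of such an offset correction can be illustrated with the basic pinhole-camera parallax relation: a point at depth Z appears shifted between the camera's and the observer's viewpoints by roughly baseline × focal length / depth. A full correction would warp the entire image depth-dependently; the single-point sketch below only conveys the magnitude of the effect, and all numbers are illustrative assumptions:

```python
def parallax_shift_px(baseline_m, depth_m, focal_px):
    """Approximate horizontal image shift (in pixels) between the
    camera's view and the observer's view of a point at the given depth,
    for a lateral camera-to-observer baseline. Standard pinhole disparity
    relation; a real system would apply a depth-dependent warp."""
    return baseline_m * focal_px / depth_m

# Roof camera 0.4 m to the side of the driver's eyes, focal length 800 px
# (all illustrative values, not from the disclosure)
shift_far = parallax_shift_px(baseline_m=0.4, depth_m=10.0, focal_px=800.0)   # object 10 m away
shift_near = parallax_shift_px(baseline_m=0.4, depth_m=2.0, focal_px=800.0)   # object 2 m away
# nearby objects need a much larger correction than distant ones
```

This is why the mounting position matters: the larger the camera's offset from the observation point, the stronger the perspective correction the image processing system has to apply, especially for nearby objects.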
In at least one embodiment the image processing system is operable to image process the unobstructed image to be displayed on the display so as to generate the first and second sub-images.
In at least one embodiment the monitoring system comprises a position monitoring system. The position monitoring system is operable to monitor the position of at least one observation point. For example, the position monitoring system comprises one or more sensors which allow monitoring of the position of the observation points or changes thereof.
In at least one embodiment the monitoring system comprises a calibration input to input a position of at least one observation point. The calibration input may be arranged to receive calibration data from a manual input interface and/or further sensors which detect position-dependent data.
In at least one embodiment a method of operating a monitoring arrangement uses a monitoring arrangement which comprises a monitoring system, a camera system, a structural member as well as a display which is attached to the structural member. The method comprises the following steps. When viewed from the at least one observation point, the structural member obstructs a field of view to be monitored while the display is visible from said observation point. Then, using the camera system, an image is captured of the field of view to be monitored. Using the monitoring system, an unobstructed image is generated from the captured image to be representative of the monitored field of view when viewed from the observation point. Finally, the unobstructed image is displayed on the display.
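The method steps above (capture, generate, display) can be sketched as a single pass. The helper callables below are hypothetical stand-ins for the camera system, the image processing and the display driver, not interfaces defined by the disclosure:

```python
# One pass of the method: capture -> generate unobstructed image -> display.
# All callables are hypothetical placeholders.

def operate_once(capture, generate_unobstructed, show, observation_point):
    """Capture the monitored field of view, generate the unobstructed
    image for the given observation point, and display it."""
    image = capture()                                        # step: capture FOV
    unobstructed = generate_unobstructed(image, observation_point)
    show(unobstructed)                                       # step: display
    return unobstructed

displayed = []
result = operate_once(
    capture=lambda: "raw-FOV1",
    generate_unobstructed=lambda img, op: (img, op),  # toy stand-in
    show=displayed.append,
    observation_point="OP1",
)
```

Repeating this pass per video frame yields the continuous operation described earlier for the video-stream case.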
In at least one embodiment a monitoring arrangement comprises a position monitoring system, a camera system and a display. The display is attached to a structural member. When viewed from at least one observation point, the structural member obstructs the monitored field of view while the display is visible from said observation point.
In operation, the camera system captures an image of the field of view to be monitored. From the captured image the position monitoring system generates an unobstructed image which is representative of the monitored field of view viewed from the observation point. Furthermore, the position monitoring system displays the unobstructed image on the display. The position monitoring system actively monitors the monitored field of view and adjusts the unobstructed image accordingly. For example, a driver (representing the observation point) may move while being at the controls of a vehicle. The position monitoring system detects the actual position of the observation point and allows adjusting for changes thereof.
In at least one embodiment the position monitoring system comprises at least one sensor and an image processing system. In operation, the at least one sensor generates a monitoring signal which is representative of a position of the observation point. The image processing system image processes the unobstructed image to be displayed on the display depending on the monitoring signal. In this way an active concept is implemented and the unobstructed image is constantly adjusted using the monitoring signal.
In at least one embodiment the sensor comprises at least one of a further camera system, an optical sensor complemented with an illumination source, a light detection and ranging, LiDAR, sensor, a time of flight, ToF, sensor, a structured light sensor, an active stereo vision sensor, and/or an ultrasonic sensor.
In at least one embodiment the image processing system corrects the unobstructed image for an offset of the camera system with respect to the observation point and the structural member.
In at least one embodiment the image processing system corrects the unobstructed image using calibration data input via a calibration input. The calibration data is indicative of the position of the observation point. The calibration data determines an initial position of the observation point and can be used by the monitoring system to generate the unobstructed image. Calibration data may include information about a driver or passenger, adjustments of a seat, etc.
In at least one embodiment an interface is connected to the calibration input to manually input the calibration data. In addition, or alternatively, a calibration sensor is connected to the calibration input to measure and input the calibration data. For example, a driver or passenger may adjust their seat in a vehicle by using directional knobs or a joystick, or the like. These adjustments can be tapped and provided to the monitoring system, as they give an indication as to where an observation point is located.
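As an illustration of how seat-adjustment data tapped from the calibration input might seed an initial observation point, the sketch below estimates an eye position from seat settings. The geometry (torso length, axis conventions, numeric values) is an invented assumption purely for illustration and is not specified by the disclosure:

```python
import math

# Hypothetical mapping from seat settings (calibration data) to an
# initial observation point; all constants are illustrative assumptions.

def initial_observation_point(seat_x_m, seat_height_m, backrest_deg):
    """Estimate the eye position (longitudinal x, vertical z, in metres)
    from seat position, seat height and backrest recline angle."""
    torso_m = 0.65                        # assumed seated torso length
    lean = math.radians(backrest_deg)
    eye_x = seat_x_m - torso_m * math.sin(lean)   # recline moves eyes rearward
    eye_z = seat_height_m + torso_m * math.cos(lean)
    return (round(eye_x, 3), round(eye_z, 3))

op = initial_observation_point(seat_x_m=1.2, seat_height_m=0.55, backrest_deg=15.0)
# op == (1.032, 1.178) for these illustrative settings
```

Such an estimate only fixes the initial observation point; the active position monitoring system described above then refines it while the observer moves.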
In at least one embodiment the image processing system is operable to, by means of image processing, generate the unobstructed image so as to reconstruct the monitored field of view as seen from the observation point without obstruction by the structural member.
In at least one embodiment a display enabled device comprises a host system and a monitoring arrangement according to one or more of the aspects discussed above. The host system comprises a vehicle comprising at least one structural member, in particular an A-pillar. Another example relates to an airplane comprising at least one structural member, in particular a pillar or wall. Yet another example relates to an event hall comprising at least one structural member, in particular a pillar, a wall or a loudspeaker. Further implementations of the method are readily derived from the various implementations and embodiments of the camera system and mobile device, and vice versa.
The following description of figures of example embodiments may further illustrate and explain aspects of the improved concept. Components and parts with the same structure and the same effect, respectively, appear with equivalent reference symbols. Insofar as components and parts correspond to one another in terms of their function in different figures, the description thereof is not necessarily repeated for each of the following figures.
Brief description of the drawings
In the Figures :
Figures 1A and 1B show an example embodiment of a monitoring arrangement,
Figures 2A to 2C show an example embodiment of a lenticular lens for a monitoring arrangement,
Figure 3 shows another example embodiment of a monitoring arrangement.
Detailed description
Figure 1A shows an example embodiment of a monitoring arrangement . The monitoring arrangement comprises a monitoring system, a camera system 200 , a structural member 300 and a display 400 . Furthermore , the drawing shows a car, which serves as an example platform for the monitoring system . As will be discussed in further detail below the proposed monitoring arrangement can be used with a variety of systems ( denoted display enabled devices hereinafter ) . The car provides an example framework to which parts of the monitoring arrangement are connected to .
The camera system 200 comprises one or more cameras which can be used to capture an image or a video stream ( of images ) .
The camera system comprises one or more image sensors and optics, such as a camera lens. In this example, the camera system is mounted to the roof of the car. However, the camera system can also be mounted elsewhere, e.g. to the side mirrors, the front or rear bumpers, or to pillars of a car's window area, to name but a few (see Figure 1B). The camera system should be mounted to a location from where a desired region of interest can be monitored.
The camera system provides a field of view which is not only determined by its mounting location but also by its optics (such as a camera lens). For example, the camera system can be equipped with lenses to create wide panoramic or hemispherical images, including ultra-wide angle lenses, such as fish-eye lenses, which allow monitoring of a vast area around the car, ranging up to 180°. The camera system, including its optics, may be secured in dedicated modules, with the modules attached or fitted into the car design.
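The reach of such an ultra-wide lens can be illustrated with the equidistant fisheye model, in which the image radius grows linearly with the angle from the optical axis (r = f·θ). The sketch below is illustrative only; the function name and the focal length value are assumptions, not taken from this disclosure.

```python
import math

def equidistant_fisheye_radius(theta_deg: float, focal_px: float) -> float:
    """Image radius (in pixels) for a ray at angle theta from the optical
    axis, under the equidistant fisheye model r = f * theta (theta in rad)."""
    return focal_px * math.radians(theta_deg)

# A 180-degree lens maps rays up to 90 degrees off-axis onto the sensor.
f = 300.0  # focal length in pixels (assumed value)
r_edge = equidistant_fisheye_radius(90.0, f)  # radius of the 180-deg circle
r_half = equidistant_fisheye_radius(45.0, f)  # half the off-axis angle
```

Under this model the image radius is strictly proportional to the off-axis angle, which is why a single fish-eye camera on the roof can cover the whole area around the car.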
The car, or any other system the monitoring arrangement is used with, comprises a number of structural members. Examples include the A-, B-, C- and D-pillars in a car, which are the vertical or near-vertical supports of the car's window area. A structural member, as understood in the following, may be any structure which causes an obstruction, e.g. a blind spot, when viewed from a certain observation point. A blind spot in a vehicle is an area around the vehicle that cannot be directly observed by the driver while at the controls. In the terms of this disclosure, an area that, due to obstruction by a structural member, cannot be directly observed from an observation point but can be monitored by the camera system is denoted a "field of view to be monitored" or "monitored field of view". An observation point can be any point in space; however, for practical purposes the observation point is considered a point in space from where an observer, such as a driver or passenger, views a structural member, which - from this observation point - obstructs the "field of view to be monitored" or "monitored field of view". Furthermore, an observation point within the meaning of this disclosure need not be fixed. Rather, an observation point may undergo changes, e.g. as a driver or passenger moves or changes position. Also, drivers or passengers of different sizes may sit in the car and thus define different observation points. As will be discussed in further detail below, the monitoring system is arranged to account for the various observation points, including changes thereof.
The field of view captured with the camera system may not completely coincide with a "field of view to be monitored". The camera system, i.e. one or more cameras, may have an offset with respect to a direct line of sight from the observation point to the structural member. In other words, the camera system can be placed at any point from where its field of view comprises at least parts of the field of view to be monitored. Thus, the "field of view to be monitored" or "monitored field of view" can be considered a region of interest of the camera system's field of view. This region of interest is determined by the structural member which causes an obstruction when viewed from a certain observation point.
The display 400 is attached to a structural member 300. In general, the display can be attached to any structure. However, for practical purposes the display is attached to a structural member which, when viewed from at least one observation point, obstructs the monitored field of view. At the same time the display is visible from said observation point. For example, the display may (but need not) be in direct line of sight, as long as it is observable from the observation point. A common rear view camera does not qualify as a proposed monitoring system, as the structural member (i.e. the back of the car) obstructs a monitored field of view (i.e. the area behind the car) while the display is not visible from said observation point (the driver's position). In that example the display is not attached to the structural member but to the instrument panel facing towards the front of the car.
In this embodiment the camera system is mounted on top of the roof of a car. The camera system is able to monitor a field of view around the car, indicated in the drawing by a circle. Two observation points OP1, OP2 are defined, one associated with the driver and another one associated with a passenger. The two observation points are spaced apart, e.g. by some 100 cm. In the following, the right A-pillar of the car is considered as an example to illustrate the general concept. The A-pillar, or generally the "obstructing" structural member, can be considered a pivot point for image reconstruction around the obstruction (i.e., the A-pillar in this example). The A-pillar obstructs a first "field of view to be monitored" FOV1 which is determined by the driver viewing from his position, i.e. a first observation point OP1. This first field of view FOV1 is represented by a set of first lines of sight LS1, LS2 in the drawing. These "lines", i.e. the driver's lines of sight, are determined by the frame of the A-pillar as pivot points. Similarly, the A-pillar obstructs a second "field of view to be monitored" which is determined by the passenger viewing from his position, i.e. a second observation point OP2. This second field of view FOV2 is represented by a set of second lines of sight LS3, LS4 in the drawing. These "lines", i.e. the passenger's lines of sight, are determined by the frame of the A-pillar as pivot points.
As a result, the driver cannot directly observe the first field of view FOV1 from the observation point OP1 due to obstruction by the structural member (e.g. the A-pillar). The passenger cannot directly observe the second field of view FOV2 from the observation point OP2 due to obstruction by the structural member (A-pillar). In general, the first and second fields of view may be different, as indicated in the drawing. However, the camera system has a field of view of its own which allows for capturing images which comprise both the first and second fields of view FOV1, FOV2. This is indicated in the drawing by intersections of the "lines" LS1 to LS4 with a circle 210 representing the camera system's field of view.
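The obstructed sector can be described with elementary geometry: the two lines of sight through the pillar edges (the pivot points) define, for each observation point, an angular width that the camera image must cover. The following sketch assumes a 2D top-down view with purely hypothetical coordinates; none of the values come from this disclosure.

```python
import math

def obstructed_sector(op, edge_a, edge_b):
    """Angles (radians) of the two lines of sight from observation point
    `op` through the pillar edges, and the angular width between them."""
    a = math.atan2(edge_a[1] - op[1], edge_a[0] - op[0])
    b = math.atan2(edge_b[1] - op[1], edge_b[0] - op[0])
    width = abs(b - a)
    if width > math.pi:  # take the smaller of the two arcs
        width = 2 * math.pi - width
    return a, b, width

# Driver (OP1) and passenger (OP2), spaced apart, see different sectors
# blocked by the same pillar edges (all coordinates are assumptions).
pillar_near, pillar_far = (1.0, 1.0), (1.4, 1.1)
_, _, w1 = obstructed_sector((0.0, 0.0), pillar_near, pillar_far)  # OP1
_, _, w2 = obstructed_sector((1.0, 0.0), pillar_near, pillar_far)  # OP2
```

The observer closer to the pillar sees a wider blocked sector, which is why FOV1 and FOV2 differ and why both must lie inside the camera system's own field of view.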
The captured images can be used to generate unobstructed images of a field of view which, when observed from a defined observation point, appears obstructed by a structural member. The monitoring system provides the means to generate, from the captured image (or video stream of captured images), an unobstructed image (or unobstructed video stream) which is representative of a monitored field of view (e.g., FOV1 or FOV2) viewed from an observation point (e.g., OP1 or OP2), such as viewed from the driver's or passenger's perspective. Furthermore, the monitoring system displays the unobstructed image on the display. The monitoring system comprises active and/or passive components, which may be implemented alone or be combined. Further details will be discussed with respect to Figures 2A to 2C and Figure 3.
In general, the captured images could be displayed on the display as they are and would give some indication of what the field of view to be monitored looks like if unobstructed. However, the images captured by the camera system are typically taken from a different perspective. In order to account for such a different perspective, the monitoring system can be equipped with an image processing system. The image processing system conducts image processing to yield unobstructed images to be displayed on the display depending on a position of at least one observation point, e.g. it corrects for the perspective and offset of the camera system with respect to one or more observation points.
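If the scene behind the pillar is approximated by a plane, the perspective correction between the camera view and the observer's view can be expressed as a 3×3 planar homography. The following is a minimal sketch of applying such a mapping to a single pixel; the matrix values are hypothetical, and estimating H from the actual camera and observation-point geometry is outside the scope of this sketch.

```python
def apply_homography(H, x, y):
    """Map pixel (x, y) through the 3x3 homography H (row-major nested
    lists), dividing by the projective coordinate w."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w

# The identity homography leaves pixels unchanged; a matrix with non-zero
# translation terms shifts the view, e.g. to compensate a fixed offset
# between the camera and an observation point (values are assumptions).
H_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
H_shift = [[1.0, 0.0, 25.0], [0.0, 1.0, -10.0], [0.0, 0.0, 1.0]]
```

In a real system the warp would be applied to every pixel of the captured frame (typically via a GPU or an image library), with H derived from the camera pose, the pillar surface and the current observation point.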
Figures 2A and 2B show an example embodiment of a lenticular lens for a monitoring arrangement. The display comprises an array 101 of micro lenses 102, which are aligned with pixels of the display. The micro lenses 102 of the array 101 are arranged so as to form a lenticular lens 103. The lenticular lens can be molded in a plastic substrate, for example. Figures 2A and 2B show schematic representations from different viewpoints. The lenticular lens comprises a series of lenticules, e.g. cylindrical lenses. The cylindrical lenses are aligned along pixel rows or pixel columns of the display, respectively. In this example, the cylindrical lenses are aligned along several pixel rows or pixel columns 401, 402 of the display. Figure 2A also indicates a first sub-image S1 and a second sub-image S2 which are displayed using different rows or columns of display pixels.
The lenticular lens constitutes a passive component of the monitoring system. The lenticular lens is designed so that, when viewed from different angles, different images are shown. Figure 2B shows two observation points OP1 and OP2. Due to the lenticules, the first sub-image S1 can be viewed from the first observation point OP1 and the second sub-image S2 can be viewed from the second observation point OP2.
The monitoring system in this embodiment generates or prepares the unobstructed image in a way to be displayed using the lenticular lens. This may account for one, two or more observation points. The unobstructed image appears to change or move as the image is viewed from different angles around a given observation point. This is limited by the so-called angle of view of a lenticular lens, which determines the range of angles within which the observer can see the entire unobstructed image. The angle of view is determined by the maximum angle at which a ray can leave the image through the correct lenticule.
In this embodiment the lenticular lens is configured to be viewed from the two observation points OP1 and OP2 (or any other point within an angle of view around the observation points). In order to achieve this, the monitoring system, namely the image processing system, generates corresponding sub-images of the unobstructed image. A first sub-image is representative of the monitored field of view FOV1 and a second sub-image is representative of the second monitored field of view FOV2. Due to the lenticular lens, the first sub-image appears when viewed from the first observation point OP1 and the second sub-image appears when viewed from the second observation point OP2. Thus, the driver sees an unobstructed image of "his" obstructed field of view FOV1 whereas the passenger sees an unobstructed image of "his" obstructed field of view FOV2. Thanks to the optical properties of the lenticular lens, the displayed sub-images can be viewed within the angle of view of the lens. The first sub-image is displayed within a first angle of view around the observation point OP1 and the second sub-image is displayed within a second angle of view around the further observation point OP2. Thus, the driver and passenger may move within a range defined by the lenticular lens and still see the unobstructed image.
This schematic is intended as an example only. The display may not be flat but bendable or flexible to better fit a surface profile of a structural member, such as the A-pillar (see Figure 2C). Thus, the design of the lenticular lens may account for this and follow the shape of the bendable display as well. Furthermore, the embodiment is presented here with respect to two observation points. In general, the concept discussed herein applies to any number of observation points.
The embodiment of Figures 2A and 2B has been shown with two sub-images S1 and S2. The cylindrical lenses of the lenticular lens are aligned along and cover two neighboring pixel rows or pixel columns 401, 402 of the display, respectively. As discussed above, some degree of changing or moving the observation points is accounted for by the angle of view of the lenses. However, the degree of changing the observation points can be further extended by using more than two sub-images and aligning the lenticular lens (or lens elements thereof) along and covering several pixels (or pixel rows or columns) of the display, respectively.
For example, for a passive solution, there may be five or more sub-images. Each sub-image is generated by the monitoring system for a different angle of view around a given observation point (OP1, for example). This way, when the driver moves his head, further angles are covered. For example, a first sub-image S1a appears when viewed from the first observation point OP1. This first sub-image S1a is displayed within a first angle of view around the observation point OP1. A second sub-image S1b appears when viewed from a position left of the first observation point OP1, for example when an observer, such as the driver, has moved his head from the first observation point OP1 to the left. This second sub-image S1b is displayed within a second angle of view centered to the left of the observation point OP1. Further sub-images S1c to S1e can be generated for the left and right sides (or even top and bottom sides, as will be discussed below) of the observation point OP1. Each sub-image is displayed by defined pixels (or rows/columns) of the display. The lenses (or lens elements) are arranged on the respective pixels in order to display the sub-images from the respective positions and angles of view around the observation point. This way a larger movement (e.g. due to movement of the observer) can be accounted for. Furthermore, when changing or moving the observation point, the lenticular lens, due to its optical properties, provides a smooth transition between the sub-images. This is possible as a lens element covers a group of pixels. A group of pixels comprises a minimum number of pixels which generate the sub-images. For example, in the case of five sub-images, a lens element may cover a group of five pixels (or rows/columns) showing the respective five sub-images.
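The mapping of sub-images to display pixels under the lenticules amounts to column interleaving: each lens element covers one column from each sub-image, and the viewing angle selects which of those columns is seen. A toy sketch of this interleaving follows; the function name and data are illustrative, not from the disclosure.

```python
def interleave_columns(sub_images):
    """Interleave equally sized sub-images column by column, so that the
    lenticule covering each group of n adjacent columns shows one column
    from each of the n sub-images (the viewing angle selects which)."""
    n = len(sub_images)
    height = len(sub_images[0])
    width = len(sub_images[0][0])
    out = [[None] * (width * n) for _ in range(height)]
    for c in range(width * n):
        src = sub_images[c % n]       # sub-image chosen by column phase
        for r in range(height):
            out[r][c] = src[r][c // n]
    return out

# Five 1x3 sub-images labelled 'a'..'e' (toy data; real frames are pixels).
subs = [[[ch] * 3] for ch in "abcde"]
frame = interleave_columns(subs)
```

The resulting frame cycles through the five sub-images column by column, matching the five-pixel groups per lens element described above.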
This concept can be adapted as the application sees fit. There should be at least one sub-image per observation point. In case the monitoring system is arranged for only one observer, such as the driver, the sub-image corresponds to a full image. For example, for a passive solution for two observers, there may be ten or more sub-images. Each sub-image is generated by the monitoring system for a different angle of view around a given observation point (OP1 and OP2, for example). This way, when the driver and/or passenger move their heads, further angles are covered.
The lenticules are typically organized in columns, which means that the five sub-images, discussed as an example above, are seen by moving the head from left to right. This matches well with the verticality of the A-pillar in a car, for example. If it is desired that the displayed unobstructed image changes when the head moves up and down, the lenticules can be arranged in rows, to match a horizontal obstruction. There are not many horizontal obstructions in a car, but this may be beneficial in other applications. In general, the optical design of the lenses, e.g. micro lenses, together with a corresponding number of sub-images allows for extended directionality, e.g. left, right, top, down.

Figure 3 shows another example embodiment of a monitoring arrangement. A monitoring system equipped with only a passive component, e.g. the lenticular lens arranged on the display, may have limits. Sub-images and having defined pixels per lens element allow for an extended range of observation and account for a certain amount of movement. However, there may be cases where the limits set by the passive concept do not suffice. An active solution may involve a different approach than generating sub-images for different perspectives. Furthermore, active and passive components can be combined.
The monitoring system comprises a position monitoring system 104 which monitors the position of observation points. The position monitoring system provides a monitoring signal which is representative of a position of the observation point. The image processing system receives the monitoring signal and uses this information when processing the unobstructed image. For example, the image processing system accounts for position and movement of the observation point, e.g. corrects for image distortions relevant for the given observation point. In other words, the position monitoring system allows the image processing system to determine the actual position of the observation point rather than relying on a fixed, predetermined position. Thus, the position monitoring system can be considered an active component of the monitoring system.
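One practical detail of such an active component is that raw sensor readings of the observation point jitter; a simple exponential smoothing of the monitoring signal keeps the displayed image stable as the observer moves. This is a sketch under assumed parameters — neither the filter nor its alpha value is specified in the disclosure.

```python
class ObservationPointFilter:
    """Exponentially smooth a stream of measured eye positions so the
    displayed image does not jitter with sensor noise. alpha in (0, 1]
    trades responsiveness against smoothness (value is an assumption)."""
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, measurement):
        if self.state is None:
            self.state = list(measurement)  # initialize on first sample
        else:
            self.state = [self.alpha * m + (1 - self.alpha) * s
                          for m, s in zip(measurement, self.state)]
        return tuple(self.state)

f = ObservationPointFilter(alpha=0.5)
f.update((0.0, 0.0))                 # first measured eye position
smoothed = f.update((2.0, 4.0))      # next sample, smoothed estimate
```

Each smoothed estimate would then be handed to the image processing system as the current observation point.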
In general, the monitoring system comprises passive components, e.g. the lenticular lens, and may be complemented with an active component, such as the position monitoring system. However, the monitoring system may just as well comprise only an active component. In both cases the monitoring system is configured to generate the unobstructed image, by means of image processing, so as to reconstruct the monitored field of view as seen from the observation point without obstruction by the structural member.
The position monitoring system comprises one or more sensors. The sensor can be implemented based on a variety of concepts, including a further camera system 105, an optical sensor complemented with an illumination source, a light detection and ranging (LiDAR) sensor, a time-of-flight (ToF) sensor, a structured light sensor, an active stereo vision sensor, and/or an ultrasonic sensor.
The drawing shows one possible implementation in a car. For better illustration only the driver is shown while sitting at the controls. The position monitoring system is implemented into the rearview mirror 106. For example, the position monitoring system comprises a further camera system 105. The display is arranged on the A-pillar as structural member. The camera system 105 captures images of the driver (or any other suitable monitoring signal) and provides these images to the image processing system. The image processing system determines a position of the driver as observation point, e.g. the eyes of the driver. The unobstructed image is then generated for the determined observation point. The unobstructed image is adjusted for the current perspective and line of sight of the driver. The further camera system may take continuous footage of the driver and constantly provide images to the image processing system. The camera system essentially functions as an eye tracking device. The image processing system, receiving the continuous footage of the driver, applies an eye tracking procedure to determine the current observation point. Eye tracking is the process of measuring either the point of gaze or the motion of an eye relative to the head. The camera system 105 can be arranged to detect only a single observation point, e.g. the driver. In this case, the display may not necessarily be equipped with a passive component such as the lenticular lens. In fact, the display then only needs to display the unobstructed image without any sub-images. The whole image may represent the first monitored field of view FOV1 viewed from the current observation point OP1. However, the display may indeed be equipped with a passive component such as the lenticular lens, e.g. to allow for further observation points of one or more passengers.
In this case, the camera system 105 can be arranged to detect a number of observation points and provide images (or footage) of these to the image processing system. The number is only limited by the optics which provide the respective sub-images of the observation points to the one or more passengers.
In an active monitoring system with just one observer, e.g. the driver, the image processing system may not generate sub-images but only one unobstructed image to be displayed on the display. Thus, there may be no lenticular lens. The image is generated to create an unobstructed view of the field to be monitored (e.g. FOV1). The image processing system accounts for position and movement, e.g. including image distortions, of the observer based on eye position detection and eye tracking, for example. For an active solution for two observers, e.g. a driver and a passenger, the image processing system may generate two sub-images, one for each observer, which are adapted based on eye tracking of driver and passenger. A lenticular lens is arranged on the display and covers two pixels (or rows/columns) per lens to display either one of the sub-images. These concepts can be extended to multiple observers, including multiple sub-images and a lenticular lens covering multiple pixels and having multiple facets.
Apart from the active and passive components discussed so far, the monitoring system may use additional data, such as calibration data, to alter the unobstructed image. For example, calibration data can be received from a seat position sensor. Data can be input via a calibration input and is indicative of at least an initial position of an observation point. For example, an interface is connected to the calibration input to manually input the calibration data. In other embodiments there may be a dedicated calibration sensor connected to the calibration input to measure and input the calibration data.
The proposed monitoring arrangement can be used in a large variety of situations and environments. In general, it can be used everywhere a structure of some sort obstructs a desired field of view. Any device comprising the proposed monitoring arrangement will be denoted a display enabled device. Such a device comprises a host system and a monitoring arrangement. A vehicle comprising at least one structural member, in particular a pillar, constitutes one possible host system. A vehicle may be any machine that transports people or cargo and that may have a blind spot due to one or more of its structural members. Vehicles include wagons, motor vehicles (cars, trucks, and buses), railed vehicles (trains, trams), watercraft (ships, boats), amphibious vehicles (screw-propelled vehicles, hovercraft), aircraft (airplanes, helicopters) and spacecraft. However, the host system may also comprise a building, such as an event hall, hospital or industrial factory, comprising at least one structural member (300), in particular a pillar, a wall or a loudspeaker.
While this specification contains many specifics, these should not be construed as limitations on the scope of the invention or of what may be claimed, but rather as descriptions of features specific to particular embodiments of the invention. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination.
For example, the monitoring arrangement has been described above with respect to a car and two observation points. The corresponding description and Figures have been used for illustration of the proposed concept. The concepts can be applied to any number of observation points and are not restricted to a car, but rather apply to any situation where a structural member may block a field of view to be monitored.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous.
A number of implementations have been described. Nevertheless, various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other implementations are within the scope of the claims. The following aspects may indicate examples of possible variations or modifications of the embodiments discussed with respect to the figures.
Reference numerals
101 array of micro lenses
102 micro lenses
103 lenticular lens
104 position monitoring system
105 further camera system
106 rearview mirror
200 camera system
210 field of view of camera system
300 structural member
400 display
401 pixel rows or pixel column
402 pixel rows or pixel column
LS1 line of sight
LS2 line of sight
LS3 line of sight
LS4 line of sight
S1 sub-image
S2 sub-image

Claims

1. A monitoring arrangement, comprising:
- a monitoring system,
- a camera system (200) operable to capture an image of a field of view (FOV1) to be monitored, and
- a display (400) attached to a structural member (300);
wherein:
- when viewed from at least one observation point (OP1), the structural member (300) obstructs the monitored field of view (FOV1) while the display (400) is visible from said observation point (OP1), and
- the monitoring system is operable to generate from the captured image an unobstructed image being representative of the monitored field of view (FOV1) viewed from the observation point (OP1) and to display the unobstructed image on the display (400).
2. The monitoring arrangement according to claim 1, wherein:
- the camera system (200) is operable to generate an image of at least one further field of view (FOV2) to be monitored,
- when viewed from the further observation point (OP2), the structural member (300) obstructs the further monitored field of view (FOV2) while the display (400) is visible from said further observation point (OP2), and
- the monitoring system is operable to generate from the captured image the unobstructed image being representative of the monitored field of view (FOV1) when viewed from the observation point (OP1) and being representative of the further monitored field of view (FOV2) viewed from the further observation point (OP2).
3. The monitoring arrangement according to claim 2, wherein the monitoring system is operable to:
- display a first sub-image (S1) of the unobstructed image on the display (400) when viewed from the observation point (OP1), the first sub-image being representative of the monitored field of view (FOV1), and/or
- display a second sub-image (S2) of the unobstructed image on the display (400) when viewed from the further observation point (OP2), the second sub-image being representative of the further monitored field of view (FOV2).
4. The monitoring arrangement according to one of claims 1 to 3, wherein the monitoring system comprises an array (101) of micro lenses (102), wherein the micro lenses (102) of the array (101) are aligned with pixels (401) of the display (400).
5. The monitoring arrangement according to one of claims 1 to 4, wherein the micro lenses (102) of the array (101) are arranged so as to form a lenticular lens (103).
6. The monitoring arrangement according to claim 4 or 5, wherein the micro lenses (102) are arranged to:
- generate the first sub-image (S1) within a first angle of view around the observation point (OP1), and/or
- generate the second sub-image (S2) within a second angle of view around the further observation point (OP2).
7. The monitoring arrangement according to one of claims 4 to 6, wherein the micro lenses are arranged along pixel rows or pixel columns of the display (400), respectively, and a first group of micro lenses is operable to generate the first sub-image (S1) and a second group of micro lenses is operable to generate the second sub-image (S2).
8. The monitoring arrangement according to one of claims 1 to 7, wherein the display (400) is bendable.
9. The monitoring arrangement according to one of claims 1 to 8, wherein the monitoring system comprises an image processing system which is operable to image process the unobstructed image to be displayed on the display (400) depending on a position of at least one observation point (OP1, OP2).
10. The monitoring arrangement according to claim 9, wherein the image processing system is operable to correct the unobstructed image for an offset of the camera system (200) with respect to the observation point (OP1) and/or the further observation point (OP2) and the structural member (300).
11. The monitoring arrangement according to claim 9 or 10, wherein the image processing system is operable to image process the unobstructed image on the display (400) so as to generate the first and second sub-images.
12. The monitoring arrangement according to one of claims 1 to 11, wherein the monitoring system comprises a position monitoring system (104) which is operable to monitor the position of at least one observation point (OP1, OP2) .
13. The monitoring arrangement according to one of claims 1 to 12, wherein the monitoring system comprises a calibration input (105) to input a position of at least one observation point (OP1, OP2).
14. A method of operating a monitoring arrangement, wherein the monitoring arrangement comprises a monitoring system, a camera system (200), a structural member (300) and a display (400) attached to the structural member, wherein, when viewed from at least one observation point (OP1), the structural member (300) obstructs a field of view (FOV1) to be monitored while the display (400) is visible from said observation point (OP1), the method comprising the steps of:
- using the camera system (200), capturing an image of the field of view (FOV1) to be monitored, and
- using the monitoring system, generating from the captured image an unobstructed image representative of the monitored field of view (FOV1) when viewed from the observation point (OP1) and displaying the unobstructed image on the display (400).
15. A monitoring arrangement, comprising:
- a position monitoring system (104),
- a camera system (200) operable to capture an image of a field of view (FOV1) to be monitored, and
- a display (400) attached to a structural member (300);
wherein:
- when viewed from at least one observation point (OP1), the structural member (300) obstructs the monitored field of view (FOV1) while the display (400) is visible from said observation point (OP1), and
- the position monitoring system (104) is operable to generate from the captured image an unobstructed image being representative of the monitored field of view (FOV1) viewed from the observation point (OP1) and to display the unobstructed image on the display (400).
16. The monitoring arrangement according to claim 15, wherein:
- the position monitoring system (104) comprises at least one sensor (106) which is operable to generate a monitoring signal representative of a position of the observation point (OP1); and
- the position monitoring system (104) further comprises an image processing system operable to image process the unobstructed image to be displayed on the display (400) depending on the monitoring signal.
17. The monitoring arrangement according to claim 16, wherein the sensor comprises at least one of: a further camera system (105), an optical sensor complemented with an illumination source, a light detection and ranging (LiDAR) sensor, a time-of-flight (ToF) sensor, a structured light sensor, an active stereo vision sensor, and/or an ultrasonic sensor.
18. The monitoring arrangement according to claim 16 or 17, wherein the image processing system is operable to correct the unobstructed image for an offset of the camera system (200) with respect to the observation point and the structural member (300) .
19. The monitoring arrangement according to one of claims 16 to 18, wherein the image processing system is operable to correct the unobstructed image for calibration data to be input via a calibration input, wherein the calibration data is indicative of the position of the observation point (OP1) .
20. The monitoring arrangement according to claim 19, further comprising : an interface connected to the calibration input to manually input the calibration data, and/or a calibration sensor connected to the calibration input to measure and input the calibration data.
21. The monitoring arrangement according to one of claims 16 to 20, wherein the image processing system is operable to, by means of image processing, generate the unobstructed image so as to reconstruct the monitored field of view (FOV1) as seen from the observation point (OP1) without obstruction by the structural member (300) .
22. The monitoring arrangement according to one of claims 15 to 21, wherein the display (400) is bendable.
23. A display enabled device comprising a host system and a monitoring arrangement according to one of claims 1 to 13 or according to one of claims 15 to 22, wherein the host system comprises: a vehicle comprising at least one structural member (300), in particular, a pillar, an airplane comprising at least one structural member (300), in particular, a pillar or wall, or a building comprising at least one structural member (300), in particular, an event hall, a hospital, an industrial factory comprising a pillar, a wall, and/or a loudspeaker.
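Claims 15 to 21 describe generating an "unobstructed image" by reprojecting what the camera sees onto the display so that, from the tracked observation point, the structural member appears transparent. As a rough illustration only, and not the applicant's method, the sketch below assumes a pinhole camera whose axes are aligned with the world frame and a scene approximated by a plane at known depth; every function name and parameter here is hypothetical.

```python
import numpy as np

def project_to_plane(eye, point, plane_z):
    """Intersect the ray from `eye` through `point` with the plane z = plane_z.
    This finds the scene point the observer's line of sight hits behind the pillar."""
    eye, point = np.asarray(eye, float), np.asarray(point, float)
    direction = point - eye
    t = (plane_z - eye[2]) / direction[2]
    return eye + t * direction

def world_to_pixel(world, cam_pos, focal_px, cx, cy):
    """Pinhole projection of a world point into camera pixel coordinates.
    Simplifying assumption: the camera looks along +z with axes parallel
    to the world frame (no rotation between camera and observer frames)."""
    rel = np.asarray(world, float) - np.asarray(cam_pos, float)
    u = cx + focal_px * rel[0] / rel[2]
    v = cy + focal_px * rel[1] / rel[2]
    return u, v

def display_region(eye, pillar_corners, cam_pos, scene_z, focal_px, cx, cy):
    """For each corner of the display on the structural member, return the
    camera pixel showing the scene content the observer would see there
    if the member were absent."""
    return [world_to_pixel(project_to_plane(eye, corner, scene_z),
                           cam_pos, focal_px, cx, cy)
            for corner in pillar_corners]
```

Recomputing these pixel coordinates each frame from the position reported by the sensor (106), and warping the enclosed camera-image region onto the display (400), would correspond to the viewpoint-dependent processing of claims 16 and 18, including the correction for the offset between camera and observation point.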
PCT/EP2021/084340 2020-12-15 2021-12-06 A monitoring arrangement, display enabled device and method of operating a monitoring arrangement WO2022128559A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102020133485 2020-12-15
DE102020133485.0 2020-12-15

Publications (1)

Publication Number Publication Date
WO2022128559A1 true WO2022128559A1 (en) 2022-06-23

Family

ID=79093041

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2021/084340 WO2022128559A1 (en) 2020-12-15 2021-12-06 A monitoring arrangement, display enabled device and method of operating a monitoring arrangement

Country Status (1)

Country Link
WO (1) WO2022128559A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050168695A1 (en) * 2004-01-16 2005-08-04 Kabushiki Kaisha Honda Lock Vehicular visual assistance system
WO2006027563A1 (en) * 2004-09-06 2006-03-16 Mch Technology Limited View enhancing system for a vehicle
US20120026011A1 (en) * 2010-07-29 2012-02-02 Alps Electric Co., Ltd. Driveer vision support system and vehicle including the system
US20150002642A1 (en) * 2013-07-01 2015-01-01 RWD Consulting, LLC Vehicle visibility improvement system
US20180072156A1 (en) * 2016-09-14 2018-03-15 WITH International, Inc. A-Pillar Video Monitor System
WO2018129310A1 (en) * 2017-01-08 2018-07-12 Shanghai Yanfeng Jinqiao Automotive Trim Systems Co. Ltd System and method for image display on vehicle interior component
US20190217780A1 (en) * 2018-01-17 2019-07-18 Japan Display Inc. Monitor display system and display method of the same

Similar Documents

Publication Publication Date Title
RU147024U1 (en) REAR VIEW SYSTEM FOR VEHICLE
KR102071096B1 (en) Rearview imaging systems for vehicle
US8199975B2 (en) System and method for side vision detection of obstacles for vehicles
US20130096820A1 (en) Virtual display system for a vehicle
CN107027329B (en) Stitching together partial images of the surroundings of a running tool into one image
KR101552444B1 (en) Visual system
CN101474981B (en) Lane change control system
US20130155236A1 (en) Camera-mirror system for motor vehicle
US20100033570A1 (en) Driver observation and security system and method therefor
EP1527606B1 (en) Viewing system
US20150085117A1 (en) Driving assistance apparatus
US20130342658A1 (en) Camera system for a motor vehicle
WO2015013311A1 (en) Vehicle imaging system
US10390001B2 (en) Rear view vision system for a vehicle
WO2006064507A2 (en) Vehicle vision system
KR20160034681A (en) Environment monitoring apparatus and method for vehicle
WO2022128559A1 (en) A monitoring arrangement, display enabled device and method of operating a monitoring arrangement
CN109421599B (en) Method and device for adjusting interior rearview mirror of vehicle
JP2018514168A (en) Display device for automobile
US20230007190A1 (en) Imaging apparatus and imaging system
US20240177492A1 (en) Image processing system, image processing method, and storage medium
KR19980056273A (en) Square guard
JP2019038372A (en) Image display device
JP3231104U (en) Imaging equipment for mobile vehicles compatible with radar equipment
US20230311773A1 (en) Display System, Vehicle and Method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21831260

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21831260

Country of ref document: EP

Kind code of ref document: A1