WO2020173570A1 - Imaging device suitable for use in a motor vehicle - Google Patents

Imaging device suitable for use in a motor vehicle

Info

Publication number
WO2020173570A1
Authority
WO
WIPO (PCT)
Prior art keywords
optical fiber
imaging device
sensor
image
light
Prior art date
Application number
PCT/EP2019/055029
Other languages
French (fr)
Inventor
Jordi VILA-PLANAS
Original Assignee
Ficosa Adas, S.L.U.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ficosa Adas, S.L.U. filed Critical Ficosa Adas, S.L.U.
Priority to PCT/EP2019/055029 priority Critical patent/WO2020173570A1/en
Publication of WO2020173570A1 publication Critical patent/WO2020173570A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12 Mirror assemblies combined with other articles, e.g. clocks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H04N23/55 Optical parts specially adapted for electronic image sensors; Mounting thereof
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12 Mirror assemblies combined with other articles, e.g. clocks
    • B60R2001/1223 Mirror assemblies combined with other articles, e.g. clocks with sensors or transducers
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/12 Mirror assemblies combined with other articles, e.g. clocks
    • B60R2001/1253 Mirror assemblies combined with other articles, e.g. clocks with cameras, video cameras or video screens
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60R VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used
    • B60R2300/102 Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of camera system used using 360 degree surveillance camera system

Definitions

  • Figure 3 shows another example of implementation with more than one fiber, wherein each optical fiber (100, 110) uses the entire image sensor (300) to increase/decrease the size of the light (2100, 2110) received by the image sensor (300).
  • In a first time interval, the image sensor (300) only receives light (2100) from the first optical fiber (100), equivalent to one frame, e.g., frame_0.
  • Then, the image sensor (300) stops receiving light (2100) from the first optical fiber (100) and allows the light input (2110) from the second optical fiber (110).
  • Next, the image sensor (300) stops receiving light (2110) from the second optical fiber (110) and allows the light (2100) input from the first optical fiber (100) again, and so on. That is, the light input (2100, 2110) from the different optical fibers (100, 110) alternates.
  • The image sensor (300) can operate, for example, at 60 frames per second; therefore, if there are two optical fibers (100, 110), each field of view is delivered at 30 frames per second.
  • The enabling/disabling of the light passing can be performed by an optical shutter, i.e., a diaphragm.
  • The shutters can be mechanical or optoelectronic, the latter being faster than the mechanical ones.
  • The light (2100) of the first optical fiber (100) can be decoded according to a first machine learning model, while the light (2110) of the second optical fiber (110) can be decoded according to a second machine learning model (or interferometry).
  • That is, a different machine learning (or interferometry) algorithm is applied for each fiber. For example, if there are four optical fibers, four different machine learning algorithms are needed, one for each fiber.
  • A control unit can synchronize the optical shutter and the image processor, so that the appropriate machine learning model is used for its corresponding fiber (see the control-loop sketch after this list).
  • The first optical fiber (100) is associated with a first machine learning algorithm applied to that fiber, while the second optical fiber (110) is associated with a second machine learning algorithm.
  • A determined first area or portion of the image sensor (300) corresponds to the light (1100) received from the first optical fiber (100), while a second area/portion of the image sensor (300) corresponds to the light (1110) from the second optical fiber (110). Therefore, the image sensor (300) is divided into as many areas/portions as there are optical fibers; each optical fiber is associated with its own portion of the image sensor, and each portion is in turn associated with a different machine learning (or interferometry) model, so that each fiber has its own machine learning (or interferometry) model.
  • A control unit is needed to alternate the machine learning models when there are two or more optical fibers: the control unit not only performs the enabling/disabling through the optical shutter, but also applies one or another machine learning (or interferometry) model according to the optical fiber that is activated. That is, the control unit activates an optical fiber through the shutter and applies the machine learning (or interferometry) model associated with that optical fiber.
  • Said control unit can be independent or be implemented in the image processor (400), or in the "Master ECU" (600).
  • The imaging device comprises a first multi-mode fiber (100) with a first sensor and a second multi-mode optical fiber (110), which in turn may be connected to: the first sensor of the first multi-mode fiber, or to a stacked device composed of another sensor and an image processor, or to a second sensor, both (aforementioned first and second) sensors being connected to the same image processor (400).
  • The (first and/or second) image sensor (300) can be located in the same housing as the image processor (400), e.g., near the motor of the vehicle (10), or at a location separated from that of the image processor (400) to which said image sensor (300) is connected.
  • A plurality of (e.g., 16) multi-mode fibers are used to acquire the light (not an image) from the exterior (201) of the vehicle (10).
  • In this case, there are several optical fibers in the vehicle (10), connected to the same image sensor (e.g., located behind the dashboard, that is, near the motor of the vehicle (10)); the image sensor (300) is connected, in turn, to an image processor (400), e.g., an ISP, and all the optical fibers focus on points (3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008) to obtain a FOV of 360°, as shown in Figure 4.
  • the whole UltraViolet/Visible/InfraRed range of light can be covered just by changing the optical fiber and/or the sensor used at the second end of the fiber.
  • one single fiber and only one sensor detect these wavelengths and convert this information into electrical signals, which are not images recognizable by the human eye.
  • Another possible embodiment is to have one single fiber and several sensors at its end, each sensor configured to detect one particular range (Ultraviolet, Visible or InfraRed) of wavelengths.
  • Another further possible embodiment is to have several fibers connected to several sensors, one fiber for each range of wavelengths. Current camera lenses have different focal points at different wavelengths, making it difficult to obtain images in the same range.
  • Distal ends of the several multi-mode optical fibers can be aimed in different directions. This means that an object can be viewed from various directions. Likewise, various objects (e.g., obstacles or vehicles in the exterior environment of the vehicle (10)) can be observed at the same time in all directions, for example, in stereo vision.
  • the image processor (400) may be a CPU (Central Processing Unit), a data processor or an ISP (Image Signal Processor).
  • the image processor (400) performs at least the decoding of the signal from the optical sensor into images.
  • the image processor (400) of the imaging device can be connected to the ECU (600) of the vehicle (10) which, among other functions, makes a "surrounding view” or a "top-view” in a display for the driver of the vehicle (10).
  • the image processor (400) can be connected to a sensor (300) or be stacked with the sensor (300) in the same receiving device as shown in Figure 6.
  • One image processor (400) can be connected to one or more sensors.
  • The image processor (400) can be placed inside (202) the motor vehicle (10) or in any exterior (201) part of the vehicle (10), like the rear view mirror.
  • The multi-mode optical fiber (100) may be bent as desired, whereby the fiber (100) can occupy any position and can thereby monitor one or more objects. This allows easier integration in the vehicle (10).
  • The imaging device may further comprise storing means (e.g., a memory) connected to an output of the image sensor (300) to store data extracted from the signal received by the image sensor (300). In this case, the signal that comes from the image sensor (300) is saved without being processed by the image processor (400).
  • Alternatively, the storing means can receive the image after reconstruction from the image processor (400).
  • the image generation is centralized in the motor vehicle (10).
  • the sensor (300) and the image processor (400) are both located in the front part of the vehicle (10), near the engine.
  • the sensor (300) and the image processor (400) are located in one door of the vehicle (10); more particularly, the sensor (300) and the image processor (400) can be located in the exterior rear view mirror.
  • The sensor (300) and the image processor (400) can be separated and placed at different locations, or they both can be stacked in the same device located in the motor vehicle (10).
  • Figure 7 shows another example of possible locations for the sensor (300) and the image processor (400): a rear view mirror (70) located at the exterior of the vehicle (10), having an image sensor (300) connected to at least one optical fiber, e.g., a first optical fiber (100) and a second optical fiber (110), and a control unit acting as an image processor (400).
  • The first optical fiber (100) can be focused substantially rearward of the vehicle (10), acting as a CMS, and the second optical fiber (110) can be focused substantially downwards for a top-view.
  • The storing means, configured to save data either before pre-processing or after image processing, can be located in the rear view mirror (70) too.
  • All the rear view mirrors (70) at the exterior of the vehicle (10) are connected, in such a way that the control unit of at least one mirror can do the "stitching", etc.
  • At least one exterior mirror (70), preferably both exterior mirrors of the vehicle (10), can also be connected to the interior mirror.
  • Some existing mirrors / winglets carry two cameras, one focused on the ground for the top-view and one focused substantially backwards for the CMS.
  • the proposed embodiments of the invention replace both cameras by optical fibers.
  • All the mirrors of the vehicle (10), interior and/or exterior rear-view mirrors, but not the winglet, can have a display.
  • These rear-view mirrors (in this case, not the winglet) comprise electro-optical means to switch from "mirror mode" to "display mode".
  • the image processor (400) may comprise an algorithm that includes an interferometric characterization of the fiber allowing image reconstruction. This means that one multi-mode optical fiber (100) may form a complete image (not just one pixel). Note that said characterization of the fiber is performed just once during the system lifetime, after which the image can be formed without any further interferometric fiber characterization.
  • One method for image reconstruction is applying the transmission matrix, obtained once with the interferometric characterization, to the speckle image from the second end of the multi-mode fiber (100) (see the reconstruction sketch after this list).
  • A setup composed of a He-Ne laser, two beam splitters, a galvanometric mirror and a multi-mode fiber allows obtaining the transmission matrix. Image reconstruction is then possible.
  • In another embodiment, the image processor (400) uses a trained machine learning model to convert the light from the multi-mode optical fiber (100) into an image.
  • light passing through the multi-mode optical fiber (100) may have various forms or patterns formed therein.
  • both ends (101, 103) of the fiber (100) can include a lens to transform the naturally diverging light emission from an optical fiber into a parallel beam of light.
  • the choice of the image reconstruction method strongly depends on the needs of the use case in terms of thermal and mechanical stability, resolution, frame rate and so on.
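Referring back to the time-multiplexed embodiment described in the list above (Figure 3), the following is only an illustrative control-loop sketch of how a control unit might synchronize the optical shutters with the per-fiber reconstruction models. The shutter and sensor interfaces (`open()`, `close()`, `read_sensor_frame`) are assumed names, not part of the patent.

```python
import itertools

def time_multiplexed_capture(shutters, decoders, read_sensor_frame, n_frames):
    """Alternate the light input of several fibers onto one shared image sensor.

    shutters:          list of objects with open()/close() controlling each fiber's light.
    decoders:          list of per-fiber reconstruction callables (one machine learning
                       or interferometry model per fiber, as described above).
    read_sensor_frame: callable returning the next raw sensor (speckle) frame.
    n_frames:          total number of sensor frames to acquire.
    """
    assert len(shutters) == len(decoders)
    frames_per_fiber = [[] for _ in shutters]
    fiber_cycle = itertools.cycle(range(len(shutters)))
    for _ in range(n_frames):
        i = next(fiber_cycle)
        for j, shutter in enumerate(shutters):
            if j == i:
                shutter.open()      # only fiber i illuminates the sensor this frame
            else:
                shutter.close()
        speckle = read_sensor_frame()               # e.g., one frame of a 60 fps sensor
        frames_per_fiber[i].append(decoders[i](speckle))  # decode with fiber i's model
    return frames_per_fiber  # with two fibers and a 60 fps sensor, each stream is ~30 fps
```

With two fibers the loop reproduces the alternation described above: each fiber's field of view is refreshed at half the sensor frame rate, and each frame is decoded with the model associated with the fiber that was open.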
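The transmission-matrix reconstruction mentioned in the list above can be illustrated as follows. This is only a schematic sketch: the transmission matrix is assumed to be available from the one-time interferometric characterization, the pseudo-inverse is one common way of inverting it (the patent does not prescribe a numerical method), and the dimensions and the simulated speckle are toy values.

```python
import numpy as np

def reconstruct_from_speckle(speckle_frame, transmission_matrix, image_shape):
    """Recover an image from the speckle recorded at the fiber's second end.

    transmission_matrix: complex matrix T measured once by interferometry, mapping
                         input pixels (first end) to the output field (second end).
    speckle_frame:       the pattern recorded at the second end.
    """
    t_pinv = np.linalg.pinv(transmission_matrix)          # invert the fiber's mode mixing
    input_estimate = t_pinv @ speckle_frame.reshape(-1)   # undo the propagation
    return np.abs(input_estimate).reshape(image_shape)    # intensity image

# Assumed toy dimensions: 32x32 input scene, 4096 speckle samples.
T = np.random.randn(4096, 32 * 32) + 1j * np.random.randn(4096, 32 * 32)
scene = np.zeros((32, 32))
scene[10:20, 12:18] = 1.0                                 # a simple known object
speckle = (T @ scene.reshape(-1)).reshape(64, 64)         # simulated speckle at the second end
print(np.allclose(reconstruct_from_speckle(speckle, T, (32, 32)), scene, atol=1e-6))
```

In practice a sensor records intensities rather than a complex field, which is why the interferometric setup (laser, beam splitters, galvanometric mirror) is needed for the characterization; the sketch only shows the linear-algebra step once the matrix is known.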

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Studio Devices (AREA)

Abstract

An electronic imaging device for motor vehicles (10), comprising: - at least one first optical fiber (100) which comprises a first end (101), a core (102) and a second end (103), to carry light from the first end (101) to the second end (103) guided through the core (102); - at least one sensor (300), wherein the at least one optical fiber (100) and sensor (300) are installed in a motor vehicle (10), and wherein the first end (101) of the fiber (100) points to an open space comprising the exterior (201) and/or the interior (202) of the motor vehicle (10) to capture light from the open space, and the second end (103) of the fiber (100) is connected to the at least one sensor (300) to receive the captured light; - an image processor (400) connected to the sensor (300) and configured to convert the light received by the sensor (300) into an image by means of an image reconstruction algorithm using machine learning or interferometry or both.

Description

IMAGING DEVICE SUITABLE FOR USE IN A MOTOR VEHICLE
DESCRIPTION
Field of the invention
The present invention has its application within the fiber optic sector providing imaging apparatus. More particularly, the present invention can be applied to capture images from the exterior and interior of motor vehicles.
The present invention relates to an electronic imaging device for capturing images without needing the use of any lens and using just one or a plurality of fibers.
Background of the invention
It is common that cameras, each at a different location, are arranged in or on a motor vehicle (e.g., a car or motorbike). Currently, cameras have the image sensor integrated, and this makes it difficult to place them in some places of the motor vehicle, because the lens, as well as the image sensor, has to point outside the vehicle and cannot be far from the bodywork of the car. On the other hand, cameras need enough space to dissipate the heat produced by the electronics of the cameras. There is also the problem that the surrounding vehicle electronics can be adversely affected by this heating of the camera.
Current cameras in vehicles have at least one lens and an image sensor that must be coplanar. Traditionally, the cameras did not incorporate an ECU (Electronic Control Unit), but currently in the automotive sector, especially for top-view systems, an ECU may be incorporated in each camera, with an ISP (Image Signal Processor) connected to the image sensor. Also, some current cameras for parking, "front view" and CMS (Control Monitoring System), which are used for exterior or interior "rear view mirrors", do not incorporate ECUs, and so the tasks of the ISP are performed, not in the camera itself, but in a "master ECU" of the vision system or directly in an ECU within a display device. In addition, currently (at least in top-view systems) there are usually four cameras (one with a front field of view, one with a rear field of view and two on the sides with side fields of view) which feed, in turn, another ECU (the "master ECU" of the vision system) which usually performs image processing, for example stitching (joining) the images of each of the cameras, or producing a top view, for example using IPM (Inverse Perspective Mapping) or other methods. Therefore, if there are ten cameras in a car, for instance, ten image sensors and ten lenses will be needed (and ten ISPs in the current cameras for top-view).
One of the main uses of optical fibers is to bring light anywhere, but light from optical fibers is not capable of forming images. It is known that light from an optical fiber only forms a pixel after being processed by the receiving unit. Therefore, one single fiber is required per pixel and a plurality of optical fibers (a bundle) is needed to obtain a single image. The resolution of the images depends on the number of fibers forming the bundle; for example, if the device has a bundle of 10,000 optical fibers, the maximum resolution of the obtained image will be 0.01 Megapixel. As each optical fiber leads to one pixel of the image and the final diameter of the optical fiber bundle dictates the image resolution, the prior-art technology using the mentioned optical fiber bundle image forming devices achieves a maximum resolution limited to approximately 30k pixels (0.03 MP).
EP1273516A1 discloses a device for acquisition of optical information that has a receiving unit and optical fibers for detecting the optical information. However, the optical fibers in this case act as lenses. EP1273516A1 requires the end of the optical fiber to be treated or ground, for example by cutting the fiber ends (or lens-cutting them) in the desired manner, in order to act as a lens.
On the other hand, optical fibers used to transmit information through light can be single-mode or multi-mode fibers. The main difference between them is the size of the core and the number of modes that can propagate through them. Considering light as a wave, when the light wave is guided down an optical fiber, the wave propagates in modes. The diameter of the core of a single-mode fiber is smaller than that of multi-mode fibers, which allows only one mode of light to propagate, while a multi-mode fiber can propagate more than one mode. Consequently, multi-mode fiber carries much more information. Light passing through a multi-mode fiber results in a pattern of speckles impossible to decode by the human eye. Therefore, it is highly desirable to provide an imaging device suitable for being placed at any location of a motor vehicle and for capturing images with the highest resolution from the exterior of the motor vehicle.
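As an illustrative aside on the single-mode/multi-mode distinction described above, the following sketch uses the standard normalized frequency (V-number) of a step-index fiber: a fiber guides only one mode when V is below roughly 2.405, and roughly V²/2 modes otherwise. The refractive indices and core radii below are assumed example values, not figures from the patent.

```python
import math

def v_number(core_radius_um: float, wavelength_um: float, n_core: float, n_clad: float) -> float:
    """Normalized frequency V = (2*pi*a / lambda) * NA for a step-index fiber."""
    na = math.sqrt(n_core**2 - n_clad**2)  # numerical aperture
    return 2 * math.pi * core_radius_um / wavelength_um * na

# Assumed example: typical silica indices, visible light at 0.55 um.
for radius_um in (1.0, 25.0):  # ~2 um core vs ~50 um core
    v = v_number(radius_um, 0.55, n_core=1.46, n_clad=1.45)
    regime = "single-mode" if v < 2.405 else f"multi-mode (~{int(v**2 / 2)} modes)"
    print(f"core radius {radius_um} um -> V = {v:.2f}, {regime}")
```

The larger core supports many modes, which is why, as stated above, a multi-mode fiber can carry much more information than a single-mode one.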
Summary of the invention
The present invention solves the aforementioned problems and overcomes the previously explained state-of-the-art limitations by providing an electronic imaging device which uses single optical fibers to bring light anywhere, wherein the maximum resolution of the image is limited only by the wavelength of the captured light, with no restrictions imposed by the fiber diameter or the number of optical fibers (bundle).
The present invention requires only one single image sensor (also known as a photo sensor), located at one end of a single or multiple optical fibers, to generate one or multiple video streams. A few methods for characterization of the fiber allow image reconstruction, for example: interferometry, machine learning (e.g., deep neural networks), etc.
The present invention, using several optical fibers, allows multiple configurations and settings of sensor shape (e.g., panoramic), focal distance, extended Field of View (FOV) or extended camera orientation, mechanical geometry, etc., to achieve a very flexible lensless imaging system with no distortion.
More particularly, the system of the present invention uses one (or more) multi-mode or single-mode optical fiber which, in turn, includes a first end, a body and a second end. The first end of the fiber focuses on an "open space" to capture light from the exterior / interior of the car. The light passes through the fiber, resulting in a speckle pattern at the second end of the fiber. This light goes to the sensor that is located at this second end and which is connected to a data processing unit. The data processing unit can decode the image after a one-time calibration of the system. Said calibration or image reconstruction is needed to "translate" a beam of light captured and guided by at least one optical fiber into an image "understandable" by the human eye. The calibration of the system is performed (e.g., by interferometry or machine learning or by using both image reconstruction algorithms) only once (in the assembly line of the vehicle, before commercializing it). As the sensor can detect and convey light in all wavelengths, the image processor can be designed to decode this information to provide images recognizable by the human eye. Thus, the final image can be displayed so that a user can act accordingly, e.g. the driver of a motor vehicle can see the exterior or interior of his/her vehicle, or the final image can be further processed for machine vision purposes.
If the optical fiber is multi-mode, a minimum diameter is required. If its diameter is very small, the optical fiber is "single-mode". The present invention, working with single-mode fiber(s), can take images but only in black and white (not in color).
The present invention allows a great number of possible applications: driver monitoring, Surrounding View System (SVS), Control Monitoring System (CMS), front view, rear-view mirror system, etc.
An aspect of the present invention refers to an electronic imaging device for motor vehicles which comprises at least a (first) optical fiber to carry light from a first end to a second end guided through its core and further comprises at least one sensor, both the at least one optical fiber and sensor to be installed in a motor vehicle. The first end of the optical fiber can capture light from the exterior of the vehicle, or the interior of the vehicle, or from both the exterior and/or interior of the vehicle; that is, the first end of the optical fiber points to an open space (comprising the exterior and/or interior of the vehicle) to capture light from the open space. The second end of the optical fiber is connected to the at least one sensor to receive the captured light, the sensor converting the light into an electric signal. And the electronic imaging device comprises an image processor configured to convert the signal from the sensor into an image, which is recognizable by the human eye. The image processor can use trained machine learning or interferometry as an image reconstruction algorithm to get the recognizable image from the electric signal of the sensor.
Optional and additionally, the imaging device may further comprise more than one optical fiber according to a possible embodiment. The additional (second) optical fiber can be connected to the (first) sensor of the first optical fiber or to another (second) sensor, the second sensor (of the second optical fiber) and the first sensor (of the first optical fiber) being both connected to an image processor. The two respective first ends of the first optical fiber and the second optical fiber can both point, in a possible embodiment, to the exterior of the vehicle covering a field of view of 360 degrees.
The image processor, connected to one sensor or to a plurality of sensors, is configured to convert the light carried to the connected sensor(s) into an image by using an image reconstruction algorithm (e.g. trained machine learning, interferometry or both). The image processor can be placed in the (interior or exterior of the) vehicle, stacked with the sensor in a same housing placed in the vehicle or at a location different from the location of the sensor to which it is connected. The image processor (e.g., a central processing unit, a data processor or an image signal processor) can be connected to an electronic control unit (ECU) of the vehicle and/or a display of the vehicle, according to other possible embodiments.
The present invention has a number of advantages with respect to prior art, which can be summarized as follows:
There is no restriction on the maximum resolution of the obtained image and a minimum pixel size of 0.5 µm can be achieved.
The present invention is a lensless image forming system with a clear reduction in cost and size. Therefore, its integration in vehicles becomes easier and its visual impact is also reduced.
The present invention solves the particular above-mentioned problems for the use in motor vehicles regarding the large amount of space required by cameras arranged in vehicles, the heat generation due to the functioning of these cameras, and the negative impact of electromagnetic waves. The proposed imaging device can be placed wherever it is required in a motor vehicle or any other location, as the fiber can be bent and the sensor is placed at the second end of the fiber. Such flexibility makes it possible to place the sensor inside the motor vehicle or in the exterior part (e.g., in exterior rear view mirrors or bumpers). Optical fibers do not emit any electromagnetic waves, so their electromagnetic compatibility is guaranteed. Optical fibers are thus ideally suited for receiving the optical information (e.g., an image) from any place and for bringing together the collected optical information when the optical fibers are connected to a centrally located receiving unit.
Unlike existing solutions in the automotive industry, which are moving towards autonomous driving systems with an increasing number of cameras per vehicle (each camera requiring an individual lens and sensor, and in most cases an individual image signal processor, ISP), the present invention does not need the use of lenses, can reduce the number of sensors down to one and requires only one ISP. Taking into account that lenses, sensors and ISPs represent most (80%) of the final camera price, the present invention achieves significant savings in the overall size and price.
Using a single sensor to capture several different FOVs also allows the design of the system with a sensor optimized for the display resolution. Moreover, the number or orientation of the fibers can be modified, so that the system can be customized for other requirements.
The present invention does not include any optical fiber working as a lens. In fact, the end of the optical fiber does not need to be ground (or lens cut) in any particular shape. This is a main advantage with respect to the prior-art solution disclosed in EP1273516, since the present invention allows the end of the fiber to have almost any shape. Furthermore, the present invention includes machine learning or interferometry in order to generate an image "understandable" by the human eye. Once the calibration is done (by the machine learning / interferometry) the ends cannot change shape. Therefore, another advantage is that, if the end is damaged (its shape/geometry is changed), another calibration can be done which takes into account the new geometry of the end of the fiber, instead of replacing all the optical fibers (which would be very expensive, since the replacement of fibers would involve opening the car and removing all the fibers to put the new ones in).
These and other advantages will be apparent in the light of the detailed description of the invention.
Description of the drawings
For the purpose of aiding the understanding of the characteristics of the invention, according to a preferred practical embodiment thereof and in order to complement this description, the following Figures are attached as an integral part thereof, having an illustrative and non-limiting character:
Figure 1 shows a block diagram of an electronic imaging device for motor vehicles, according to a preferred embodiment of the invention.
Figure 2 shows a schematic illustration of the image sensor of the electronic imaging device receiving light from multiple optical fibers, according to a possible embodiment of the invention using two optical fibers.
Figure 3 shows a schematic illustration of the image sensor of the electronic imaging device receiving light from multiple optical fibers, according to another possible embodiment of the invention using a single image sensor shared in time.
Figure 4 shows the electronic imaging device with multiple optical fibers installed in a motor vehicle, according to a possible embodiment of the invention.
Figure 5 shows the electronic imaging device installed in a motor vehicle, according to another possible embodiment of the invention.
Figure 6 shows the electronic imaging device installed in a motor vehicle, according to a further possible embodiment of the invention.
Figure 7 shows a rearview mirror for the exterior of the motor vehicle with the image sensor and image processor incorporated in the mirror itself, according to another further possible embodiment of the invention.
Preferred embodiment of the invention
The matters defined in this detailed description are provided to assist in a comprehensive understanding of the invention. Accordingly, those of ordinary skill in the art will recognize that variations, changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the invention. Also, descriptions of well-known functions and elements are omitted for clarity and conciseness.
Of course, the embodiments of the invention can be implemented in a variety of architectural platforms, operating and server systems, devices, systems, or applications. Any particular architectural layout or implementation presented herein is provided for purposes of illustration and comprehension only and is not intended to limit aspects of the invention.
Figure 1 presents an electronic imaging device or system to be installed in a vehicle (10) according to a possible embodiment of the invention and comprising at least one first optical fiber (100) with a first end (101), a core (102) and a second end (103). The first end (101) of the first optical fiber (100) points to an open space comprising the exterior (201) and/or the interior (202) of the vehicle (10). The light from the open space is captured by the first end (101) of the first optical fiber (100) and guided through the core (102) to the second end (103). At least one sensor (300) is connected to the second end (103) of the first optical fiber (100) to transform the captured light into an electrical signal input into an image processor (400) connected to the sensor (300). The image processor (400) is configured to convert the signal from the sensor (300) into an image which is visible and recognizable by the human eye. The sensor (300) is preferably an image sensor or photo sensor, located in the vehicle (10), e.g. in the (exterior and/or interior) rear view mirrors or in any other place of the vehicle (10). Finally and optionally, the image can be further processed to have some characteristics modified (e.g., brightness, stitching, overlays, etc.).
In a preferred embodiment, the imaging device uses a single multi-mode optical fiber (100) to bring light to the sensor (300), and the light is decoded by the image processor (400) connected to the sensor (300). The light captured by the first end (101) of the fiber (100) reaches the second end (103), at which the sensor (300) is placed. The resolution of the images depends solely on the wavelength of the light captured by the first end (101). In order to obtain images from the exterior (201) of the motor vehicle (10), the first end (101) of the fiber (100) needs to point to the exterior (201). Generally, the placement of this first end (101) of the fiber (100) can be in the exterior (201) of the motor vehicle (10) or inside (202) the motor vehicle (10), but always pointing to an open space (exterior or interior of the vehicle (10)). That is, in a possible implementation, the first end (101) of the fiber (100) can instead point to the interior (202) of the vehicle (10) (e.g., pointing to the driver or passengers).
The sensor (300) converts incident light to a digital signal, and said digital signal may be capable of direct feed into the image processor (400). Thus, the present invention can work with only a single optical fiber, either "multi-mode" or not, connected to a single image sensor (300). Additionally, this digital signal coming from the image sensor (300) can be saved or directly sent to the image processor (400), which applies machine learning or interferometry to transform the light (in digital signal form) into an image understandable by a human being. Training must be done at the production / assembly plant before using the car.
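The patent does not detail how this one-time training is carried out. The following sketch only illustrates one plausible calibration procedure, under the assumption that known test patterns are presented in front of the fiber's first end while the corresponding speckle frames are recorded, giving (speckle, ground-truth) pairs on which a reconstruction model can be fitted; all function and variable names are illustrative.

```python
import numpy as np

def collect_calibration_pairs(show_pattern, read_sensor, patterns):
    """Record (speckle frame, known pattern) pairs at the assembly line.

    show_pattern: callable that presents a known test image to the fiber's first end
                  (e.g., on a calibration target or screen).
    read_sensor:  callable returning the sensor frame (speckle) as an array.
    patterns:     iterable of known test images (ground truth).
    """
    speckles, truths = [], []
    for pattern in patterns:
        show_pattern(pattern)           # illuminate the fiber end with a known scene
        speckles.append(read_sensor())  # record the resulting speckle frame
        truths.append(pattern)
    return np.stack(speckles), np.stack(truths)

def fit_linear_reconstructor(speckles, truths):
    """Fit a simple linear map speckle -> image by least squares.

    This stands in for the "machine learning" step; a deep neural network could be
    trained on the same pairs instead.
    """
    X = speckles.reshape(len(speckles), -1)    # each speckle frame flattened
    Y = truths.reshape(len(truths), -1)        # each ground-truth image flattened
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # reconstruction matrix
    return W

def reconstruct(speckle, W, image_shape):
    """Apply the fitted map to a new speckle frame while driving."""
    return (speckle.reshape(1, -1) @ W).reshape(image_shape)
```

Once `W` (or a trained network) is stored in the image processor (400), no further calibration is needed unless the fiber end is damaged, in which case the calibration can simply be repeated, as discussed in the summary above.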
Anyway, either by "machine learning" or by interferometry, the resolution of the final image is a function of the wavelength of the light captured by the first end (101) of the fiber (100). In a preferred example, the resolution is proportional to the wavelength squared, λ². In the case that "interferometry" is used, the resolution could be proportional to r ≈ (16/π)(NA/λ²), where NA is the numerical aperture of the fiber and λ the wavelength of the light.
Figure 1 shows the image sensor (300) connected to the image processor (400) and, according to a possible embodiment of the invention, the final image obtained by the image processor (400) goes directly to a display (500). Another possible embodiment of the invention is connecting the image processor (400) to a machine vision module, for example implemented in an Electronic Control Unit or ECU (600), which runs a machine learning application; e.g., in an autonomous car, the machine vision module allows the system to analyse the video without displaying it. In motor vehicles there is thus both machine learning and machine vision. The "machine learning", or in its absence the "interferometry", is done in the production line and serves to train the system, i.e., the image processor (400), to decode the light captured by the optical fiber (100). The machine vision can be used to recognize vehicles, calculate distances, see lanes, etc. Thus, the "machine learning" is carried out in the production / assembly line to convert light into an image, while the "machine vision" is used during the driving of the car and serves to recognize vehicles, calculate distances, etc. The Image Processor (400) performs interferometry / machine learning (reconstruction) based on the training received in the production / assembly line of the vehicle (10). The ECU (600) or "master ECU" consists of a controller that performs different operations such as image stitching (e.g., for top-view), overlays (e.g., guidelines for parking), and the machine vision module (recognition of objects in the image, such as Line Recognition (LR), but also traffic signs, pedestrian crossings, Blind Spot Detection, etc.).
The machine vision module comprises a Line Recognition (LR) module. In addition, the machine vision comprises a module for recognizing traffic signals.
The electronic imaging device can comprise one or more image processors (400), and different modules (e.g., the "vision module") can be in the same processor or in different processors. The image processor (400) is any controller that processes the image, and its modules can be the following: a decoding module configured to obtain the pre-processed image from the electric signal of the image sensor (300), and post-processing modules, such as machine vision, which reprocess the image obtained by the decoding module. Thus, the image obtained by the decoding module can then be displayed directly, or a post-processed image, in which information has been extracted from or added to said image by the post-processing modules, is the one finally displayed.
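A minimal sketch of how such a modular image processor could be organized is given below, in Python; the class names, the matrix-based decoder and the stubbed machine vision results are hypothetical illustrations of the decode-then-post-process flow described above, not an implementation of the disclosed device.

import numpy as np

class DecodingModule:
    # Turns the raw electric signal of the image sensor into a pre-processed image.
    # The decoder is a placeholder matrix assumed to be calibrated/trained beforehand.
    def __init__(self, decoder_matrix):
        self.decoder_matrix = decoder_matrix

    def decode(self, sensor_signal):
        return self.decoder_matrix @ sensor_signal.ravel()

class MachineVisionModule:
    # Post-processes the decoded image, e.g. line or traffic-sign recognition (stubbed).
    def process(self, image):
        return {"lines": [], "traffic_signs": []}

class ImageProcessor:
    def __init__(self, decoding_module, postprocessing_modules):
        self.decoding_module = decoding_module
        self.postprocessing_modules = postprocessing_modules

    def run(self, sensor_signal):
        image = self.decoding_module.decode(sensor_signal)
        extracted = [m.process(image) for m in self.postprocessing_modules]
        return image, extracted

# Hypothetical sizes: a 64x64 sensor read-out decoded into a 32x32 image
decoder = np.random.rand(32 * 32, 64 * 64)
processor = ImageProcessor(DecodingModule(decoder), [MachineVisionModule()])
image, info = processor.run(np.random.rand(64, 64))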
The multi-mode fiber (100) used by the proposed imaging device includes a core (102) whose features can be selected from an outer diameter with a size-range of 15 micrometers to several millimeters and different shapes: circular, square, hexagonal, etc. According to some possible embodiments, some examples of preferred size-ranges for the multi-mode fiber (100) to be used are: 15 micrometers - 2000 micrometers, 15 micrometers - 1040 micrometers, 15 micrometers - 600 micrometers and 15 micrometers - 250 micrometers. Furthermore, the ends (101, 103) of the fiber (100) can be plane or they can have another shape depending on the requirements. Moreover, according to the materials of the fiber (100), as long as these materials fulfil the refractive-index requirement to work as a fiber, the acceptance cone of the multi-mode fiber can be adjusted to the actual requirements of the use case. The acceptance cone determines the maximum angle with which the light can enter the fiber (100). Its half angle is the acceptance angle, and it depends only on the refractive indices of the fiber (100) - core and cladding - and of the medium. With a large acceptance angle it is possible to obtain images with a bigger field of view at the first end (101) of the fiber (100), which reduces the number of fibers or cameras needed to obtain images.
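The dependence of the acceptance cone on the refractive indices can be illustrated with the standard step-index fiber relations shown below; the index values are arbitrary examples chosen by the editor, not parameters of the claimed fiber.

import math

def acceptance_half_angle_deg(n_core, n_cladding, n_medium=1.0):
    # Half-angle of the acceptance cone of a step-index fiber:
    # NA = sqrt(n_core^2 - n_cladding^2), sin(theta) = NA / n_medium
    numerical_aperture = math.sqrt(n_core ** 2 - n_cladding ** 2)
    return math.degrees(math.asin(min(numerical_aperture / n_medium, 1.0)))

# Illustrative silica core/cladding in air: half-angle of about 14 degrees
print(acceptance_half_angle_deg(1.48, 1.46))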
The imaging device can work using just one single multi-mode fiber (100) but, according to another embodiment, multiple fibers (110, 100) can be used to obtain a complete view of the exterior (201) and so improve the driver's visibility, allowing the driver to have a field of view or FOV (200) selected from a top view (also known as bird's eye view) or a surround view (SVS: Surround View System) of the exterior (201) and helping him/her to park the vehicle (10). Furthermore, this imaging device allows the implementation of a "transparent vehicle", where the user can see in all directions, including below or over the vehicle, with no blind spots.
According to a possible embodiment, shown in Figure 2, the image sensor (300), whose digital output in turn is connected to the image processor (400), is connected to two (or more) optical fibers (100, 110). The image sensor (300) is then configured to receive light from several sources, i.e., the light (1100, 1110) from different optical fibers (100, 110), which may or may not have different fields of view (200) from each other. However, if there are several optical fibers, the equivalent resolution for each image from each optical fiber (100, 110) is reduced proportionally. Therefore, if an image with the maximum resolution is to be obtained, it is better to receive light from a single optical fiber.
Figure 3 shows another example of implementation with more than one fiber, wherein each optical fiber (100, 110) uses the entire image sensor (300) to increase/decrease the size of light (2100, 2110) received by the image sensor (300). According to an example of embodiment, when several optical fibers (100, 110) are physically connected through the image sensor (300) to the same image processor (400), the light (2100, 2110) from the respective optical fibers (100, 110) is enabled / disabled as input to the image sensor (300). During one frame, e.g., frame_0, the image sensor (300) receives light only from the first optical fiber (100). At the next frame, frame_1, the image sensor (300) stops receiving light (2100) from the first optical fiber (100) and allows the light input (2110) from the second optical fiber (110). At the subsequent frame, frame_2, the image sensor (300) stops receiving light (2110) from the second optical fiber (110) and allows the light (2100) input (again) from the first optical fiber (100). And so on. That is, the light input (2100, 2110) of the different optical fibers (100, 110) is alternated. According to an example, the image sensor (300) can operate at 60 frames per second. Therefore, if there are two optical fibers (100, 110), each "field of view" comes out at 30 frames per second. The enabling/disabling of the light passing can be performed by an optical shutter, i.e., a diaphragm. The shutters can be mechanical or optoelectronic, the latter being faster than the mechanical ones. The light (2100) of the first optical fiber (100) can be decoded according to a first machine learning, while the light (2110) of the second optical fiber (110) can be decoded according to a second machine learning (or interferometry).
If a plurality of fibers is used, a different machine learning (or interferometry) algorithm is applied for each fiber. For example, if there are four optical fibers, four different machine learning algorithms are needed, one for each fiber. A control unit can synchronize the optical shutter and the image processor, so that the appropriate machine learning is used for its corresponding fiber. In the examples shown by Figures 2-3, the first optical fiber (100) is associated with a first machine learning algorithm applied to that fiber, while the second optical fiber (110) is associated with a second machine learning algorithm. Therefore, in Figure 2, a determined first area or portion of the image sensor (300) corresponds to the light (1100) received by the first optical fiber (100), while a second area/portion of the image sensor (300) corresponds to the light (1110) from the second optical fiber (110). Therefore, the image sensor (300) is divided into a plurality of areas/portions, there are as many portions as optical fibers, each optical fiber is associated with its area (portion) of the image sensor, and each zone (portion) is associated with a different machine learning (or interferometry), so that each fiber has its own machine learning (or interferometry). In Figure 3, a control unit is needed to alternate the machine learning when there are two or more optical fibers, as the control unit does not only change the "enabling / disabling" through an optical shutter, but also applies one or another machine learning (or interferometry) according to the optical fiber that is activated. That is, the optical shutter activates the optical fiber and the control unit applies the machine learning (or interferometry) associated with that optical fiber. Said control unit can be independent or be implemented in the image processor (400), or in the "Master ECU" (600). According to other possible embodiments, the imaging device comprises a first multi-mode fiber (100) with a first sensor and a second multi-mode optical fiber (110), which in turn may be connected to: the first sensor of the first multi-mode fiber, or to a stacked device composed of another sensor and an image processor, or to a second sensor, both (aforementioned first and second) sensors being connected to the same image processor (400).
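A minimal sketch of the control-unit behaviour described above, assuming hypothetical shutter, sensor and decoder interfaces (none of them defined in this disclosure), could look as follows in Python: the active fiber is alternated every frame and the matching reconstruction algorithm is selected for it.

def capture_loop(shutter, sensor, decoders, num_frames):
    # shutter.select(i)   -- hypothetical call letting only fiber i reach the sensor
    # sensor.read_frame() -- hypothetical call returning the raw speckle frame
    # decoders[i]         -- reconstruction (machine learning or interferometry) for fiber i
    images = []
    n_fibers = len(decoders)
    for frame_idx in range(num_frames):
        fiber_idx = frame_idx % n_fibers      # frame_0 -> fiber 0, frame_1 -> fiber 1, ...
        shutter.select(fiber_idx)
        raw = sensor.read_frame()
        images.append((fiber_idx, decoders[fiber_idx](raw)))
    return images

With two fibers and a sensor running at 60 frames per second, this scheme yields 30 reconstructed frames per second per field of view, as stated above.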
The (first and/or second) image sensor (300) can be located in the same housing as the image processor (400), e.g., near the engine of the vehicle (10), or at a location separate from that of the image processor (400) to which said image sensor (300) is connected. For example, in a possible embodiment, there are two image sensors, each one located in one of the two rear view mirrors of the vehicle, and both sensors can be connected to the master ECU (600) of the vehicle (10).
According to another example of implementation, a plurality of (e.g., 16) multi-mode fibers is used to acquire the light (not an image) from the exterior (201) of the vehicle (10).
For example, in a possible embodiment, there are several optical fibers in the vehicle (10), connected to the same image sensor (e.g., located behind the dashboard, that is, near the engine of the vehicle (10)); the image sensor (300) is connected, in turn, to an image processor (400), e.g., an ISP, and all the optical fibers focus on points (3000, 3001, 3002, 3003, 3004, 3005, 3006, 3007, 3008) to obtain a FOV of 360°, as shown in Figure 4. There can be more than one image sensor (300), each one connected to the corresponding image processor (400) and, in turn, to the master ECU (600) of the vehicle (10).
The whole UltraViolet/Visible/InfraRed range of light can be covered just by changing the optical fiber and/or the sensor used at the second end of the fiber. In a possible embodiment, one single fiber and only one sensor detect these wavelengths and convert this information into electrical signals, which are not images recognizable by the human eye. Another possible embodiment is to have one single fiber and several sensors at its end, each sensor configured to detect one particular range (UltraViolet, Visible or InfraRed) of wavelengths. A further possible embodiment is to have several fibers connected to several sensors, one fiber for each range of wavelengths. Current camera lenses have different focal points at different wavelengths, making it difficult to obtain focused images across the whole range. Besides, current cameras have focusing limitations (hyperfocal distance), but the proposed imaging device can obtain images at different distances without the need to focus, which allows three-dimensional, 3D, images. The distal ends of the several multi-mode optical fibers can be lined up in different directions. This means that an object can be viewed from various directions. Likewise, various objects (e.g., obstacles or vehicles in the exterior environment of the vehicle (10)) can be observed at the same time in all directions, for example, in stereo vision.
The image processor (400) may be a CPU (Central Processing Unit), a data processor or an ISP (Image Signal Processor). The image processor (400) performs at least the decoding of the signal from the optical sensor into images. For application in motor vehicles, the image processor (400) of the imaging device can be connected to the ECU (600) of the vehicle (10) which, among other functions, generates a "surround view" or a "top-view" on a display for the driver of the vehicle (10). In a further possible embodiment, the sensor (300) (e.g., located behind the dashboard of the vehicle (10)) can be connected directly to the ECU (600), as shown in Figure 5, which performs the top-view for the driver.
As described before, the image processor (400) can be connected to a sensor (300) or be stacked with the sensor (300) in the same receiving device, as shown in Figure 6. One image processor (400) can be connected to one or more sensors. The image processor (400) can be placed inside (202) the motor vehicle (10) or in any exterior (201) part of the vehicle (10), such as the rear view mirror. In one example, the multi-mode optical fiber (100) may be bent as desired, whereby the fiber (100) can occupy any position and can thereby monitor one or more objects. This allows easier integration in the vehicle (10).
Optionally, according to an example of implementation, there may be storing means (e.g., a memory) connected to an output of the image sensor (300) to store data extracted from the signal received by the image sensor (300). This could be useful for accident monitoring. The signal that comes from the image sensor (300) is saved without being processed by the image processor (400). In the case of an accident, it is possible a posteriori to "rescue" the image from the stored light data, by applying the processing and conversion into an image, as explained. In an embodiment, the storing means can receive the image after reconstruction from the image processor (400).
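A minimal sketch of such storing means, assuming a simple fixed-size ring buffer of raw (undecoded) sensor frames, is given below; the buffer length and frame format are arbitrary editorial assumptions.

from collections import deque
import numpy as np

class RawFrameBuffer:
    # Keeps the most recent raw sensor frames so they can be decoded a posteriori,
    # e.g. after an accident, by applying the reconstruction described above.
    def __init__(self, max_frames=600):          # roughly 10 s at 60 frames per second
        self.frames = deque(maxlen=max_frames)

    def push(self, raw_frame):
        self.frames.append(np.array(raw_frame, copy=True))

    def dump(self):
        # Returns all stored raw frames, oldest first, for later reconstruction.
        return list(self.frames)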
The image generation is centralized in the motor vehicle (10). In one embodiment, the sensor (300) and the image processor (400) are both located in the front part of the vehicle (10), near the engine. In another embodiment, the sensor (300) and the image processor (400) are located in one door of the vehicle (10); more particularly, the sensor (300) and the image processor (400) can be located in the exterior rear view mirror. In another example, the sensor (300) and the image processor (400) can be separated and placed at different locations, or they both can be stacked in the same device located in the motor vehicle (10).
Figure 7 shows another example of possible locations for the sensor (300) and the image processor (400): a rear view mirror (70) located at the exterior of the vehicle (10), having an image sensor (300) connected to at least one optical fiber, e.g., a first optical fiber (100) and a second optical fiber (110), and a control unit acting as an image processor (400). In a possible example in which the rear-view mirror (70) has two optical fibers, as illustrated in Figure 7, the first optical fiber (100) can be focused substantially rearward of the vehicle (10), acting as a CMS, and the second optical fiber (110) can be focused substantially downwards for a top-view. According to another example, the storing means, configured either to save data before pre-processing or after image processing, can be located in the rear view mirror (70) too. According to another example, all the rear view mirrors (70) at the exterior of the vehicle (10) are connected, in such a way that the control unit of at least one mirror can perform "stitching", etc. According to a further example, at least one exterior mirror (70), preferably the two exterior mirrors of the vehicle (10), can also be connected to the interior mirror. These implementation options can be applied to the so-called winglets of motor vehicles. A winglet of a motor vehicle (10) is an exterior "rear-view mirror" that does not have a mirror but a camera instead. Some existing mirrors / winglets carry two cameras, one focused on the ground for the top-view and one substantially backwards for the CMS. The proposed embodiments of the invention replace both cameras by optical fibers. In addition, and optionally, all the mirrors of the vehicle (10), interior and / or exterior rear-view mirrors, but not the winglet, can have a display. In addition, and optionally, these rear-view mirrors (in this case, not the winglet) comprise electro-optical means to switch from "mirror mode" to "display mode".
The image processor (400) may comprise an algorithm that includes an interferometric characterization of the fiber allowing image reconstruction. This means that one multi-mode optical fiber (100) may form a complete image (not just one pixel). Note that said characterization of the fiber is performed just once during the system lifetime, after which the image can be formed without any further interferometric fiber characterization.
There are several approaches for the image processor (400) to decode the image and achieve image reconstruction from the light coming through a multi-mode optical fiber (100). One possible method is using a transmission matrix, but there are others such as machine learning. All of them are based on the same idea: the calibration of the setup, i.e., of the optical fiber position and orientation, before obtaining any recognizable image. The ways to do so may include deep neural networks, transmission matrix inversion, linear optimization, etc.
One method for image reconstruction is applying the transmission matrix, obtained once by interferometric characterization, to the speckle image at the end of the multi-mode fiber (100). A setup composed of a He-Ne laser, two beam splitters, a galvanometric mirror and a multi-mode fiber allows obtaining the transmission matrix, making image reconstruction possible.
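The principle of transmission-matrix reconstruction can be sketched numerically: once a complex transmission matrix T relating object pixels to speckle samples has been measured, a new object can be estimated from its speckle by (pseudo-)inverting T. The toy example below uses random matrices purely for illustration and is not the interferometric calibration procedure itself.

import numpy as np

rng = np.random.default_rng(0)

n_object, n_speckle = 256, 1024                  # toy sizes: object pixels, speckle samples
T = rng.normal(size=(n_speckle, n_object)) + 1j * rng.normal(size=(n_speckle, n_object))

x_true = rng.random(n_object)                    # unknown object (toy amplitudes)
y = T @ x_true                                   # speckle field measured at the fiber output

x_est = np.linalg.pinv(T) @ y                    # reconstruction by pseudo-inversion
print(np.allclose(np.abs(x_est), x_true, atol=1e-8))   # True in this noiseless toy case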
In another example, the image processor (400) uses a trained machine learning model to convert the light from the multi-mode optical fiber (100) into an image.
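A minimal sketch of such a learned decoder, assuming that pairs of speckle patterns and ground-truth images are collected during the production-line training mentioned above, is a simple least-squares (linear) model from speckle intensities to image pixels; the data shapes and the linear model are editorial assumptions, simpler than a deep network but illustrating the same train-then-decode principle.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration data gathered on the production / assembly line:
# each row pairs a flattened speckle pattern with its ground-truth image.
n_samples, n_speckle, n_pixels = 2000, 1024, 256
speckles = rng.random((n_samples, n_speckle))
images = rng.random((n_samples, n_pixels))

# "Training": fit a linear decoder W minimizing ||speckles @ W - images||^2
W, *_ = np.linalg.lstsq(speckles, images, rcond=None)

# "Inference" in the vehicle: decode a new speckle frame into an image
new_speckle = rng.random(n_speckle)
reconstructed_image = new_speckle @ W
print(reconstructed_image.shape)                 # (256,)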
In some examples, light passing through the multi-mode optical fiber (100) may have various forms or patterns formed therein.
In another possible embodiment, both ends (101, 103) of the fiber (100) can include a lens to transform the naturally diverging light emission from an optical fiber into a parallel beam of light. There are also methods to refocus the fiber (100) to compensate for possible perturbations (thermal, mechanical and so on), to increase the resolution limit, the optical fiber length, etc. In conclusion, the choice of the image reconstruction method strongly depends on the needs of the use case in terms of thermal and mechanical stability, resolution, frame rate and so on.
Note that in this text, the term "comprises" and its derivations (such as "comprising", etc.) should not be understood in an excluding sense, that is, these terms should not be interpreted as excluding the possibility that what is described and defined may include further elements, steps, etc.

Claims

1. An electronic imaging device for motor vehicles (10), comprising:
at least one first optical fiber (100) which comprises a first end (101), a core (102) and a second end (103), to carry light from the first end (101) to the second end (103) guided through the core (102);
the imaging device being characterized by further comprising:
at least one sensor (300), wherein the at least one first optical fiber (100) and sensor (300) are installed in a motor vehicle (10), and wherein
the first end (101) of the first optical fiber (100) points to an open space comprising the exterior (201) and / or the interior (202) of the motor vehicle (10) to capture light from the open space, and
the second end (103) of the first optical fiber (100) is connected to the at least one sensor (300) to receive the captured light, the at least one sensor (300) converting the captured light into an electric signal; and
at least an image processor (400) configured to convert the electric signal from the at least one sensor (300) into an image, which is recognizable by the human eye, by using an image reconstruction algorithm selected from trained machine learning and interferometry.
2. The imaging device according to claim 1, further comprising at least one second optical fiber (110).
3. The imaging device according to claim 2, wherein the second optical fiber (110) is connected to the sensor (300) of the first optical fiber (100).
4. The imaging device according to claim 2, wherein the second optical fiber (110) is connected to a second sensor, the second sensor and the sensor (300) of the first optical fiber being connected to an image processor (400).
5. The imaging device according to claim 2, wherein the second optical fiber (110) is connected to a stacked device comprising a second sensor and an image processor (400).
6. The imaging device according to any of claims 2-5, wherein the two respective first ends of the first optical fiber and the second optical fiber both point to the exterior (201) of the vehicle (10), covering a field of view (200) of 360 degrees.
7. The imaging device according to any preceding claim, wherein the image processor (400) is placed in the vehicle (10) at a location different from the location of the sensor (300) to which it is connected.
8. The imaging device according to any of claims 1-7, wherein the image processor (400) is stacked with the sensor (300) in the same housing placed in the vehicle (10).
9. The imaging device according to any preceding claim, wherein the image processor (400) is placed inside the vehicle (10).
10. The imaging device according to any of claims 1-9, wherein the image processor (400) is placed in the exterior of the vehicle (10).
11. The imaging device according to claim 10, wherein the image processor (400) is placed in the rear view mirror (70) of the vehicle (10).
12. The imaging device according to claim 11, wherein the sensor (300), the first optical fiber (100) and a second optical fiber (110) are placed in the rear view mirror (70) of the vehicle (10), the first optical fiber (100) being pointed rearward of the vehicle (10) and the second optical fiber (110) being pointed downwards, the second optical fiber (110) being connected to the sensor (300) of the first optical fiber (100).
13. The imaging device according to any preceding claim, wherein the image processor (400) is selected from a central processing unit, a data processor and an image signal processor.
14. The imaging device according to any preceding claim, wherein the image processor (400) is connected to an electronic control unit, ECU (600), of the vehicle (10).
15. The imaging device according to any preceding claim, wherein the image processor (400) is connected to a display (500) of the vehicle (10).
16. The imaging device according to any preceding claim, wherein the first end (101) of the at least one first optical fiber (100) is plane.
17. The imaging device according to any preceding claim, wherein the at least one first optical fiber (100) is bent.
18. The imaging device according to any preceding claim, wherein the at least one first optical fiber (100) is multi-mode.
19. The imaging device according to claim 18, wherein the optical fiber (100) has a substantially circular cross-section within the size-range of 15 micrometers to 250 micrometers.
20. The imaging device according to claim 18, wherein the optical fiber (100) has a substantially circular cross-section within the size-range of 15 micrometers to 600 micrometers.
21. The imaging device according to claim 18, wherein the optical fiber (100) has a substantially circular cross-section within the size-range of 15 micrometers to 1040 micrometers.
22. The imaging device according to claim 18, wherein the optical fiber (100) has a substantially circular cross-section within the size-range of 15 micrometers to 2000 micrometers.
23. The imaging device according to any of claims 2-18, wherein the at least one first optical fiber (100) and the at least one second optical fiber (110) are multi-mode.
24. The imaging device according to any preceding claim, wherein the sensor (300) is an image sensor.
25. The imaging device according to any of claims 3-24, wherein the sensor (300) is split into two portions, a first portion to receive the light (1100) from the first optical fiber (100) and a second portion to receive light (1110) from the at least one second optical fiber (110).
26. The imaging device according to any of claims 3-24, wherein the sensor (300) uses its entire surface to receive simultaneously the light (2100) from the at least one first optical fiber (100) and the light (2110) from the at least one second optical fiber (110).
27. The imaging device according to any of claims 25-26, further comprising an optical shutter to enable and disable the passing of the light (1100, 2100) from the first optical fiber (100) and the light (1110, 2110) from the second optical fiber (110) alternately to the sensor (300).
28. The imaging device according to claim 27, wherein the optical shutter is optoelectronic.
29. The imaging device according to claim 27, wherein the optical shutter is mechanical.
30. The imaging device according to any of claims 25-29, wherein the light (1100, 2100) of the first optical fiber (100) is decoded using a first image reconstruction algorithm and the light (1110, 2110) of the second optical fiber (110) is decoded using a second image reconstruction algorithm.
31. The imaging device according to any preceding claim, further comprising storing means connected to the sensor (300) to store data extracted from the light received by the sensor (300).
32. The imaging device according to any preceding claim, further comprising storing means connected to the image processor (400) to store data extracted from the image received by the image processor (400).
PCT/EP2019/055029 2019-02-28 2019-02-28 Imaging device suitable for use in a motor vehicle WO2020173570A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2019/055029 WO2020173570A1 (en) 2019-02-28 2019-02-28 Imaging device suitable for use in a motor vehicle

Publications (1)

Publication Number Publication Date
WO2020173570A1 true WO2020173570A1 (en) 2020-09-03

Family

ID=65657458

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2019/055029 WO2020173570A1 (en) 2019-02-28 2019-02-28 Imaging device suitable for use in a motor vehicle

Country Status (1)

Country Link
WO (1) WO2020173570A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH01247235A (en) * 1988-03-29 1989-10-03 Katsuji Okino Device for simultaneously recognizing visual information on right left and rear sides
US5524155A (en) * 1995-01-06 1996-06-04 Texas Instruments Incorporated Demultiplexer for wavelength-multiplexed optical signal
EP1273516A1 (en) 2001-07-06 2003-01-08 Audi Ag Device for the acquisition of optical information
US20100110259A1 (en) * 2008-10-31 2010-05-06 Weistech Technology Co., Ltd Multi-lens image sensor module
US20120194719A1 (en) * 2011-02-01 2012-08-02 Scott Churchwell Image sensor units with stacked image sensors and image processors
WO2016180874A1 (en) * 2015-05-12 2016-11-17 Connaught Electronics Ltd. Camera for a motor vehicle with at least two optical fibers and an optical filter element, driver assistance system as well as motor vehicle

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
BABAK RAHMANI ET AL: "Multimode Optical Fiber Transmission with a Deep Learning Network", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 14 May 2018 (2018-05-14), XP081425545 *
GLENN ELERT: "Diameter of an Optical Fiber - The Physics Factbook", 1 January 1997 (1997-01-01), XP055633445, Retrieved from the Internet <URL:https://hypertextbook.com/facts/1997/LaurenBoyd.shtml> [retrieved on 20191017] *

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19708825

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19708825

Country of ref document: EP

Kind code of ref document: A1