US20210120158A1 - Cameras with scanning optical path folding elements for automotive or surveillance applications - Google Patents

Cameras with scanning optical path folding elements for automotive or surveillance applications

Info

Publication number: US20210120158A1
Authority: US (United States)
Prior art keywords: tele, fov, camera, wide, ooi
Prior art date: 2018-07-04
Legal status: Pending
Application number: US16/978,690
Other languages: English (en)
Inventor
Gal Shabtay
Eran Briman
Roy Fridman
Noy Cohen
Ephraim Goldenberg
Gil Bachar
Current Assignee: Corephotonics Ltd
Original Assignee: Corephotonics Ltd
Priority date: 2018-07-04
Filing date: 2019-07-04
Publication date: 2021-04-22
Application filed by Corephotonics Ltd
Priority to US16/978,690
Publication of US20210120158A1
Assigned to COREPHOTONICS LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BACHAR, Gil, BRIMAN, ERAN, COHEN, NOY, FRIDMAN, Roy, GOLDENBERG, EPHRAIM, SHABTAY, GAL

Classifications

    • H04N5/2259
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N3/00Scanning details of television systems; Combination thereof with generation of supply voltages
    • H04N3/02Scanning details of television systems; Combination thereof with generation of supply voltages by optical-mechanical means only
    • H04N3/08Scanning details of television systems; Combination thereof with generation of supply voltages by optical-mechanical means only having a moving reflector
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R22/00Safety belts or body harnesses in vehicles
    • B60R22/48Control systems, alarms, or interlock systems, for the correct application of the belt or harness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06K9/00228
    • G06K9/00362
    • G06K9/00771
    • G06K9/00798
    • G06K9/00845
    • G06K9/209
    • G06K9/6288
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/58Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • H04N23/611Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H04N5/23219
    • H04N5/23238
    • H04N5/23296
    • H04N5/247
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R22/00Safety belts or body harnesses in vehicles
    • B60R22/48Control systems, alarms, or interlock systems, for the correct application of the belt or harness
    • B60R2022/4808Sensing means arrangements therefor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Definitions

  • Embodiments disclosed herein relate in general to digital cameras and in particular to thin zoom digital cameras.
  • In a dual-camera (or “dual-aperture camera”), the two cameras have lenses with different focal lengths and respective image sensors operated simultaneously to capture an image. Even though each lens/sensor combination is aligned to look in the same direction, each captures an image of the same scene with a different field of view (FOV).
  • An image sensor is referred to henceforth simply as a “sensor”.
  • Dual-aperture zoom cameras in which one camera has a “Wide” FOV (FOV W) and the other has a narrow or “Tele” FOV (FOV T) are also known, see e.g. U.S. Pat. No. 9,185,291.
  • The cameras are referred to respectively as Wide and Tele cameras, which include respective Wide and Tele sensors providing separate Wide and Tele images.
  • The Wide image captures FOV W and has a lower spatial resolution than the Tele image, which captures FOV T.
  • The images may be merged (fused) together to form a composite image.
  • In the composite image, the central portion is formed from the relatively higher spatial resolution image taken by the lens/sensor combination with the longer focal length, and the peripheral portion is formed from a peripheral portion of the relatively lower spatial resolution image taken by the lens/sensor combination with the shorter focal length.
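  • As an illustration of this merge, a minimal sketch (Python with OpenCV, an assumption; none of the cited documents prescribes an implementation) that builds an output frame whose central portion comes from the Tele image and whose peripheral portion comes from the upsampled Wide image:

      import cv2

      def composite(wide, tele, fov_ratio=0.5):
          # Schematic merge at the Tele pixel scale. Assumes the Wide and
          # Tele sensors have equal pixel counts and that FOV_T/FOV_W per
          # axis equals `fov_ratio` (an illustrative value). A real pipeline
          # registers and blends the two images around the seam.
          h, w = wide.shape[:2]
          scale = 1.0 / fov_ratio                   # Tele pixels per Wide pixel
          out = cv2.resize(wide, (int(w * scale), int(h * scale)))  # periphery
          th, tw = tele.shape[:2]
          y0 = (out.shape[0] - th) // 2
          x0 = (out.shape[1] - tw) // 2
          out[y0:y0 + th, x0:x0 + tw] = tele        # higher-resolution center
          return out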
  • Dual-aperture cameras in which one image (normally the Tele image) is obtained through a folded optical path are known, see e.g. co-invented and co-owned U.S. patent application Ser. No. 14/455,906, which teaches zoom digital cameras comprising an “upright” (with a direct optical axis to an object or scene) Wide camera and a “folded” Tele camera, see also FIG. 2B below.
  • The folded camera has an optical axis substantially perpendicular (orthogonal) to the optical axis of the upright camera.
  • The folded Tele camera may be auto-focused and optically stabilized by moving either its lens or by tilting an optical path folding (reflecting) element (“OPFE”), e.g. a mirror or a prism.
  • “OPFE” refers herein to any optical path folding (reflecting) element that can perform the function of folding an optical path as described herein, for example a mirror.
  • PCT patent application PCT/IB2016/056060, titled “Dual-aperture zoom digital camera user interface”, discloses a user interface for operating a dual-aperture digital camera included in a host device, the dual-aperture digital camera including a Wide camera and a Tele camera, the user interface comprising a screen configured to display at least one icon and an image of a scene acquired with at least one of the Tele and Wide cameras, a visible frame defining FOV T superposed on a Wide image defined by FOV W, and means to switch the screen from displaying the Wide image to displaying the Tele image.
  • The user interface further comprises means to switch the screen from displaying the Tele image to displaying the Wide image.
  • The user interface may further comprise means to acquire the Tele image, means to store and display the acquired Tele image, means to acquire the Wide image and the Tele image simultaneously, means to store and display the Wide and Tele images separately, a focus indicator for the Tele image, and a focus indicator for the Wide image.
  • Object recognition is known and describes the task of finding and identifying objects in an image or video sequence.
  • Many approaches have been implemented to accomplish this task in computer vision systems. Such approaches may rely on appearance-based methods that use example images under varying conditions and large model-bases, and/or on feature-based methods that search for feasible matches between object features and image features, e.g. by detecting and matching surface patches, corners and edges.
  • Recognized objects may be tracked in preview or video feeds using an algorithm for analyzing sequential frames and outputting the movement of targets between the frames.
  • The problem of motion-based object tracking may be divided into two parts: (1) detecting moving objects in each frame, and (2) associating the detections of the same object across frames over time.
  • Detection may be done either by incorporating an object recognition algorithm for recognizing and tracking specific objects (e.g. a human face) or, for example, by detecting any moving object in a scene.
  • The latter may incorporate a background subtraction algorithm based on Gaussian mixture models, with morphological operations applied to the resulting foreground mask to eliminate noise. Blob analysis can then detect groups of connected pixels, which are likely to correspond to moving objects.
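  • For concreteness, a minimal sketch of such motion detection (Python with OpenCV; the library choice and all parameter values are illustrative assumptions, not taken from the source):

      import cv2

      subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
      kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
      MIN_BLOB_AREA = 400  # pixels; an assumed tuning value

      def detect_moving_objects(frame):
          # Foreground mask from the Gaussian-mixture background model;
          # thresholding drops the detector's gray "shadow" pixels.
          fg = subtractor.apply(frame)
          _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
          # Morphological operations eliminate noise in the foreground mask.
          fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)
          fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)
          # Blob analysis: groups of connected pixels likely to be moving objects.
          contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
          return [cv2.boundingRect(c) for c in contours
                  if cv2.contourArea(c) >= MIN_BLOB_AREA]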
  • Disclosed herein are systems comprising dual-aperture zoom digital cameras with scanning OPFEs for automotive or surveillance applications, and methods for operating and using same.
  • In various embodiments, such systems comprise: a Wide camera with a Wide field of view FOV W, comprising a Wide sensor and a Wide lens, the Wide camera operative to output Wide image information; a Tele camera with a Tele field of view FOV T smaller than FOV W, comprising a Tele sensor, a Tele lens with a Tele lens optical axis, and a scanning OPFE; and a processing unit operative to detect an object of interest (OOI) from Wide and/or Tele image information and to direct the Tele camera to move FOV T to acquire Tele image information on the OOI.
  • In an embodiment, the system is installed in a vehicle and the processing unit is further operative to calculate a required measure-of-action or response needed from the vehicle.
  • In an embodiment, a system further comprises an actuator to tilt the OPFE to move FOV T.
  • In an embodiment, the processing unit is operative to direct the Tele camera to move FOV T to substantially a center of FOV W.
  • In an embodiment, the processing unit is operative to direct the Tele camera to move FOV T to a center of the OOI.
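  • As an illustration only: a minimal sketch (Python, assuming a simple pinhole model) of converting an OOI's pixel position in the Wide image into the yaw/pitch angles by which FOV T must be moved so that the Tele camera is centered on the OOI. The actuator interface and all names here are hypothetical, not from this application:

      import math

      def pixel_to_angles(x, y, img_w, img_h, efl_w_px):
          # Angles (deg) of pixel (x, y) relative to the Wide optical axis,
          # with EFL_W expressed in pixels (EFL in mm / pixel pitch in mm).
          # Image-coordinate convention: +yaw right, +pitch down.
          yaw = math.degrees(math.atan((x - img_w / 2) / efl_w_px))
          pitch = math.degrees(math.atan((y - img_h / 2) / efl_w_px))
          return yaw, pitch

      def direct_tele_to_ooi(ooi_box, img_w, img_h, efl_w_px, actuator):
          # Center FOV_T on an OOI bounding box (x, y, w, h) found in the
          # Wide image; `actuator.move_fov` is a hypothetical interface.
          x0, y0, w, h = ooi_box
          yaw, pitch = pixel_to_angles(x0 + w / 2, y0 + h / 2, img_w, img_h, efl_w_px)
          actuator.move_fov(yaw, pitch)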
  • In an embodiment, the processing unit is operative to receive steering information from a steering wheel of the vehicle and to direct the Tele camera to move FOV T also based on the steering information.
  • In an embodiment, the processing unit is operative to receive steering information from a steering wheel of the vehicle, and the actuator tilts the OPFE to move FOV T also based on the steering information.
  • In an embodiment, FOV W covers a road in front of the vehicle, the OOI is a road curve, and the processing unit is operative to move FOV T to follow the road curve.
  • In an embodiment, the vehicle has a vehicle cabin, the OOI is located inside the vehicle cabin, and the OPFE may be tilted to provide an extended Tele camera FOV (FOV E) greater than FOV T.
  • In an embodiment, the OOI is a driver of the vehicle and the required measure-of-action or response is based on a gaze of the driver.
  • In an embodiment, the OOI is a child and the required measure-of-action or response is a warning that the child is not wearing a seat belt.
  • In an embodiment, the required measure-of-action or response includes a measure-of-action or response selected from the group consisting of: changing the speed and/or course of the vehicle, operating an internal alarm to a driver of the vehicle, operating an external alarm, sending data information to, or calling, Internet/cloud-based services/police/road assistance services, and a combination thereof.
  • In an embodiment, the OOI is a human face.
  • In an embodiment, the processing unit is operative to instruct the Tele camera to move to a specific location of the human face for face recognition.
  • In an embodiment, the processing unit is operative to instruct the Tele camera to move FOV T to scan parts of FOV W in two directions.
  • In an embodiment, the scan is performed by the scanning OPFE, with a tilting and settling time of the OPFE of between 5 and 50 msec (see the scan-coverage sketch below).
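  • For a sense of scale, a minimal sketch (Python; all numbers are illustrative assumptions, only the 5-50 msec settling figure comes from the text) of how many Tele positions are needed to tile FOV W in two directions and how long a full scan would take:

      import math

      def scan_plan(hfov_w=80.0, vfov_w=50.0, hfov_t=20.0, vfov_t=15.0,
                    settle_ms=30.0, exposure_ms=10.0, overlap=0.1):
          # Tile FOV_W with Tele positions that overlap by `overlap` per axis.
          step_h = hfov_t * (1 - overlap)
          step_v = vfov_t * (1 - overlap)
          cols = math.ceil((hfov_w - hfov_t) / step_h) + 1
          rows = math.ceil((vfov_w - vfov_t) / step_v) + 1
          tiles = cols * rows
          total_ms = tiles * (settle_ms + exposure_ms)  # settle + capture per tile
          return tiles, total_ms

      tiles, t = scan_plan()
      print(f"{tiles} Tele positions, ~{t:.0f} ms per full scan")  # 20 tiles, ~800 ms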
  • In an embodiment, the processing unit is operative to detect the OOI from Wide and/or Tele image information and to direct the Tele camera to move FOV T to acquire information on the OOI in an automatic tracking mode.
  • The Wide and Tele image information may be fused together to form a composite image or a composite video stream.
  • In an embodiment, each composite image has the same field of view.
  • In an embodiment, a composite image is formed by stitching a plurality of Tele images.
  • FIG. 1A shows an embodiment of a system disclosed herein;
  • FIG. 1B shows an example of elements of a dual-camera in a perspective view;
  • FIG. 2 shows schematically a use case of the system of FIG. 1A;
  • FIG. 3 shows a flow chart of a method in the use case of FIG. 2;
  • FIG. 4A shows another embodiment of a system disclosed herein;
  • FIG. 4B shows yet another embodiment of a system disclosed herein;
  • FIG. 5 shows schematically a use case of the systems of FIG. 4A or 4B;
  • FIG. 6A shows schematically a method of use of the systems of FIG. 4A or 4B;
  • FIG. 6B shows schematically another method of use of the systems of FIG. 4A or 4B;
  • FIG. 6C shows schematically yet another method of use of the systems of FIG. 4A or 4B;
  • FIG. 7A shows yet another embodiment of a system disclosed herein;
  • FIG. 7B shows yet another embodiment of a system disclosed herein;
  • FIG. 8 shows a vehicle cabin section and a use case of the systems of FIGS. 7A and 7B;
  • FIG. 9A shows schematically a method of use of the system of FIG. 7A;
  • FIG. 9B shows schematically another method of use of the system of FIG. 7B;
  • FIG. 10A shows yet another embodiment of a system disclosed herein;
  • FIG. 10B shows the resolution of an image obtained with known digital zoom;
  • FIG. 10C shows the resolution of an image obtained with “optical” zoom using the system of FIG. 10A.
  • FIG. 1A shows an embodiment of a system disclosed herein, numbered 100.
  • System 100 may be installed in, or attached to, a vehicle 102.
  • System 100 includes a Tele camera 104, a Wide camera 106 and a processing unit (“processor”) 108.
  • The vehicle may be, for example, a car, a bus, a truck, a motorcycle, a coach, or any other type of known vehicle.
  • Processing unit 108 may be a CPU, GPU, ASIC, FPGA, or any other processor capable of graphic analysis.
  • A system like system 100 may also be referred to as an “advanced driver-assistance system” (ADAS).
  • FIG. 1B shows an example of elements of a dual-camera 110 in a perspective view.
  • Wide camera 106 comprises a Wide sensor 132 and a Wide lens 134 with a Wide lens optical axis 136.
  • Wide sensor 132 is characterized by a Wide sensor active area size and a Wide sensor pixel size.
  • Wide lens 134 is characterized by a Wide effective focal length (EFL), marked EFL W.
  • Wide lens 134 may have a fixed (constant) EFL W.
  • The Wide lens may be fixed at a constant distance from Wide image sensor 132 (fixed focus).
  • Alternatively, Wide lens 134 may be coupled to a focusing mechanism (e.g. an autofocus (AF) mechanism) that can change the distance of Wide lens 134 from Wide image sensor 132, thereby providing non-fixed (variable) focus.
  • The combination of Wide sensor area and EFL W determines the Wide FOV (FOV W).
  • FOV W may be 50-100 degrees in the horizontal vehicle-facing direction.
  • Tele camera 104 comprises a Tele sensor 122 and a Tele lens 124 with a Tele lens optical axis 138.
  • Tele sensor 122 is characterized by a Tele sensor active area size and a Tele sensor pixel size.
  • Tele lens 124 is characterized by a Tele effective focal length, marked EFL T.
  • Tele lens 124 may have a fixed (constant) EFL T.
  • The Tele lens may be fixed at a constant distance from Tele image sensor 122 (fixed focus).
  • Alternatively, the Tele lens may be coupled to a focusing mechanism (e.g. an AF mechanism) that can change the distance of Tele lens 124 from Tele image sensor 122 (non-fixed focus).
  • The combination of Tele sensor area and Tele lens EFL T determines the Tele FOV (FOV T).
  • FOV T may be 10-30 degrees in the horizontal vehicle-facing direction.
  • FOV T is smaller (narrower) than FOV W.
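  • The sensor-and-EFL relation mentioned in the last few items is, for a pinhole model, FOV = 2·arctan(d/(2·EFL)), where d is the sensor dimension along the measured axis. A minimal sketch with purely illustrative numbers (not taken from this application):

      import math

      def fov_deg(sensor_dim_mm, efl_mm):
          # Full field of view (degrees) along one sensor dimension.
          return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * efl_mm)))

      # Same 6 mm sensor dimension behind two assumed lenses:
      print(fov_deg(6.0, 4.0))   # ~73.7 deg, a Wide-like FOV
      print(fov_deg(6.0, 13.0))  # ~26.0 deg, a Tele-like FOV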
  • Tele camera 104 further comprises an OPFE 126, e.g. a mirror or a prism.
  • OPFE 126 has a reflection surface tilted, at a rest point, by 45 degrees from the Tele lens optical axis 138.
  • Tele camera 104 further comprises an actuator (motor) 128.
  • Actuator 128 may tilt the reflecting surface of OPFE 126 by up to ±α degrees from the rest point (where exemplary α may be up to 10, 20, 40 or 70 degrees). That is, actuator 128 may tilt or scan the OPFE and, with it, FOV T.
  • Actuator 128 may be, for example, a stepper motor or a voice coil motor (VCM), as described for example in co-owned patent application PCT/IB2017/057706.
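  • A side note from general mirror optics (not stated in this application, and valid for a flat mirror rotated about an axis perpendicular to the plane of incidence): rotating the mirror by θ deflects the reflected line of sight by 2θ, so scanning FOV T across a given angle requires only half that tilt at the OPFE:

      def mirror_tilt_for_fov_shift(desired_shift_deg):
          # A flat folding mirror rotated by theta deflects the reflected
          # line of sight by 2*theta (plane-mirror reflection geometry).
          return desired_shift_deg / 2.0

      print(mirror_tilt_for_fov_shift(30.0))  # 15.0 deg of mirror tilt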
  • Wide camera 106 and Tele camera 104 face a vehicle front side and share at least some of their respective FOVs.
  • FOV W is directed away from the vehicle in the front (driving) direction and is substantially symmetrical with respect to the two sides of the vehicle.
  • The Tele camera is operational to scan the Tele FOV (FOV T) inside the Wide FOV (FOV W) using actuator 128.
  • The scanning of FOV T serves to bring the Tele camera to view more closely a potential object-of-interest (OOI) detected previously in Wide and/or Tele images, as described in more detail below.
  • FIG. 2 shows schematically a use case of system 100 of FIG. 1A.
  • A dual-camera 110 is installed in a front part of a vehicle 102.
  • A triangle 204 represents FOV W in a horizontal plane, i.e. as a horizontal FOV W or “HFOV W”.
  • An “observation distance” 206 is defined as the maximal distance at which system 100, using an image from the Wide camera, can detect a potential OOI.
  • An “OOI” may be, for example, a hazard, another vehicle, a hole or obstruction in a road, a pedestrian, a road curve, a road sign, etc.
  • An “identification distance” 208 is defined as the minimal distance that allows system 100, using an image from the Wide camera, to identify all the information required for making a decision, as known in the art.
  • For example, the OOI may be a road sign observable but not readable at the observation distance.
  • Likewise, an OOI may be observed at the observation distance, while identification or distinction between it being a road sign or a pedestrian is made only within the identification distance.
  • System 100 may use an image from the Wide camera to calculate that the OOI is located in FOV W, but not to fully calculate the required measures-of-action or responses needed (see below).
  • Measures-of-action or responses of system 100 may include one or a combination of the following: changing vehicle 102 speed and/or course, operating an internal alarm to the vehicle driver, operating an external alarm, sending data information to, or calling, Internet/cloud-based services, police or road assistance services, etc.
  • A triangle 210 represents FOV T in a horizontal plane, i.e. as a horizontal FOV T (HFOV T).
  • HFOV W may be in the range of 70-180 degrees and HFOV T may be in the range of 15-45 degrees.
  • HFOV W may be in the range of 140-180 degrees and HFOV T may be in the range of 15-70 degrees.
  • The output images of the Tele camera may have higher resolution than the output images of the Wide camera.
  • The output image of the Tele camera may have 3 to 20 times higher resolution than the output image of the Wide camera; consequently, identification distance 212 of the Tele camera may be 3 to 20 times greater than identification distance 208 of the Wide camera.
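  • A worked example of this scaling under the small-angle pinhole approximation (the object size, pixel budget, EFLs and pixel pitch are all assumed for illustration): the distance at which an object spans a given number of pixels grows linearly with EFL, so a Tele lens with 4x the Wide EFL identifies the same object 4x farther away, consistent with the 3-20x range above:

      def identification_distance(object_size_m, pixels_needed, efl_mm, pixel_pitch_um):
          # Farthest distance (m) at which the object still spans
          # `pixels_needed` pixels on the sensor: d = s * EFL / (n * pitch).
          pixel_pitch_mm = pixel_pitch_um / 1000.0
          return object_size_m * efl_mm / (pixels_needed * pixel_pitch_mm)

      # A 0.5 m road sign that needs 50 pixels for identification, 2 um pixels:
      print(identification_distance(0.5, 50, 4.0, 2.0))   # 20 m (Wide-like EFL)
      print(identification_distance(0.5, 50, 16.0, 2.0))  # 80 m (Tele-like EFL)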
  • OOI 202 is located between observation distance 206 and identification distance 208. While OOI 202 is observable by the Wide camera, it may not be identifiable (namely, the Wide camera captures OOI 202 with a resolution too low to identify, classify or handle it, relative to that required by system 100). As shown in FIG. 2(b), FOV T is then scanned to face OOI 202 such that the Tele camera may capture OOI 202 in more detail (e.g. “identify” it).
  • FIG. 3 shows a detailed flow chart of a method of operation of system 100 as in the example of FIG. 2.
  • FIG. 4A shows an embodiment of a system numbered 400 installed in, or attached to, a vehicle 402.
  • Vehicle 402 may have a steering wheel 416.
  • Handlebars (not shown) may replace a steering wheel; the following description is relevant to both.
  • System 400 comprises only a Tele camera 404 (similar to Tele camera 104) and a processing unit 408 (similar to processing unit 108).
  • FIG. 4B shows an embodiment of another system numbered 400′, similar to system 100, i.e. comprising a Wide camera 406 in addition to Tele camera 404.
  • The description below refers to both systems 400 and 400′.
  • The Tele camera faces the vehicle front side.
  • Tele camera 404 is operational to change the angle/direction of FOV T, as marked by an arrow 502, thereby achieving an “effective” FOV, marked FOV E, which is larger than FOV T.
  • In one method of use, a processing unit constantly commands the Tele camera to continually change the FOV T direction or angle from left to right and vice-versa (602), and the Tele camera rotates according to the commands received (604).
  • In another method of use, the processing unit follows the steering wheel or handlebars: when the user turns the steering wheel/handlebars to the left (612), the Tele camera FOV (FOV T) moves to the left, and when the user turns the steering wheel/handlebars to the right, FOV T moves to the right.
  • In yet another method of use, the processing unit may use an image recognition algorithm to identify road curves and change FOV T to follow the road (a sketch contrasting these three policies follows).
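  • A minimal sketch contrasting the three FOV T steering policies just described; the triangle-wave sweep, the wheel-to-camera gain and all parameter values are assumptions for illustration:

      import math

      def fov_command(mode, t=0.0, sweep_deg=30.0, sweep_hz=0.5,
                      steering_angle_deg=0.0, road_curve_deg=0.0):
          # Returns a target FOV_T yaw (degrees) for the chosen policy.
          if mode == "sweep":      # continually pan left-right and back
              phase = (t * sweep_hz) % 1.0
              return sweep_deg * (4 * abs(phase - 0.5) - 1)  # triangle wave
          if mode == "steering":   # follow the steering wheel / handlebars
              k = 0.5              # assumed wheel-to-camera gain
              return k * steering_angle_deg
          if mode == "road":       # follow a road curve from image recognition
              return road_curve_deg
          raise ValueError(f"unknown mode: {mode}")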
  • FIG. 7A shows an embodiment of another system, numbered 700 (similar e.g. to system 400), that may be installed in, or attached to, a vehicle 702.
  • Vehicle 702 comprises a vehicle cabin 716.
  • FIG. 7B shows an embodiment of yet another system numbered 700′, similar to system 100 and including a Wide camera 706 (like camera 106) in addition to Tele camera 704.
  • Wide camera 706 may also be installed in vehicle cabin 716, to face an OOI 802 (see FIG. 8).
  • FIG. 8 shows a vehicle cabin section and a use case of the systems of FIGS. 7A and 7B.
  • Wide camera 706 is not shown in FIG. 8.
  • Tele camera 704, with FOV T, faces the interior of vehicle cabin 716 and an OOI 802, for example a passenger.
  • Tele camera 704 may be scanned to provide an effective FOV (FOV E) larger than FOV T.
  • FIG. 9A shows in a flow chart main steps of a method of use of system 700.
  • Tele camera 704 is operational to change angle/direction and scan vehicle cabin 716.
  • Processing unit 708 is operational to identify an OOI 802 (e.g. a passenger's body, face, eyes, etc.).
  • Processing unit 708 is further operational to direct Tele camera 704 to face OOI 802.
  • The data obtained by the Tele camera is used for identifying hazards (e.g. the driver not looking at the road, the driver falling asleep, passengers without seatbelts, a child without a child seat, etc.).
  • FIG. 9B shows in a flow chart main steps of a method of use of system 700′.
  • The processing unit uses data from both the Wide and Tele cameras to direct the Tele camera to OOI 802.
  • FIG. 10A shows an embodiment of yet another system disclosed herein, numbered 1000.
  • System 1000 comprises a Tele camera 1002, a Wide camera 1004 and a processing unit 1006, and may be used for surveillance; it is thus also referred to as a “surveillance camera”.
  • Tele camera 1002 and Wide camera 1004 are part of a dual-camera 1010. These components may be similar or even identical to the Wide and Tele cameras and processors described in the embodiments above.
  • Surveillance camera 1000 and processing unit 1006 may include software and algorithms to detect OOIs (for example, human faces) in FOV W and to steer the Tele camera in X and Y directions in Wide images toward these OOIs, to enhance the image or video quality of these objects and to enable their analysis (e.g. face recognition in the case where the object is a face).
  • Processing unit 1006 may instruct Tele camera 1002 to continuously scan parts of FOV W.
  • Alternatively, processing unit 1006 may instruct Tele camera 1002 to move to a specific location (as in FIG. 9).
  • The tilting and settling time of the prism may be 5-50 msec.
  • Tele camera 1002 may switch from pointing at one region of interest (ROI) to another every 1 sec, or at a faster or slower pace.
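  • A minimal sketch of such an ROI-to-ROI scan schedule (the actuator and capture interfaces are hypothetical; only the 5-50 msec settling time and the roughly 1 sec dwell come from the text):

      import time

      def surveil(rois, actuator, capture, dwell_s=1.0, settle_s=0.05):
          # Cycle the Tele camera over regions of interest: point the OPFE,
          # wait for tilting/settling, then capture the ROI for `dwell_s`.
          while True:
              for yaw, pitch in rois:
                  actuator.move_fov(yaw, pitch)   # hypothetical interface
                  time.sleep(settle_s)            # prism tilt-and-settle time
                  capture(duration_s=dwell_s)     # hypothetical Tele capture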
  • FIG. 10B shows an example of an imaged scene acquired by Wide camera 1004 and then digitally zoomed.
  • FIG. 10C shows an example of an imaged scene acquired by Wide camera 1004 (left side) and then by Tele camera 1002 directed to optically zoom in on the ROI (right side).
  • The zoomed image in FIG. 10C shows significant resolution gain over the digitally zoomed image in FIG. 10B, allowing, for example, facial recognition of people in the ROI.
  • Wide and Tele images and/or video streams may be recorded during automatic tracking mode and may be fused together to form a composite image or a composite video stream, as known in the art. This fusion may be applied on a camera-hosting device (e.g. a mobile electronic device of any type that includes a system or camera disclosed herein). Alternatively, Wide and Tele images or video streams may be uploaded to the cloud for applying this fusion operation. Each composite image may also have the same FOV, obtained by scanning with the Tele camera, stitching a plurality of Tele images to provide a “stitched” Tele image, and then fusing the stitched Tele image with a Wide image.
  • The Wide image captures the entire scene simultaneously, while the Tele images to be stitched together are consecutive, so motion or occlusions in the scene can be overcome if required.
  • The stitching of the Tele images and/or the fusion of the stitched Tele image with the Wide image may also be performed in the cloud (see the sketch below).
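  • A minimal sketch of the stitch-then-fuse flow just described, using OpenCV's scan-mode stitcher (the use of OpenCV is an assumption; the crude resize-and-blend stands in for real registration-based fusion):

      import cv2

      def stitch_tele(tele_frames):
          # Stitch consecutive Tele captures into one "stitched" Tele image.
          stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)
          status, pano = stitcher.stitch(tele_frames)
          if status != cv2.Stitcher_OK:
              raise RuntimeError(f"stitching failed with status {status}")
          return pano

      def fuse(wide, stitched_tele):
          # Placeholder fusion: warp the stitched Tele to the Wide frame size
          # and blend; a real pipeline registers the images per region first.
          h, w = wide.shape[:2]
          tele_up = cv2.resize(stitched_tele, (w, h))
          return cv2.addWeighted(wide, 0.4, tele_up, 0.6, 0)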


Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/978,690 US20210120158A1 (en) 2018-07-04 2019-07-04 Cameras with scanning optical path folding elements for automotive or surveillance applications

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201862693951P 2018-07-04 2018-07-04
US16/978,690 US20210120158A1 (en) 2018-07-04 2019-07-04 Cameras with scanning optical path folding elements for automotive or surveillance applications
PCT/IB2019/055734 WO2020008419A2 (en) 2018-07-04 2019-07-04 Cameras with scanning optical path folding elements for automotive or surveillance applications

Publications (1)

Publication Number Publication Date
US20210120158A1 2021-04-22

Family

ID=69060231

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/978,690 Pending US20210120158A1 (en) 2018-07-04 2019-07-04 Cameras with scanning optical path folding elements for automotive or surveillance applications

Country Status (5)

Country Link
US (1) US20210120158A1
EP (1) EP3818405A4
KR (2) KR20210003856A
CN (1) CN112272829A
WO (1) WO2020008419A2

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20220151451A * 2021-05-06 2022-11-15 Samsung Electronics Co., Ltd. Electronic device including a plurality of cameras and method for operating same
CN113873201B * 2021-09-27 2023-09-15 Beijing Institute of Environmental Features Beyond-visual-range high-vantage-point reverse observation system and method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070159344A1 (en) * 2005-12-23 2007-07-12 Branislav Kisacanin Method of detecting vehicle-operator state
US20090234542A1 (en) * 2004-12-07 2009-09-17 Iee International Electronics & Engineering S.A. Child seat detection system
US20160295112A1 (en) * 2012-10-19 2016-10-06 Qualcomm Incorporated Multi-camera system using folded optics
US20160342095A1 (en) * 2014-02-21 2016-11-24 Carl Zeiss Smt Gmbh Mirror array
US20180295292A1 (en) * 2017-04-10 2018-10-11 Samsung Electronics Co., Ltd Method and electronic device for focus control
US10406972B2 (en) * 2017-02-24 2019-09-10 Tesla, Inc. Vehicle technologies for automated turn signaling

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0995194 (ja) 1995-09-29 1997-04-08 Aisin Seiki Co Ltd Device for detecting an object in front of a vehicle
US8709526B2 (en) * 2005-09-12 2014-04-29 E.I. Dupont De Nemours And Company Use of a high-oleic and high-tocol diet in combination with a non-tocol antioxidant for improving animal meat quality
KR101428042B1 (ko) * 2007-10-08 2014-08-07 LG Innotek Co., Ltd. Vehicle camera control system
US9880560B2 (en) * 2013-09-16 2018-01-30 Deere & Company Vehicle auto-motion control system
US9863789B2 (en) * 2015-06-30 2018-01-09 Ford Global Technologies, Llc Rotating camera systems and methods
KR20230100749A (ko) * 2015-12-29 2023-07-05 Corephotonics Ltd. Dual-aperture zoom digital camera with auto-adjustable Tele field of view (FOV)
KR102609464B1 (ko) * 2016-10-18 2023-12-05 Samsung Electronics Co., Ltd. Electronic device for capturing images


Also Published As

Publication number Publication date
EP3818405A4 (en) 2021-08-04
CN112272829A (zh) 2021-01-26
EP3818405A2 (en) 2021-05-12
KR20240026457A (ko) 2024-02-28
WO2020008419A3 (en) 2020-07-02
KR20210003856A (ko) 2021-01-12
WO2020008419A2 (en) 2020-01-09


Legal Events

Date Code Title Description
  • STPP: Information on status: patent application and granting procedure in general. Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
  • AS: Assignment. Owner name: COREPHOTONICS LTD., ISRAEL. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHABTAY, GAL;BRIMAN, ERAN;FRIDMAN, ROY;AND OTHERS;REEL/FRAME:059977/0662. Effective date: 20201101
  • STPP: Information on status: patent application and granting procedure in general. Free format text: NON FINAL ACTION MAILED
  • STPP: Information on status: patent application and granting procedure in general. Free format text: FINAL REJECTION MAILED
  • STPP: Information on status: patent application and granting procedure in general. Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER
  • STPP: Information on status: patent application and granting procedure in general. Free format text: ADVISORY ACTION MAILED
  • STPP: Information on status: patent application and granting procedure in general. Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
  • STPP: Information on status: patent application and granting procedure in general. Free format text: NON FINAL ACTION MAILED
  • STPP: Information on status: patent application and granting procedure in general. Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
  • STPP: Information on status: patent application and granting procedure in general. Free format text: FINAL REJECTION MAILED
  • STPP: Information on status: patent application and granting procedure in general. Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION