US20150022664A1 - Vehicle vision system with positionable virtual viewpoint - Google Patents

Vehicle vision system with positionable virtual viewpoint

Info

Publication number
US20150022664A1
Authority
US
United States
Prior art keywords
driver
vehicle
vision system
gesture
operable
Prior art date
Legal status
Abandoned
Application number
US14/372,524
Inventor
Goerg Pflug
Achim Gieseke
Bernhard Thaler
Christian Traub
Johannes Wolf
Joern Ihlenburg
Martin Rachor
Current Assignee
Magna Electronics Inc
Original Assignee
Magna Electronics Inc
Priority date
Filing date
Publication date
Priority to US201261588833P
Priority to US201261602878P
Priority to US201261678375P
Application filed by Magna Electronics Inc
Priority to US14/372,524
Priority to PCT/US2013/022119 (published as WO2013109869A1)
Publication of US20150022664A1
Application status: Abandoned


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/18 - Closed circuit television systems, i.e. systems in which the signal is not broadcast
    • H04N 7/181 - Closed circuit television systems, i.e. systems in which the signal is not broadcast, for receiving images from a plurality of remote sources
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for
    • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F 3/04815 - Interaction with three-dimensional environments, e.g. control of viewpoint to navigate in the environment
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, e.g. input of commands through traced gestures, for entering handwritten data, e.g. gestures, text
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/10 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used
    • B60R 2300/105 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of camera system used, using multiple cameras
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/20 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of display used
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/30 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing
    • B60R 2300/303 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by the type of image processing, using joined images, e.g. multiple camera images
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R 2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R 2300/60 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective
    • B60R 2300/602 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle, characterised by monitoring and displaying vehicle exterior scenes from a transformed perspective with an adjustable viewpoint

Abstract

A vehicle vision system includes a plurality of cameras disposed at a vehicle and having respective exterior fields of view and a display screen for displaying images derived from captured image data in a surround view format where captured image data is merged to provide a single composite display image from a virtual viewing position. A gesture sensing device is operable to sense a gesture made by the driver of the vehicle. A control provides a selected displayed image for viewing by the driver to assist the driver during a particular driving maneuver. The control is responsive to sensing by the gesture sensing device, whereby the driver can adjust the displayed image by at least one of (a) touch and (b) gesture to adjust at least one of (i) a virtual viewing location, (ii) a virtual viewing angle, (iii) a degree of zoom and (iv) a degree of panning.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • The present application claims the filing benefits of U.S. provisional application Ser. No. 61/678,375, filed Aug. 1, 2012; Ser. No. 61/602,878, filed Feb. 24, 2012; and Ser. No. 61/588,833, filed Jan. 20, 2012, which are hereby incorporated herein by reference in their entireties.
  • FIELD OF THE INVENTION
  • The present invention relates to imaging systems or vision systems for vehicles.
  • BACKGROUND OF THE INVENTION
  • Use of imaging sensors in vehicle imaging systems is common and known. Examples of such known systems are described in U.S. Pat. Nos. 5,949,331; 5,670,935; and/or 5,550,677, which are hereby incorporated herein by reference in their entireties.
  • SUMMARY OF THE INVENTION
  • The present invention provides a vision system or imaging system for a vehicle that utilizes one or more cameras to capture images exterior of the vehicle, and provides the communication/data signals, including camera data or image data that may be displayed or processed to provide the desired display images and/or processing and control, depending on the particular application of the camera and vision or imaging system. The present invention provides a touch screen or user input that allows the driver of the vehicle to selectively adjust the displayed images, such as to adjust the virtual viewing angle or viewing point or to zoom or pan the image, in order to provide the desired displayed images to the driver for the particular driving condition or scenario.
  • According to an aspect of the present invention, a vehicle vision system includes a plurality of cameras at the vehicle and having exterior fields of view (such as forwardly, rearwardly and sidewardly of the vehicle) and a display screen for displaying images captured by the cameras, such as in a top view or surround view format (where the images are merged or synthesized to provide a single composite display image from a virtual viewing angle). The system includes a control that is operable to adjust the virtual viewing point or virtual viewing angle or the degree of zoom or degree of panning or the like of the displayed images to provide a desired or appropriate display to the driver of the vehicle to assist the driver of the vehicle during a particular driving maneuver or operation. The control is responsive to a touch screen or a gesture interface or touch sensitive user input that detects a touch or approach of a driver's finger (either gloved or non-gloved) and allows the driver to adjust the displayed images by touching and/or moving one or more fingers at the touch screen to at least one of zoom or pan or adjust a virtual viewing point or adjust a virtual viewing angle or the like. The touch screen may have actuators for applying a haptic feedback to the driver's touch inputs.
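Purely for illustration, and not as part of the claimed subject matter, the mapping from touch gestures to the virtual viewing parameters described above could be sketched as follows; all class, function and parameter names, gains and limits are hypothetical assumptions rather than anything specified in this application:

```python
from dataclasses import dataclass

@dataclass
class VirtualViewpoint:
    """Hypothetical virtual camera state: position (x, y, z) plus pan/tilt/zoom."""
    x: float = 0.0
    y: float = 0.0
    z: float = 10.0      # height above the ground plane
    pan: float = 0.0     # rotation about the z axis, degrees
    tilt: float = -90.0  # -90 = looking straight down (top view)
    zoom: float = 1.0

def apply_two_finger_slide(vp: VirtualViewpoint, dx_px: float, dy_px: float,
                           gain: float = 0.05) -> VirtualViewpoint:
    """Map a two-finger slide to traversing (sideways) and a height change
    (upward slide raises the viewpoint; screen y grows downward)."""
    vp.x += dx_px * gain
    vp.z = max(1.0, vp.z - dy_px * gain)
    return vp

def apply_pinch(vp: VirtualViewpoint, scale: float) -> VirtualViewpoint:
    """Map a pinch/spread to a clamped zoom of the displayed image."""
    vp.zoom = min(4.0, max(0.25, vp.zoom * scale))
    return vp
```

In such a sketch, a gesture-sensing front end would deliver the pixel deltas and pinch scale, and the composed display image would be re-rendered from the updated viewpoint each frame.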
  • These and other objects, advantages, purposes and features of the present invention will become apparent upon review of the following specification in conjunction with the drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a plan view of a vehicle with a vision system and imaging sensors or cameras that provide exterior fields of view in accordance with the present invention;
  • FIG. 2 is an example of the invention's solution showing a top view having an angle from the Z-direction to the rear (1) and several xyz-angled side views in split screens showing relevant tasks or hazards;
  • FIG. 2A illustrates a legend for reference in connection with FIGS. 3-17;
  • FIG. 3A is an example of an inventive embodiment of a touch screen controlling a driver assistant virtual view, with the case shown in which a tipping triggers a realigning (or redirecting) of viewing angles of the virtual view, and with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 3B is a schematic of a screen of the present invention, showing an activation of a feedback actuator pad at the position where the touch activity has occurred, with the screen having a relatively low-resolution grid of actuators for clarity;
  • FIG. 3C is a schematic of a screen of the present invention, showing a feedback actuator pad having a higher resolution as compared to the screen of FIG. 3B;
  • FIG. 3D is a perspective view schematic of a pad actuator, when a current is applied;
  • FIG. 3E is a sectional view of the pad actuator of FIG. 3D, showing the inner coil (2) acting against the outer coil (1), which leads to an opposing force;
  • FIG. 3F is a plan view of a screen showing a touch region or “soft button” that has haptic feedback actuator needles that extend so that the button protrudes out of the plane of its surroundings;
  • FIG. 3G is a plan view of the screen of FIG. 3F, showing the button's structure as an inverted structure, which may occur when the user actuates the touch region so that it feels like the button is depressed by the applied touching force;
  • FIG. 3H is a schematic of a touch screen actuator pad of the present invention, utilizing CNMs actuator strings set up in a staggered meshwork when no muscle actuation is controlled;
  • FIG. 3I is the touch screen actuator pad of FIG. 3H, showing the muscles in region (D) controlled to actuation, such that the screen bends outbound in that region;
  • FIG. 3J is a schematic perspective view of the region (D) of the touchscreen actuator pad of FIG. 3I;
  • FIG. 4 is an example of a case of the invention's embodiment, showing the TRAVERSING of the virtual viewing point controlled by a two finger slide sidewards, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 5 is an example of a case of the invention's embodiment, showing the changing of the viewing HEIGHT of the virtual viewing point controlled by a two finger slide upwards, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 6 is an example of a case of the invention's embodiment, showing the DEPARTING, i.e., the increase of the view distance of the virtual viewing point to an object, controlled by a two finger slide toward one another, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 7 is an example of a case of the invention's embodiment, showing the CLOSING UP, where the decrease of the view distance of the virtual viewing point to an object is controlled by two fingers sliding away from each other, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 8 is an example of a case of the invention's embodiment, showing the NICKING and LATERAL turning of the virtual view point in one consecutive one finger sliding action, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 9A is an example of a case of the invention's embodiment, showing the TILT, sidewards turning of the virtual view point controlled by a two finger sliding action, with the virtual viewing direction and position of ‘before’ and ‘after’ doing the sliding action illustrated in the boxes on the side;
  • FIG. 9B is an example of a case of the invention's embodiment, showing the TILT, sidewards turning of the virtual view point controlled by a ROL gesture command, with the virtual viewing direction and position of ‘before’ and ‘after’ executing the ROL command illustrated in the boxes at the lower portion of FIG. 9B;
  • FIG. 9C is an example of a PICK gesture command, closing the index finger and thumb finger tips of the right hand seen from underneath, with a real hand's image shown side by side with a schematized hand at the same position, reduced to the basics that the gesture algorithm may compute to discriminate relevant phalanges, with each joint reduced to a point;
  • FIG. 9D shows a schematized hand such as that of FIG. 9C, showing a 90 degree turning gesture as a LOCK gesture;
  • FIG. 9E shows a schematic of the two dimensional function menu controllable by three gestures, with the left column indicating the order of the commands necessary to arrive at the choice of ‘Free virtual camera view’;
  • FIG. 9F shows the normal condition of a projected key area with a closing finger further than 2 cm from the key area;
  • FIG. 9G shows the projected key area from FIG. 9F in a stimulated condition with a closing finger within a distance of 2 cm, with the projection illumination, borderline size and font width increased;
  • FIG. 9H shows the projected key area from FIGS. 9F and 9G in a switched condition with a finger just tipping at the key area's surface, with the projection illumination changing color (from white to red) and the keys inside changing from deilluminated to green;
  • FIG. 10 is an example of a state of the art scene such as known from European Publication No. EP000002136346A2, showing the subject vehicle within the gray oval (I) and wirelessly transmitted parking spot positions a and b, transmitted by one or more other vehicles having appropriate means for measuring out the parking gap distance and transferring its coordinates;
  • FIGS. 11A and 11B show examples of parking gaps, where there may be gaps which are big enough in size, but still not relevant to consider, such as shown in the example of FIG. 11A, with the example of FIG. 11B showing a relevant parking gap, with both FIGS. 11A and 11B corresponding to the parking spots marked with (a) and (b) in FIG. 10;
  • FIG. 12 is an example of an inventive embodiment of a display system, preferably a touch screen, showing an advance over what is shown in FIG. 10 and over known systems, in that not only the parking spot coordinates are transmitted, but also visual data, with the scenery at and around the parking spots, captured by vehicle vision cameras of vehicles which have passed the potential parking spots, also transmitted, such that (a) and (b) are transferred as well and presented to the driver for consideration, and with the potential driving path also shown to the driver of the vehicle;
  • FIG. 13 is a consecutive example to the inventive embodiment from FIG. 12, with the user providing a touch action (or approach or proximity of the driver's finger) onto the scene area (b), which triggers the selection, i.e., the ENTER(-ring), of scene (b);
  • FIG. 14 is a consecutive example to an inventive embodiment from FIG. 13, showing that, when triggering/ENTER(-ring) of scene (b), the icon-like parking lot scene becomes enlarged to the major part of the display for exploration by the driver;
  • FIG. 15 is a consecutive example to an inventive embodiment from FIG. 14, showing that the driver may input the two finger slide command for CLOSING up or zooming in;
  • FIG. 16 is a consecutive example to FIG. 15 of the invention's embodiment, with the virtual viewing point driven forward, which closes up the scene for deeper inspection;
  • FIG. 16A is a consecutive example to FIG. 16, showing a park box in the size of the host vehicle being projected (overlaid) to the displayed parking spot scene;
  • FIG. 17 is an example of an inventive embodiment, with the scenery of the gray oval areas I, II and III captured by vision cameras recently, and with the area outside the ovals not captured recently, with that data consisting mostly of historical satellite and mapping data from providers such as GOOGLE STREET VIEW™, and when departing a scenery, first the vision cameras' images, then the (historical) street view data and then satellite and/or map data are displayed;
  • FIGS. 18A and 18B illustrate the maximum height which may be displayed by a real time top view, before switching to historical data when increasing the height or distance;
  • FIG. 19 is a schematic illustration of two dimensional (2D) imposters which are set up in horizontal lines, with the horizontal shells having the same distance, and with the lines at the horizon appearing closer to each other than the nearer ones from the slightly elevated viewing angle;
  • FIG. 20 is the same scenery as shown in FIG. 19, with the view being elevated such that the viewing angle in FIG. 20 is substantially or fully vertical;
  • FIG. 21 is different from FIGS. 19 and 20, and shows the virtual view point as being identical to the virtual projector position and direction;
  • FIG. 22 is the same as FIG. 21, with the exception that the 2D imposter or at least its lower end becomes smoothly turned instead of a sharp edge;
  • FIG. 23A shows a human image that is captured by the vision system camera(s) as a relevant object as being projected onto a (single) imposter, with the virtual view point slightly elevated looking down;
  • FIG. 23B is similar to FIG. 23A, but shows that the virtual view point has risen (either by manual input, automated by the system or because the viewer has elevated his viewing angle by raising his or her head which was captured by a head/eye tracking system) in comparison to FIG. 23A, and the virtual projection pane (the imposter) has been bent down further to stay orthogonal;
  • FIG. 23C is similar to FIG. 23A, but shows that the virtual view point has risen furthermore in comparison to FIGS. 23A and 23B, and shows that the virtual projection pane (the imposter) has been bent down to nearly horizontal;
  • FIG. 24A is a schematic of an inventive embodiment of the present invention, which does not arrange the imposters in parallel lines of distances but arranges the distance lines (layers) of the imposters circumferentially around the view point like an onion shell and so as to always face the surfaces orthogonal to the user's view point (user's head), with the scenery in the condition of t0 (before movement or at an initial time frame);
  • FIG. 24B is an illustration of the 3D parallax effect in accordance with the present invention, showing that at times when the virtual view point is shifting forward (x direction) (compare to FIG. 24A and FIG. 24C), the 2D imposters move against the viewer, with t0 being the timestamp of beginning the movement and t1 being the timestamp after some movement or at the finish or completion of the movement;
  • FIG. 24C is an illustration of the scenery as in the condition of t1 according the timestamps of FIG. 24B (after movement);
  • FIG. 25 shows a coil having several layers made with Faltflex® produced by Würth Elektronik GmbH & Co. KG, Germany;
  • FIG. 26 shows a PCB integrated coil produced by Würth Elektronik GmbH & Co. KG, Germany;
  • FIG. 27 shows a virtual top view, with the source images taken from the vehicle wide angle view cameras 14 a, 14 b, 14 c, 14 a, from FIG. 1 for composing the virtual image which is projected to a bowl shape virtual projection pane, shown with the view to the rear, shifted to the front left in an angle; and
  • FIG. 28 is a two dimensional (2D) schematic of the virtual top view shown in FIG. 27, with the virtual bowl shape projection pane (7) shown, and with the elevated view from the rear of the virtual camera (or virtual viewpoint) (8) schematized and the camera's (14 a, 14 b, 14 c) viewing angle schematized in gray as well.
  • DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • A driver assist system and/or vision system and/or object detection system and/or alert system may operate to capture images exterior of the vehicle and process the captured image data to detect objects at or near the vehicle and in the predicted path of the vehicle, such as to assist a driver of the vehicle in maneuvering the vehicle in a rearward direction. The object detection may utilize detection and analysis of moving vectors representative of objects detected in the field of view of the vehicle camera, in order to determine which detected objects are objects of interest to the driver of the vehicle, such as when the driver of the vehicle undertakes a reversing maneuver.
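The moving-vector analysis described above is not spelled out in code in this application; purely as an illustrative, non-limiting sketch (all names and thresholds are hypothetical assumptions), a filter selecting detected vectors that may represent objects of interest during a reversing maneuver could look like:

```python
def objects_of_interest(motion_vectors, min_magnitude=2.0, toward_vehicle_only=True):
    """Filter per-feature motion vectors (x, y, dx, dy) down to those that
    may indicate an object of interest behind a reversing vehicle: vectors
    must be strong enough and, optionally, point toward the camera (here
    crudely approximated as downward in image coordinates, dy > 0)."""
    hits = []
    for (x, y, dx, dy) in motion_vectors:
        magnitude = (dx * dx + dy * dy) ** 0.5
        if magnitude < min_magnitude:
            continue  # too weak to distinguish from noise
        if toward_vehicle_only and dy <= 0:
            continue  # moving away; not of interest in this crude model
        hits.append((x, y, dx, dy))
    return hits
```

A real implementation would derive the vectors from frame-to-frame feature tracking and apply far more careful geometric reasoning; the sketch only illustrates the filtering step.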
  • Referring now to the drawings and the illustrative embodiments depicted therein, a vehicle 10 includes an imaging system or vision system 12 that includes one or more imaging sensors or cameras (such as a rearward facing imaging sensor or camera 14 a and/or a forwardly facing camera 14 b at the front (or at the windshield) of the vehicle, and/or a sidewardly/rearwardly facing camera 14 c, 14 b at the sides of the vehicle), which capture images exterior of the vehicle, with the cameras having a lens for focusing images at or onto an imaging array or imaging plane of the camera (FIG. 1). The vision system 12 is operable to process image data captured by the cameras and may provide displayed images at a display device 16 for viewing by the driver of the vehicle. Optionally, the vision system may process image data to detect objects, such as objects to the rear of the subject or equipped vehicle during a reversing maneuver, or such as approaching or following vehicles or vehicles at a side lane adjacent to the subject or equipped vehicle or the like.
  • Driver assistant vehicle vision systems featuring virtual top views are known (such as described in U.S. Pat. No. 7,161,616 and/or PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US11/62834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012-075250, and/or PCT Application No. PCT/US2012/048800, filed Jul. 30, 2012 (Attorney Docket MAG04 FP-1908(PCT)), and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012 (Attorney Docket MAG04 FP-1819(PCT)), and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012 (Attorney Docket MAG04 FP-1959(PCT)), and/or PCT Application No. PCT/US2012/068331, filed Dec. 7, 2012 (Attorney Docket MAG04 FP-1967(PCT)), and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011 (Attorney Docket DON01 P-1797), and/or U.S. provisional application Ser. No. 61/613,651, filed Mar. 21, 2012, and/or German Publication Nos. DE102009025205A1 and DE102010010912A1, and/or European Publication No. EP000002136346A2, which are all hereby incorporated herein by reference in their entireties). Such top view systems are used for assisting the driver while backing up the vehicle, filling in blind spots for safe turning of the vehicle and/or the like.
  • It is known to generate a top view by synthesizing images captured by multiple vehicle incorporated cameras in a bowl like shape (such as described in U.S. Pat. No. 7,161,616, such as at FIG. 33 and FIG. 50 of U.S. Pat. No. 7,161,616). It is also known to use different z-x angles of the virtual viewpoint looking at the car body (such as described in U.S. Pat. No. 7,161,616, such as at FIGS. 20A-20D of U.S. Pat. No. 7,161,616), and/or to have a view from the top (z direction) to specific sections (such as described in German Publication No. DE102009025205A1, such as at FIG. 4, and/or U.S. Pat. No. 7,161,616, such as at FIG. 27D of U.S. Pat. No. 7,161,616), and/or to have different virtual view point heights (z-direction) (such as described in U.S. Pat. No. 7,161,616, such as at FIGS. 19A-19F of U.S. Pat. No. 7,161,616).
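As a non-limiting sketch of the bowl-like projection shape mentioned above (the flat radius and curvature values are illustrative assumptions, not values taken from U.S. Pat. No. 7,161,616), the projection surface can be modeled as a ground plane that curves upward beyond some radius, so that distant image content lands on the bowl's wall instead of being smeared across the ground plane:

```python
import math

def bowl_height(x: float, y: float, flat_radius: float = 5.0,
                curvature: float = 0.15) -> float:
    """Height of a hypothetical bowl-shaped projection surface at ground
    point (x, y): zero (flat) inside flat_radius, rising quadratically
    outside it."""
    r = math.hypot(x, y)
    if r <= flat_radius:
        return 0.0
    return curvature * (r - flat_radius) ** 2
```

Camera pixels would then be projected onto this surface and re-imaged from the virtual viewpoint to compose the displayed top or surround view.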
  • The system described in U.S. Pat. No. 7,161,616 provides or includes preselected virtual viewpoints whose mappings are precalculated and stored in look-up tables. All virtual viewpoint angles are tilted at an x-z angle; no angles other than 0 degrees are provided in the y-direction. Views onto other preselected regions besides the center of the vehicle body are disclosed in German Publication No. DE102009025205A1, but there is no tilt angle in the y-direction involved, just a shift of the virtual viewpoint in the x-y-z directions.
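A minimal sketch of the look-up-table approach described above, in which the per-pixel mapping for a preselected virtual viewpoint is precalculated once and then applied per frame (function and parameter names are hypothetical, not taken from the cited patent):

```python
def build_lut(width, height, warp):
    """Precompute, for every display pixel (u, v), the source-image
    coordinate returned by `warp`, the viewpoint-specific mapping.
    This is done once per preselected virtual viewpoint."""
    return [[warp(u, v) for u in range(width)] for v in range(height)]

def render(lut, source):
    """Compose a display frame by a simple per-pixel gather through the
    precomputed look-up table."""
    return [[source[sy][sx] for (sx, sy) in row] for row in lut]
```

The trade-off this illustrates: lookup tables make rendering cheap but restrict the system to the viewpoints that were precalculated, which is exactly the limitation the freely positionable virtual viewpoint of the present invention addresses.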
  • Procedures for adapting the virtual projection plane according to the virtual viewpoint's elevation for receiving plausible projection views are also described in PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012 (Attorney Docket MAG04 FP-1907(PCT)), which is hereby incorporated herein by reference in its entirety.
  • Overlays and highlighting of hazards around the vehicle are also described in German Publication No. DE102010010912A1 and/or U.S. Pat. No. 7,161,616, which are hereby incorporated by reference in their entireties.
  • Receiving remote destination data, particularly parking spot coordinates, is described in European Publication No. EP000002136346A2. The disclosure of EP000002136346A2 does not reveal any intention or method for judging the feasibility or quality of the indicated parking spots. There may be gaps which are big enough in size, but still not relevant to consider, such as in areas where parking is prohibited or impossible.
  • Motion parallax (or parallax scrolling) is a natural optical effect. It can be used artificially to give a more or less flat scene an impression of depth, comparable to a stage having flat paper coulisses in the foreground and background of a scene play. More technically, but in a similar manner, this effect can be used in computer games (see, for example, http://en.wikipedia.org/wiki/Parallax_scrolling). Such parallax scrolling is a special scrolling technique typically used in computer graphics, wherein background images move past the camera more slowly than foreground images, creating an illusion of depth in a 2D video game and adding to the immersion. See also the short video at ‘http://www.youtube.com/watch?v=Jd3-eiid-Uw’, which shows a so-called ‘fake 3D’ effect based on motion parallax synchronized to the viewer's head or eye movement. For such a fake 3D effect, an appliance is required for tracking the head of the watching person. On conventional single 2D display systems, just one person can enjoy the fake 3D effect at a time. Parallax mapping (see, for example, http://en.wikipedia.org/wiki/Parallax_mapping), which is also referred to as offset mapping or virtual displacement mapping, is an enhancement of the bump mapping or normal mapping techniques applied to textures in 3D rendering applications such as video games.
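The depth-dependent scrolling speed that underlies parallax scrolling can be captured in a single relation; the following sketch (with illustrative names and a unit focal length, not drawn from any cited source) shows nearer layers shifting farther across the screen than distant ones for the same camera motion:

```python
def layer_offset(camera_dx: float, depth: float, focal: float = 1.0) -> float:
    """Horizontal screen offset of a 2D layer at the given depth when the
    (virtual) camera moves by camera_dx: offset is inversely proportional
    to depth, so near layers scroll fast and far layers scroll slowly."""
    return camera_dx * focal / depth

# For one camera motion, compute the offsets of four layers at
# increasing depth; the nearest layer moves the most.
offsets = [layer_offset(10.0, d) for d in (1.0, 2.0, 5.0, 10.0)]
```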
  • In vehicle systems, a vision system may be operable to track the driver's eye, such as for determining a gaze direction or determining drowsiness of the driver. For example, systems for detecting the driver's drowsiness are disclosed in U.S. Publication No. US-2005/0163383 (which is hereby incorporated herein by reference in its entirety), and systems for controlling airbag deployment are disclosed in U.S. Publication No. US-2004/0085448 (which is hereby incorporated herein by reference in its entirety). A vision system may also adjust the view or alignment of head up display overlays to the outside scene, or may control airbag deployment or control a camera and/or illumination source of the vehicle (such as described in U.S. Pat. No. 7,914,187, which is hereby incorporated herein by reference in its entirety) and/or the like.
  • Also, German Publication No. DE102009009047A1 (which is hereby incorporated herein by reference in its entirety) describes how a three dimensional (3D) scene can be segmented into two dimensional (2D) layer coulisses (so-called 2D imposters) positioned at different distances for a collision avoidance system; that segmentation is meant for machine vision (data processing of image data) and not for human vision (displayed on a display screen for viewing by the driver of a vehicle). Such 2D imposters, referred to further below, are known from computer games (see, for example, http://www.gamasutra.com/view/feature/2501/dynamic2d_imposters_a_simple_.php).
  • Driver assistant vision system virtual cameras are typically not independent in position and angle: the driver typically cannot control the virtual viewing position freely and easily. To suit all driving or environmental conditions, and especially to show all hazards, the virtual view position and direction should be independent, preferably selectable in x-y-z position, height and tilt angle by the driver or automatically by the vision system control, in order to always provide the best view according to the current conditions. The virtual viewpoint should be intuitively steerable. The present invention provides a useful improvement to such vision systems, such as the system described in European Publication No. EP000002136346A2, incorporated above, in conjunction with vision system virtual camera control.
  • The present invention provides a system that uses depth segmented 2D layer coulisses (2D imposters), such as described in DE102009009047A1, incorporated above, to provide a fake-3D effect in a vehicular surround view vision system based on tracking of finger and/or hand movement at or proximate a touch screen or touch sensitive device and/or tracking of the driver's head and accordingly motion parallax scrolling of the virtual view point.
  • Referring now to the drawings, the following provides a description of the driver assist system of the present invention:
  • (1) Instead of using pre-calculated mapping tables, the mappings of the virtual view are calculated in real time.
  • (2) Instead of providing a fully vertical top view of situations, hazards are presented within a close-up or centering (possibly split) screen, and flexible, preferably freely or manually positionable views from an x-y-z angle are used. Optionally, the system may capture images of one or more than one hazard at a time (see FIG. 2).
  • (3) The close-up or centering itself draws the driver's attention to the hazard. The hazards may also be highlighted by overlays (see image 4 in FIG. 2) or by a color change.
  • (4) The virtual viewpoint may be altered by gestures, such as hand gestures or finger gestures or the like. The gestures may be detected by suitable gesture sensing devices such as, for example (one, several or in combination):
      • a. Time of flight sensor (such as PMD from PMD-Technologies®);
      • b. Stereo camera disparity detection;
      • c. Mono camera with pseudo stereo via motion disparity detection;
      • d. Structured light sensor (such as Microsoft Kinect®); and/or
      • e. Touch screen or touch sensitive device or proximity sensor.
        The detection devices may be installed inside the passenger compartment at a position suitable to detect the driver's gestures at most or all times, preferably integrated into the center glove compartment, the top light column or the central mirror mounting area. By utilizing an image-based or non-touch based gesture detection device, the gesture detection device may detect the gestures of a gloved hand as well as a non-gloved hand. Optionally, a touch sensitive device or proximity sensor may be utilized to detect and discern hand gestures by the driver of the vehicle, and optionally the touch sensitive device or proximity sensor may be operable to detect touch or proximity of a gloved hand or finger or fingers as well as a non-gloved or uncovered hand or finger or fingers.
  • The system may be operable to detect and discern various gestures and may associate various individually discernible gestures with various operations. For example, the system may detect and discern the following gestures for switching camera modes and controlling the camera and its viewpoint:
      • ‘CHOOSE’ or ‘PICK’ or ‘ENTER’ (mode/control/enter functional group/sub function);
      • ‘ABORT’ mode;
      • ‘RECENTER’ viewpoint;
      • ‘RECALIBRATE’ viewpoint;
      • ‘REALIGN’ viewpoint;
      • ‘TRAVERSING’ (sideward) shifting viewpoint;
      • ‘CHANGING viewpoint's HEIGHT’;
      • ‘CLOSING UP’ viewpoint;
      • ‘DEPARTING’ viewpoint;
      • ‘NICKING VERTICAL’ viewing angle;
      • ‘NICKING LATERAL’ viewing angle;
      • ‘ROL’ view;
      • ‘TILT’ view;
      • ‘LOCK’ entry; and/or
      • ‘UNLOCK’ entry
        • (descriptions similar to those referred to in (4)).
          ‘LOCK’ entry may be performed by a turning gesture of the right hand while having the index finger and the thumb closed (see, for example, FIG. 9C), exemplifying the turning of a key inside a key hole, such as shown in FIG. 9D. The ‘LOCK’ entry may function comparably to key pad lock functions on cell phones, here for gesture entries (not necessarily limited to automotive vision camera applications). As soon as a user enters a gesture LOCK command, any subsequent gestures, entered intentionally or by mistake, may be ignored except for the ‘UNLOCK’ command, or alternatively the system may be “unlocked” by a conventional (physical) button entry or speech command (using known art speech command acknowledgement). The present invention thus provides two dimensional menus that are controllable by just three gestures:
      • ‘UP-DOWN’—rolling the menu downward; such as, for example, by swiping from rearward to forward or vice versa;
      • ‘LEFT-RIGHT’—rolling the menu sideward; such as, for example, by swiping from the right to the left or vice versa;
      • ‘PICK’—for picking a choice; such as, for example, by closing the index finger and thumb finger tips (such as shown in FIG. 9D) or by moving the index finger downward or tapping, such as when clicking a computer mouse.
        One dimension may be the (choice of one) general functional group, such as:
      • Viewing driver assistant system;
      • Radio;
      • Navigation;
      • Phone;
      • Internet;
      • Heating, Ventilation, Air Conditioning System, Seat heating;
      • Seat position (actuator) control;
      • Window lifter control; and/or
      • Openers (Roof top/tilt window control, trunk actuator control, fuel lid and/or the like).
        The second dimension may be the (choice of one) specific feature, which changes when swiping in a direction orthogonal to the general functional group dimension (see FIG. 9E), such as:
      • Sound volume (under the functional group ‘Radio’); such as, for example, becoming louder to the right;
      • Phone functions (under the functional group ‘Phone’); such as, for example, giving the choices one by one when swiping from left to the right:
        • Pick up;
        • Dial/Redial;
        • Phone Contacts;
        • Forward phone;
        • Configure.
      • Virtual Top View Camera (under the functional group ‘Viewing driver assistant system’); such as, for example, giving the choices one by one when swiping from left to the right:
        • Most recent hazard view;
        • Second recent hazard view (when swiping up-down-up after entering ‘Most recent hazard view’);
        • Third recent hazard view;
        • Chase view (from the virtual rear);
        • Side view;
        • Free virtual camera view.
          For example, after ‘PICK’-ing the ‘Free virtual camera view’, the system may offer a control mode for the virtual camera that can be used intuitively by the driver. It may behave as if the driver's right hand were laid onto the top surface of a globe. When the driver rolls the hand forward or sideward, the virtual camera is set up to copy that rolling simultaneously (or in a scaled ratio), such as in the manner shown in FIG. 9B. By using the ‘PICK’ gesture (closing the index finger and thumb finger tips, such as shown in FIG. 9C) while moving the hand, the viewing angle may stay fixed but the virtual camera may move transversally forward, sideward and vertically. Opening the index finger and thumb finger tips while moving the hand downward may comprise the earlier mentioned ‘CLOSING UP’ viewpoint gesture. When the user carries out this gesture the virtual view may zoom in. The opposite ‘DEPARTING’ viewpoint gesture may effect a de-zooming.
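The two-dimensional, three-gesture menu described above can be sketched as a small state machine. The gesture token names, the wrap-around behavior and the abbreviated menu contents are assumptions of this sketch:

```python
# Sketch of a two-dimensional menu driven by only three gestures:
# 'UP-DOWN' rolls through the functional-group dimension, 'LEFT-RIGHT'
# rolls through the feature dimension, 'PICK' selects the current choice.
# The menu contents are an abbreviated, assumed subset of the text's lists.

MENU = {
    "Radio": ["Sound volume"],
    "Phone": ["Pick up", "Dial/Redial", "Phone Contacts", "Forward phone", "Configure"],
    "Viewing driver assistant system": [
        "Most recent hazard view", "Chase view", "Side view", "Free virtual camera view",
    ],
}

class GestureMenu:
    def __init__(self, menu):
        self.menu = menu
        self.groups = list(menu)   # dicts preserve insertion order (Python 3.7+)
        self.g = 0                 # index in the functional-group dimension
        self.f = 0                 # index in the feature dimension

    def gesture(self, token):
        if token == "UP-DOWN":     # roll the group dimension (with wrap-around)
            self.g = (self.g + 1) % len(self.groups)
            self.f = 0
        elif token == "LEFT-RIGHT":  # roll the feature dimension
            features = self.menu[self.groups[self.g]]
            self.f = (self.f + 1) % len(features)
        elif token == "PICK":      # pick the current choice
            return self.menu[self.groups[self.g]][self.f]
        return None
```

For example, starting at ‘Radio’, the sequence UP-DOWN, LEFT-RIGHT, PICK would select ‘Dial/Redial’ under ‘Phone’.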
  • (5) The virtual viewpoint may be altered by sliding and tapping or tipping on a touch screen.
      • a. The slides may follow an intuitive logic.
      • b. One, two and/or three finger slides and tapping actions are dedicated to specific vision control groups:
        • i. Tapping may be dedicated as a ‘CHOOSE’, ‘ENTER’, ‘ABORT’, ‘RECENTER’, ‘RECALIBRATE’ or ‘REALIGN’ command.
          • 1. The realigning function may automatically turn the virtual view back to the vehicle's body and turn the view's top upright.
        • ii. Sliding two fingers (mostly) horizontally may be dedicated as viewpoint ‘TRAVERSING’. The virtual viewpoint moves sideward without changing the viewing angle.
        • iii. Sliding two fingers (mostly) vertically may be dedicated as ‘CHANGING viewpoint HEIGHT’.
          • 1. Functions (ii) and (iii) may alternatively behave slightly differently by fixing the aiming point that the virtual view looks at. The virtual viewpoint then turns in all three angles while TRAVERSING sideward or in HEIGHT or both.
        • iv. Sliding two fingers (mostly) away from each other may be dedicated as ‘CLOSING UP’ (or zooming in, but zooming is not fully identical to rolling a camera forward, and rolling is preferred). The virtual viewpoint rolls toward the aiming point it is looking at. The position changes, but not the viewing angle.
        • v. Sliding two fingers (mostly) toward one another may be dedicated as ‘DEPARTING’. The virtual viewpoint rolls away from (or de-zooms relative to) the aiming point it is looking at. The position changes, but not the viewing angle.
        • vi. Sliding one finger (mostly) vertically may be dedicated as changing the viewpoint's ‘NICKING’ (vertical) angle.
        • vii. Sliding one finger (mostly) horizontally may be dedicated as changing the viewpoint's ‘LATERAL’ angle.
          • 1. The NICKING and LATERAL angles may be combined into ‘ROL’, which serves both functions at the same time with one finger. The virtual view changes the viewing angle, but not the position, during this function.
          • 2. The ROL(-ing) function may come with a behavior as if the rolling view had a kind of inertial mass and friction, so that an already rolling view continues rolling in the same direction with decreasing speed after the sliding finger tips are taken off the touch screen.
        • viii. Sliding one finger around another may be dedicated to ‘TILT’-ing the virtual view sideward (the third degree of freedom besides NICKING and LATERAL turning).
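The slide-to-operation dedication of (5) can be sketched as a simple classifier. The finger-count and dominant-axis rules follow the list above, while the function name, the spread convention and the tie-breaking details are assumptions of this sketch (the three-finger and circular TILT cases are omitted for brevity):

```python
# Illustrative dispatch from a touch slide to the viewpoint operations of
# (5): two-finger slides traverse or change height (or pinch to close up /
# depart), one-finger slides change the NICKING or LATERAL angle.

def classify_slide(fingers, dx, dy, spread=0.0):
    """Map a slide to a viewpoint operation name.

    fingers -- number of finger tips on the screen
    dx, dy  -- dominant slide components (screen units; x horizontal)
    spread  -- change of distance between two tips (+ apart, - together)
    """
    if fingers == 2:
        if abs(spread) > max(abs(dx), abs(dy)):   # pinch dominates the slide
            return "CLOSING UP" if spread > 0 else "DEPARTING"
        return "TRAVERSING" if abs(dx) >= abs(dy) else "CHANGING HEIGHT"
    if fingers == 1:
        return "LATERAL" if abs(dx) >= abs(dy) else "NICKING"
    return None
```

A real implementation would additionally debounce taps and combine NICKING and LATERAL into the ‘ROL’ behavior described above.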
  • (6) Referring now to FIGS. 9F-H, the gesture interface may work in combination with physically present keys, keys projected onto a surface, or a capacitive sensor key pad, as another aspect of the invention, optionally utilizing the haptic interface referred to below comprising incorporated pad or needle actuators or the like. When the user's finger comes close (such as, for example, to within about 2 cm) to a key pad or key area or the like (of any of the above mentioned types), the projection illumination or backlight illumination may increase simultaneously or may change color, or the key's shape may change, or projected lines may increase in size, or the key may increase in size or pop out (or pop out more) as if attracted by the user's approaching finger (see FIGS. 9F and 9G). This provides the user the feedback that his or her intention is acknowledged. When striking and/or when lifting the button, the key may change its projection illumination or backlight illumination, such as, for example, by a short flickering, inverting or turning on of the center, or may change color (again), or projected lines may decrease or increase in size, or the key may decrease in size or pop in (as referred to below; compare FIGS. 3F and 3G), or its shape may change, such as by rounding up the edges (more) (see FIG. 9H). After the key is released, the effect may fade out slowly (such as over about two seconds or thereabouts) back to its normal state.
  • (7) When increasing the distance of the virtual view from the vehicle, by raising the viewpoint's height or distance, the scene captured by the vehicle's cameras is limited, so the more distant area must be filled.
      • a. This may happen by using image data of remote image providing facilities or vehicles connected by any kind of remote communication channel, or
      • b. by using historical image data, stored earlier when passing the scene which is to be projected, or
      • c. by using image data out of a street and/or satellite view database like ‘GOOGLE STREET VIEW’™ and/or ‘GOOGLE EARTH’™ or the like.
        • i. This kind of top view departure and/or approach may be performed automatically.
          • 1. . . . when turning on or switching off the vision system, as a start up/end up animation, or . . .
          • 2. . . . in conjunction with navigation mapping to provide the actual vehicle's position on a map (possibly relative to a distance to a destination spot).
            • a. Also a camera side and/or top view of the parking spot scene may be provided and also a time stamp of when the spot was detected (see FIG. 12).
            • b. To determine whether the host vehicle fits into the optional parking spot, there may be a ‘park box’ overlaid on the parking spot scene, which has the same size that the host vehicle would take within that scene. The box may be semi-transparent and may be adapted according to the virtual distance (such as shown in FIG. 16B). Additionally, the parking spot dimensions may be displayed as numbers as an additional overlay. Mismatching may be highlighted (such as in a selected color, such as red, and/or such as by flashing or blinking, and/or such as via hatched borderline bars of the park box and/or the text overlay). The park box may appear automatically or may be selectively triggered by the user, preferably by a touch onto a touch screen in the vehicle.
            • c. The gap may be rated by a best choice algorithm based on the elapsed time since the parking gap was spotted, the degree of difficulty to reach the gap, the walking distance to the destination, the driving distance to the parking gap, a criminal or accident focus, child safety when departing, a POI, parking costs, and/or the like.
        • ii. This kind of top view departing and/or closing may be controlled automatically or manually, such as via a user input or the like controlled by the driver or occupant of the vehicle.
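The best choice rating of parking gaps mentioned above may, for example, be sketched as a weighted penalty score. The criteria follow the text, while the weight values, the criterion subset and the lower-is-better polarity are assumptions of this sketch:

```python
# Sketch of a "best choice" rating over candidate parking gaps: each
# criterion from the text contributes a weighted penalty, and the gap
# with the lowest total penalty wins. Weights are illustrative only.

WEIGHTS = {
    "elapsed_min": 1.0,   # minutes since the gap was spotted
    "difficulty": 2.0,    # difficulty of reaching the gap (0..10)
    "walk_m": 0.01,       # walking distance to destination, meters
    "drive_m": 0.005,     # driving distance to the gap, meters
    "cost_per_h": 0.5,    # parking cost per hour
}

def rate_gap(gap):
    """Weighted penalty score of one candidate gap (dict of criteria)."""
    return sum(WEIGHTS[k] * gap.get(k, 0.0) for k in WEIGHTS)

def best_gap(gaps):
    """Return the candidate gap with the lowest penalty score."""
    return min(gaps, key=rate_gap)
```

Further criteria from the text (criminal/accident focus, child safety, POI) would simply add more weighted terms.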
  • (8) Optionally, the virtual viewpoint may be shifted according to the driver's head movement.
      • a. The driver's head and/or eyes may be tracked by a surveillance system, which is preferably within the vehicle compartment, preferably fixed vis-à-vis the driver.
        • i. Optionally, the driver surveillance system may comprise a camera based system.
          • 1. The camera may use any suitable image algorithm that is capable of discriminating the position and distance of the driver's head and/or eyes relative to the camera.
          • 2. The camera may operate within visible light and, optionally and desirably, within invisible wavelengths, such as infrared and/or near infrared light wavelengths.
          • 3. The system may include an active light source to illuminate the driver's contours so that the surveillance system receives enough contrast to operate. Optionally, and desirably, the light source may comprise one or more infrared or near infrared light emitting diodes (LEDs).
        • ii. Optionally, the driver surveillance system may comprise a LASER based system.
          • 1. The LASER may actively scan a surveillance area in which the driver's head is typically found or located and/or may track the driver's head. An algorithm may form a cloud of scan point data, which may be processed in a manner where typical body markers are discriminated so these can be tracked, which enables the system to determine the driver's head position and distance relative to the LASER source.
      • b. The tracked position of the driver's head in relation to the position of the camera or LASER or sensor may be related to the display's position by any suitable trajectory algorithm of the vision system.
      • c. When the driver moves his or her head, the virtual vision viewpoint is shifted accordingly. Due to correct parallax scrolling (fake 3D), such as known from computer games, the depth segmented 2D layer coulisses known from DE102009009047A1 for machine vision (also known as ‘2D imposters’ from computer games) become virtually shifted accordingly within the inventive automotive human vision embodiment. Accordingly, the fake 3D vision image on the display shifts over the display responsive to the driver's head movement.
      • d. The present invention thus turns dynamic 2D imposters orthogonal to the user's view, such as by using aspects of computer games.
      • e. In DE102009009047A1, the imposter layers are arranged in parallel lines of distances. The present invention provides a panoramic vehicle vision system that arranges the distance lines (layers/shells) of the imposters circumferentially, in onion-like shells around the virtual viewpoint (which equates to the virtual projector), so that the user always faces the surfaces orthogonally to the user's point of view (the user's head).
      • f. When the viewpoint is raised, the imposters tilt (bend) to the back, still showing their fronts to the viewer.
        • i. The lower end may turn in a curved shape towards the center when the view point is raised.
      • g. According to the rules of motion parallax, when the virtual viewpoint is shifting forward (x direction) (compare FIGS. 24A to 24C), the 2D imposters move toward the viewer. The close imposters move farther and respectively faster than the imposters in the background, and the imposters in front move less than those at the side. The imposters may grow in size when they approach and shrink when they become more distant. The behavior is correspondingly opposite when moving virtually away. These effects more or less apply when the view is mostly at a horizontal angle (x-z plane).
        • i. For economic reasons (including the processing load of the control or microprocessor), the imposters' sizes and mappings may not change continuously but may change in steps according to the distance and angle.
        • ii. The mappings may be simplified by projecting simplified symbols (such as a car, a person, a dog, a tree, a shopping cart, and/or the like) instead of the captured (and possibly transformed) images from the vision system's cameras themselves. The symbol mappings may be calculated offline and stored in a look up table for every angle and size desired for use in the vision system.
      • h. The present invention thus provides a system that is operable to scroll the mappings on the 2D imposter surfaces according to the viewing angle the driver is looking at. This is different from ‘parallax occlusion mapping’, also known from the computer game area (see, for example, http://en.wikipedia.org/wiki/Parallax_occlusion_mapping; and/or http://www.youtube.com/watch?v=gcAsJdo7dME&feature=related). Such parallax occlusion mapping is used to procedurally create 3D definition in textured surfaces by using a displacement map (similar to a topography map) instead of generating new geometry.
      • i. The present invention thus provides a vehicle vision system that uses ‘parallax occlusion mapping’ effects on high performance systems.
      • j. The present invention thus provides a vehicle vision system that uses ‘parallax mapping’ (or ‘parallax shading’) effects known from computer games (see, for example, http://en.wikipedia.org/wiki/Parallax_mapping), which may require less processor capacity than ‘parallax occlusion mapping’ (‘parallax mapping’ fakes shades onto where virtual gaps would be, depending on the virtual light source's direction; ‘occlusion mapping’ additionally virtually hides structures/profiles in the background that should be hidden by structures/profiles in the foreground, but still just maps all of that onto a flat projection surface, without calculating geometries of the structure/profile itself; and ‘parallax occlusion mapping’ requires more calculation power than ‘parallax shading’).
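The motion parallax shifting of the depth segmented 2D imposters responsive to head movement, as described in (8), can be sketched as follows. The inverse-distance shift and scale rules are a simplified assumption of correct parallax scrolling, not the exact mapping of any implementation:

```python
# Sketch of head-tracked motion parallax over depth-segmented 2D imposter
# layers: a lateral head movement shifts each layer by an amount that
# falls off with its virtual distance, and scales it inversely with
# distance, so near imposters move (and loom) more than distant ones.

def imposter_transform(head_dx, layer_distance, reference_distance=1.0):
    """Return (shift, scale) for one imposter layer.

    head_dx            -- lateral head movement (display units)
    layer_distance     -- virtual distance of the layer from the viewpoint
    reference_distance -- distance at which a layer moves 1:1 with the head
    """
    shift = head_dx * reference_distance / layer_distance
    scale = reference_distance / layer_distance
    return shift, scale
```

Applying this per layer on each head-tracking update yields the fake 3D impression on an ordinary 2D display.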
  • With reference to the drawings, FIG. 19 is a schematic illustration of 2D imposters which are set up in horizontal lines, with the horizontal shells having the same spacing. The lines near the horizon appear closer to each other than the nearer ones due to the slightly elevated viewing angle. In the illustrated embodiment, all imposter shells have the same height, with the more distant ones appearing smaller than the close ones. In this example, no real scenes are mapped (projected) by the virtual projector onto the imposter surfaces. In the illustrated embodiment, the use is for human visualization. Such a vision system may utilize aspects described in DE102009009047A1, incorporated above, such as for machine vision purposes.
  • FIG. 20 is an illustration of the same scenery as in FIG. 19, with the view being elevated so that the viewing angle is now substantially or fully vertical. In cases where the imposters do not turn toward the viewer but stand upright, such as shown in FIG. 20, the (projected mapping) surfaces are not visible any more. The top view is more important for driver assistance vision systems than flat viewing angles.
  • FIG. 21 differs from FIGS. 19 and 20 in that the virtual viewpoint is identical to the virtual projector position and direction (as is typical in automotive human vision systems). To avoid that the projection panes (the imposters) turn out of the view when the virtual viewpoint (and the projector) become elevated, the present invention (in vehicle vision) provides that the imposters are bent back in a way that the surface always stays orthogonal or substantially orthogonal to the viewer (projector). In the example shown in FIG. 21, the lower end of the imposter is bent in at an edge. Such a system may utilize aspects of the systems described in PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012 (Attorney Docket MAG04 FP-1907(PCT)), which is hereby incorporated herein by reference in its entirety, where it is disclosed that a 2D vision system may bend down a fully enveloping bowl-shape-like projection pane, such as when the virtual viewpoint becomes increasingly elevated. In the illustrated embodiment of FIG. 21, the projection pane is divided up into several projection panes having different distances (see also FIG. 19).
  • FIG. 22 is identical to FIG. 21 with the exception that the imposter, or at least its lower end, is smoothly turned instead of having a sharp edge. This may improve the appearance of the transitions from bottom projections to vertical object projections. FIG. 23A illustrates a human captured by the vision system camera(s) as a relevant object being projected onto a (single) imposter. The virtual viewpoint is slightly elevated, looking down, in FIG. 23A, while FIG. 23B shows the virtual viewpoint after it has been raised (either by manual input, automatically by the system, or because the viewer has elevated his viewing angle by raising his head, which was captured by head/eye tracking). The virtual projection pane (the imposter) has been bent down further to stay orthogonal. The mapped (projected) image of the human may have been altered according to parallax shading. Additionally, occlusion mapping may find use in the system of the present invention. FIG. 23C shows that the virtual viewpoint has been raised further in comparison to FIGS. 23A and 23B, such that the virtual projection pane (the imposter) has been bent down to nearly horizontal. The projection (mapping) may show the captured human more as if looking from overhead than from the front. Parallax shading and/or occlusion mapping may also find use in this situation.
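Keeping an imposter surface orthogonal to an elevated viewpoint, as in FIGS. 21-23C, reduces in a 2D side view to tilting the pane back by the viewpoint's elevation angle. The geometry below is a simplified sketch under that assumption:

```python
import math

# Sketch of bending an imposter back so its surface normal keeps pointing
# at the viewer: the backward tilt equals the viewpoint's elevation angle
# relative to the imposter's base. Simplified to a 2D side view.

def imposter_tilt(view_height, view_distance):
    """Backward tilt of the imposter, in radians, for a viewpoint at
    (view_distance, view_height) relative to the imposter's base.

    0 for a viewpoint at ground level, approaching pi/2 (the imposter
    laid nearly flat) as the viewpoint moves directly overhead.
    """
    return math.atan2(view_height, view_distance)
```

At equal height and distance the pane tilts back 45 degrees; for a viewpoint straight overhead it is bent down to horizontal, matching the FIG. 23C description.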
  • FIG. 24A illustrates that the system of the present invention may arrange the imposters or distance lines or layers of the imposters circumferentially around the viewpoint (such as layers of an onion) so as to always face the surfaces orthogonally to the user's viewpoint or the user's head. This is different from what is shown in FIG. 19, which arranges the imposters in parallel lines of distances. The scenery shown in FIG. 24A is in the condition of t0 according to the timestamps of FIG. 24B (before the move).
  • FIG. 24B is an illustration of the 3D parallax effect in accordance with the present invention. At times when the virtual viewpoint is shifting forward (such as in the x direction), and as can be seen with reference to FIGS. 24A and 24C, the 2D imposters move relative to the viewer. The close imposters move farther and respectively faster than the imposters in the background, and the ones in front move less than those at the side. The imposters may grow when they advance and shrink when they become more distant. In the illustrated example, the virtual move forward may be manually controlled by a touch screen sliding command (such as discussed above and such as shown in FIG. 7), but may also or otherwise be automated by the system, or the scene may move because the viewer has moved his or her head forward and this movement was captured by a head/eye tracking system which controls the scenery movement online in all three dimensions to provide the 3D parallax effect to the viewer within the 2D display. The t0 timestamp is the timestamp at the beginning of the movement and the t1 timestamp is the timestamp after the movement or at the finish of the movement. FIG. 24C illustrates the scenery in the condition of the time t1 of FIG. 24B (after movement or after completion of the movement).
  • An entry area, or preferably the (flexible) screen (underneath), may incorporate pad or needle actuators which preferably act in an orthogonal direction for providing an active haptic feedback to inputs of the driver or to actively form structural content that is haptically perceptible, such as (soft) buttons or borderlines. The haptic feedback may comprise any suitable haptic feedback, such as, for example, a "popping in" or depression of a soft button surface such as shown in FIGS. 3D and 3E. The structural area may comprise one or several actuators. The haptically perceivable embossed structure may also be capable of virtually moving over the screen, such as in the form of a transverse wave or the like, by engaging actuator needles in the direct neighborhood or region toward which the embossed structure shall virtually move while releasing the actuators at the trailing end of the virtual structure. Examples of haptic feedback systems and actuators are disclosed in U.S. Publication No. 2009/0244017, German Pat. No. DE102004037644, and International Publication No. WO2009117632 (which are hereby incorporated herein by reference in their entireties).
  • In the present invention, a selection of the actuator materials is provided. There are four materials within the range of choice:
      • Electroactive Polymers (EAPs);
      • (Electro-)Pneumatic Artificial Muscles (E-PAMs);
      • Carbon Nanotube Muscles (CNMs); and/or
      • Embedded coils.
  • An EAP named Vivitouch® (distributed by Artificial Muscle Inc.) is suggested for use in vehicle control panels and mobile computing on the company's website: http://www.artificialmuscle.com/technology.php.
  • The present invention may utilize (E-)PAMs to actuate single needle (-like) actuators, such as shown in FIGS. 3F and 3G, or in pad (-like) actuators, such as shown in FIGS. 3B and 3C. For reaching a feasible longitudinal actuation range (such as about 1 mm or thereabouts) and/or force, the (E-)PAMs may be packed into a stack.
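The reason for packing the (E-)PAMs into a stack can be illustrated with a short calculation: the longitudinal strokes of the stacked layers add up. The per-layer stroke figure is an assumed value for this sketch, not a measured one; dimensions are integer micrometres to keep the arithmetic exact:

```python
# Sketch of stacked actuator displacement: many thin (E-)PAM layers are
# stacked so their individual strokes sum to the roughly 1 mm actuation
# range mentioned above. All dimensions in integer micrometres.

def stack_stroke(layers, stroke_per_layer_um):
    """Total longitudinal stroke of a stack, in micrometres."""
    return layers * stroke_per_layer_um

def layers_needed(target_um, stroke_per_layer_um):
    """Smallest number of layers whose summed stroke reaches the target."""
    return -(-target_um // stroke_per_layer_um)  # ceiling division

# Example: at an assumed 50 um stroke per layer, a 20-layer stack
# reaches a 1 mm (1000 um) actuation range.
```

The force budget stacks analogously, though in practice layer compliance and packaging losses would reduce the ideal sum.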
  • As soon as the feedback actuator density reaches a comparably high level, touchable structures can be displayed in the haptic sense. In the example shown in FIG. 3F, a soft button or touch region is structurally formed by the haptic (feedback) actuator needles, such that the button emerges out of the level of its surroundings within the pad. In this way, the touch region or button is haptically experienceable and discernible. As a reaction to a touch detection at the soft button's surface, and such as shown in FIG. 3F, the button's structure may be changed to an inverted structure. Such an inverted structure may feel to the user as if the button popped in or was depressed by the applied touching force.
  • When using embedded coils, a force outward or inward can be applied in the region of one coil pair, such as can be seen in FIGS. 3D and 3E. The material of region (4) may be relatively stiff (lightest gray) compared to the material in region (3), which enables a limited elastic popping in or out of the area of the inner coil. When applying a static current, the magnetic field with strength B (μ×H) of the inner coil acts orthogonally to the magnetic field of the outer coil, which causes a transversal force from one coil to the other. Since the inner coil area is movable within the elastic material's limits, it becomes pressed outward counter to the immobile outer coil. By applying an AC current, a vibration can be generated. The coils may be printed onto a foil in front of (a stiff display) or behind or integrated into (an elastic display) or onto a dedicated touch area. The coils may have several layers (FIG. 25) or may be integrated into a PCB (FIG. 26), such as with Faltflex® produced by Würth Elektronik GmbH & Co. KG, Germany. As an alternative to having a pair of coils, there may be just one coil surrounding an area having highly permeable particles such as ferrites (μr ≈ 4 to 15,000) or diamagnetic particles such as copper (0 ≤ μr < 1), both resulting in the inner area tending to escape the inducing magnetic field, applying a force outward. The particles may be printed or sputtered onto, or molded into, the carrying (partially elastic) substrate.
  • When using EAPs or CNMs, the actuator pad may consist of actuator strings set up in a meshwork in which the strings are weaved into one another in a primarily two dimensional extension. FIG. 3H shows an example according to the present invention that has a touch screen actuator pad utilizing CNM actuator strings set up in a staggered meshwork when no muscle actuation is controlled. As shown in FIG. 3H, the gray colored muscle strings B act horizontally and the black colored muscle strings A act vertically. The fixation points C of the muscle sections are where the muscle sections are fixed or attached at or to the screen.
  • To apply force at a specific spot, the strings (both the horizontal and the vertical strings) in a direct neighborhood or region are controlled in concert (see FIG. 3I). When actuated, the actuators shorten, so that the fixation points are pulled towards one another. As shown in FIGS. 3I and 3J, when the muscles in region D are controlled to actuation, the screen bends outbound in that region. As best shown in FIG. 3J, the screen E bends outbound or outward generally orthogonal to the A-B plane in that region. A force F is applied in that orthogonal direction. In the schematic of FIG. 3J, the screen's surface is implied or shown as a grid E, but it would comprise a continuous surface in an actual embodiment; the fixation points are shown at C, with the horizontal actuator strings B shown as dark gray and the vertical actuator strings A shown as light gray.
  • When actuated, the screen material flexes or pops out or protrudes (see FIG. 3J), or applies a force in the direction orthogonal to the actuator strings. When popping out is inhibited (such as, for example, when a finger of a user rests on that spot of the screen), the applied force, via the screen's reaction force, becomes perceptible or discernible to the user. The actuator strings of the screen of the present invention are preferably staggered relative to one another, which enables the screen's outward force to be applied evenly, such as is shown schematically in FIGS. 3H-J.
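The region-actuation scheme described above (drive all horizontal and vertical strings near the target spot in concert) can be sketched as a small selection routine. This is a hypothetical model, not taken from the patent: the grid indexing, neighborhood radius, and function names are all assumptions.

```python
# Hypothetical sketch: given a target spot on the staggered actuator mesh,
# select the indices of the horizontal (B) and vertical (A) muscle strings
# whose fixation points fall within a small neighborhood, so that actuating
# them in concert makes the screen bulge outward at that spot.
from typing import Set, Tuple


def strings_to_actuate(spot: Tuple[int, int], radius: int,
                       grid: int) -> Tuple[Set[int], Set[int]]:
    """Return indices of horizontal and vertical strings to drive together."""
    row, col = spot
    horiz = set(range(max(0, row - radius), min(grid, row + radius + 1)))
    vert = set(range(max(0, col - radius), min(grid, col + radius + 1)))
    return horiz, vert


# Driving the strings around grid point (4, 6) with radius 1 on a 10x10 mesh:
h_idx, v_idx = strings_to_actuate((4, 6), 1, 10)
# h_idx == {3, 4, 5}, v_idx == {5, 6, 7}
```

Shortening exactly these string subsets pulls the nearby fixation points together, which is what produces the localized out-of-plane protrusion shown in FIG. 3J.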
  • Therefore, the present invention provides a vehicle vision system that allows a user to manually select and control the display of a top down or surround view image or images to provide a desired view at the display of the vehicle, such as to assist the driver of the vehicle in reversing the vehicle or parking the vehicle or the like. The vision system includes a touch screen that is accessible and usable by the driver of the vehicle to adjust the displayed images (such as a virtual view point, virtual viewing angle, pan, zoom and/or the like) to provide a desired view to the driver of the vehicle. The vision system may provide information from other vehicle vision systems or other information sources, such as parking space information and the like, to assist the driver of the vehicle in finding an empty parking space and parking the vehicle in that space. Some of the information may be displayed or provided to the driver automatically. The vehicle vision system of the present invention thus provides enhanced display of information and images to the driver of the vehicle based on images captured by a plurality of cameras or image sensors of the vehicle and having exterior fields of view, such as forwardly, rearwardly and sidewardly of the vehicle.
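As a minimal illustrative sketch (all class, field, and gesture names here are assumptions, not part of the disclosure), the adjustable display parameters listed above (virtual viewpoint, viewing angle, zoom, pan) can be modeled as a state object that touch gestures mutate:

```python
# Hypothetical model of the user-adjustable virtual view: a pinch gesture
# scales zoom, a one-finger slide pans, and a two-finger rotate changes the
# virtual viewing angle. Gesture names and limits are assumed.
from dataclasses import dataclass


@dataclass
class VirtualView:
    x: float = 0.0        # virtual viewpoint position
    y: float = 10.0
    angle: float = -90.0  # viewing angle in degrees (top-down by default)
    zoom: float = 1.0
    pan: float = 0.0

    def apply_gesture(self, kind: str, amount: float) -> None:
        if kind == "pinch":                 # two-finger pinch -> zoom
            self.zoom = max(0.2, min(5.0, self.zoom * amount))
        elif kind == "swipe_x":             # one-finger slide -> pan
            self.pan += amount
        elif kind == "two_finger_rotate":   # rotate -> viewing angle
            self.angle += amount


view = VirtualView()
view.apply_gesture("pinch", 2.0)    # zoom in by 2x
view.apply_gesture("swipe_x", 15.0)  # pan right
```

The clamp on zoom reflects the practical point that a surround-view renderer only has valid imagery within a bounded range of virtual camera positions.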
  • The system includes an image processor operable to process image data captured by the camera or cameras, such as for detecting objects or other vehicles or pedestrians or the like in the field of view of one or more of the cameras. For example, the image processor may comprise an EyeQ2 or EyeQ3 image processing chip available from Mobileye Vision Technologies Ltd. of Jerusalem, Israel, and may include object detection software (such as the types described in U.S. Pat. Nos. 7,855,755; 7,720,580; and/or 7,038,577, which are hereby incorporated herein by reference in their entireties), and may analyze image data to detect vehicles and/or other objects. Responsive to such image processing, and when an object or other vehicle is detected, the system may generate an alert to the driver of the vehicle and/or may generate an overlay at the displayed image to highlight or enhance display of the detected object or vehicle, in order to enhance the driver's awareness of the detected object or vehicle or hazardous condition during a driving maneuver of the equipped vehicle.
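The alert-and-overlay behavior described above can be sketched as follows. This is a hedged illustration only: the detection record format, class names, and function are assumptions, not the output format of any actual image processing chip.

```python
# Hypothetical sketch: when the image processor reports detected objects,
# raise a driver alert and emit overlay rectangles for the display to
# highlight vehicles, pedestrians, and other detected objects.
from typing import Dict, List, Tuple

BBox = Tuple[int, int, int, int]  # x, y, width, height in display pixels


def build_overlays(detections: List[Dict]) -> Tuple[bool, List[BBox]]:
    """Return (alert_driver, overlay_boxes) for the detections of interest."""
    boxes = [d["bbox"] for d in detections
             if d.get("class") in ("vehicle", "pedestrian", "object")]
    return bool(boxes), boxes


alert, overlays = build_overlays(
    [{"class": "pedestrian", "bbox": (120, 80, 40, 90)},
     {"class": "lane_marking", "bbox": (0, 200, 640, 4)}])
# Only the pedestrian triggers an alert and is highlighted.
```

Filtering by detection class keeps non-hazard features (here, a lane marking) from cluttering the displayed image.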
  • The camera or imager or imaging sensor may comprise any suitable camera or imager or sensor. Optionally, the camera may comprise a “smart camera” that includes the imaging sensor array and associated circuitry and image processing circuitry and electrical connectors and the like as part of a camera module, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012 (Attorney Docket MAG04 FP-1961(PCT)), which is hereby incorporated herein by reference in its entirety.
  • The vehicle may include any type of sensor or sensors, such as imaging sensors or radar sensors or lidar sensors or ultrasonic sensors or the like. The imaging sensor or camera may capture image data for image processing and may comprise any suitable camera or sensing device, such as, for example, an array of a plurality of photosensor elements arranged in at least about 640 columns and 480 rows (at least about a 640×480 imaging array), with a respective lens focusing images onto respective portions of the array. The photosensor array may comprise a plurality of photosensor elements arranged in a photosensor array having rows and columns. The logic and control circuit of the imaging sensor may function in any known manner, and the image processing and algorithmic processing may comprise any suitable means for processing the images and/or image data. For example, the vision system and/or processing and/or camera and/or circuitry may utilize aspects described in U.S. Pat. Nos. 7,005,974; 5,760,962; 5,877,897; 5,796,094; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; 7,859,565; 5,550,677; 5,670,935; 6,636,258; 7,145,519; 7,161,616; 7,230,640; 7,248,283; 7,295,229; 7,301,466; 7,592,928; 7,881,496; 7,720,580; 7,038,577; 6,882,287; 5,929,786 and/or 5,786,772, PCT Application No. PCT/US2010/047256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686 and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US2012/048800, filed Jul. 30, 2012 (Attorney Docket MAG04 FP-1908(PCT)), and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012 (Attorney Docket MAG04 FP-1907(PCT)), and/or PCT Application No. 
PCT/CA2012/000378, filed Apr. 25, 2012 (Attorney Docket MAG04 FP-1819(PCT)), and/or PCT Application No. PCT/US2012/056014, filed Sep. 19, 2012 (Attorney Docket MAG04 FP-1937(PCT)), and/or PCT Application No. PCT/US12/57007, filed Sep. 25, 2012 (Attorney Docket MAG04 FP-1942(PCT)), and/or PCT Application No. PCT/US2012/061548, filed Oct. 24, 2012 (Attorney Docket MAG04 FP-1949(PCT)), and/or PCT Application No. PCT/US2012/062906, filed Nov. 1, 2012 (Attorney Docket MAG04 FP-1953(PCT)), and/or PCT Application No. PCT/US2012/063520, filed Nov. 5, 2012 (Attorney Docket MAG04 FP-1954(PCT)), and/or PCT Application No. PCT/US2012/064980, filed Nov. 14, 2012 (Attorney Docket MAG04 FP-1959(PCT)), and/or PCT Application No. PCT/US2012/066570, filed Nov. 27, 2012 (Attorney Docket MAG04 FP-1960(PCT)), and/or PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012 (Attorney Docket MAG04 FP-1961(PCT)), and/or PCT Application No. PCT/US2012/068331, filed Dec. 7, 2012 (Attorney Docket MAG04 FP-1967(PCT)), and/or PCT Application No. PCT/US2012/071219, filed Dec. 21, 2012 (Attorney Docket MAG04 FP-1982(PCT)), and/or U.S. patent application Ser. No. 13/681,963, filed Nov. 20, 2012 (Attorney Docket MAG04 P-1983); Ser. No. 13/660,306, filed Oct. 25, 2012 (Attorney Docket MAG04 P-1950); Ser. No. 13/653,577, filed Oct. 17, 2012 (Attorney Docket MAG04 P-1948); and/or Ser. No. 13/534,657, filed Jun. 27, 2012 (Attorney Docket MAG04 P-1892), and/or U.S. provisional application Ser. No. 61/736,104, filed Dec. 12, 2012; Ser. No. 61/736,103, filed Dec. 12, 2012; Ser. No. 61/735,314, filed Dec. 10, 2012; Ser. No. 61/734,457, filed Dec. 7, 2012; Ser. No. 61/733,598, filed Dec. 5, 2012; Ser. No. 61/733,093, filed Dec. 4, 2012; Ser. No. 61/727,912, filed Nov. 19, 2012; Ser. No. 61/727,911, filed Nov. 19, 2012; Ser. No. 61/727,910, filed Nov. 19, 2012; Ser. No. 61/718,382, filed Oct. 25, 2012; Ser. No. 61/710,924, filed Oct. 8, 2012; Ser. No. 61/696,416, filed Sep. 4, 2012; Ser. No. 
61/682,995, filed Aug. 14, 2012; Ser. No. 61/682,486, filed Aug. 13, 2012; Ser. No. 61/680,883, filed Aug. 8, 2012; Ser. No. 61/676,405, filed Jul. 27, 2012; Ser. No. 61/666,146, filed Jun. 29, 2012; Ser. No. 61/648,744, filed May 18, 2012; Ser. No. 61/624,507, filed Apr. 16, 2012; Ser. No. 61/616,126, filed Mar. 27, 2012; Ser. No. 61/615,410, filed Mar. 26, 2012; Ser. No. 61/613,651, filed Mar. 21, 2012; Ser. No. 61/607,229, filed Mar. 6, 2012; Ser. No. 61/602,876, filed Feb. 24, 2012; Ser. No. 61/600,205, filed Feb. 17, 2012, which are all hereby incorporated herein by reference in their entireties. The system may communicate with other communication systems via any suitable means, such as by utilizing aspects of the systems described in PCT Application No. PCT/US10/038,477, filed Jun. 14, 2010, and/or U.S. patent application Ser. No. 13/202,005, filed Aug. 17, 2011 (Attorney Docket MAG04 P-1595), which are hereby incorporated herein by reference in their entireties.
  • The imaging device and control and image processor and any associated illumination source, if applicable, may comprise any suitable components, and may utilize aspects of the cameras and vision systems described in U.S. Pat. Nos. 5,550,677; 5,877,897; 6,498,620; 5,670,935; 5,796,094; 6,396,397; 6,806,452; 6,690,268; 7,005,974; 7,123,168; 7,004,606; 6,946,978; 7,038,577; 6,353,392; 6,320,176; 6,313,454; and 6,824,281, and/or International Publication No. WO 2010/099416, published Sep. 2, 2010, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or U.S. patent application Ser. No. 12/508,840, filed Jul. 24, 2009, and published Jan. 28, 2010 as U.S. Pat. Publication No. US 2010-0020170, and/or PCT Application No. PCT/US2012/048110, filed Jul. 25, 2012 (Attorney Docket MAG04 FP-1907(PCT)), and/or U.S. patent application Ser. No. 13/534,657, filed Jun. 27, 2012 (Attorney Docket MAG04 P-1892), which are all hereby incorporated herein by reference in their entireties. The camera or cameras may comprise any suitable cameras or imaging sensors or camera modules, and may utilize aspects of the cameras or sensors described in U.S. patent application Ser. No. 12/091,359, filed Apr. 24, 2008 and published Oct. 1, 2009 as U.S. Publication No. US-2009-0244361, and/or Ser. No. 13/260,400, filed Sep. 26, 2011 (Attorney Docket MAG04 P-1757), and/or U.S. Pat. Nos. 7,965,336 and/or 7,480,149, which are hereby incorporated herein by reference in their entireties. The imaging array sensor may comprise any suitable sensor, and may utilize various imaging sensors or imaging array sensors or cameras or the like, such as a CMOS imaging array sensor, a CCD sensor or other sensors or the like, such as the types described in U.S. Pat. Nos. 
5,550,677; 5,670,935; 5,760,962; 5,715,093; 5,877,897; 6,922,292; 6,757,109; 6,717,610; 6,590,719; 6,201,642; 6,498,620; 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 6,806,452; 6,396,397; 6,822,563; 6,946,978; 7,339,149; 7,038,577; 7,004,606; and/or 7,720,580, and/or U.S. patent application Ser. No. 10/534,632, filed May 11, 2005, now U.S. Pat. No. 7,965,336; and/or PCT Application No. PCT/US2008/076022, filed Sep. 11, 2008 and published Mar. 19, 2009 as International Publication No. WO/2009/036176, and/or PCT Application No. PCT/US2008/078700, filed Oct. 3, 2008 and published Apr. 9, 2009 as International Publication No. WO/2009/046268, which are all hereby incorporated herein by reference in their entireties.
  • The camera module and circuit chip or board and imaging sensor may be implemented and operated in connection with various vehicular vision-based systems, and/or may be operable utilizing the principles of such other vehicular systems, such as a vehicle headlamp control system, such as the type disclosed in U.S. Pat. Nos. 5,796,094; 6,097,023; 6,320,176; 6,559,435; 6,831,261; 7,004,606; 7,339,149; and/or 7,526,103, which are all hereby incorporated herein by reference in their entireties, a rain sensor, such as the types disclosed in commonly assigned U.S. Pat. Nos. 6,353,392; 6,313,454; 6,320,176; and/or 7,480,149, which are hereby incorporated herein by reference in their entireties, a vehicle vision system, such as a forwardly, sidewardly or rearwardly directed vehicle vision system utilizing principles disclosed in U.S. Pat. Nos. 5,550,677; 5,670,935; 5,760,962; 5,877,897; 5,949,331; 6,222,447; 6,302,545; 6,396,397; 6,498,620; 6,523,964; 6,611,202; 6,201,642; 6,690,268; 6,717,610; 6,757,109; 6,802,617; 6,806,452; 6,822,563; 6,891,563; 6,946,978; and/or 7,859,565, which are all hereby incorporated herein by reference in their entireties, a trailer hitching aid or tow check system, such as the type disclosed in U.S. Pat. No. 7,005,974, which is hereby incorporated herein by reference in its entirety, a reverse or sideward imaging system, such as for a lane change assistance system or lane departure warning system or for a blind spot or object detection system, such as imaging or detection systems of the types disclosed in U.S. Pat. Nos. 7,720,580; 7,038,577; 5,929,786 and/or 5,786,772, and/or U.S. patent application Ser. No. 11/239,980, filed Sep. 30, 2005, now U.S. Pat. No. 7,881,496, and/or U.S. provisional applications, Ser. No. 60/628,709, filed Nov. 17, 2004; Ser. No. 60/614,644, filed Sep. 30, 2004; Ser. No. 60/618,686, filed Oct. 14, 2004; Ser. No. 60/638,687, filed Dec. 
23, 2004, which are hereby incorporated herein by reference in their entireties, a video device for internal cabin surveillance and/or video telephone function, such as disclosed in U.S. Pat. Nos. 5,760,962; 5,877,897; 6,690,268; and/or 7,370,983, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties, a traffic sign recognition system, a system for determining a distance to a leading or trailing vehicle or object, such as a system utilizing the principles disclosed in U.S. Pat. Nos. 6,396,397 and/or 7,123,168, which are hereby incorporated herein by reference in their entireties, and/or the like.
  • Optionally, the circuit board or chip may include circuitry for the imaging array sensor and or other electronic accessories or features, such as by utilizing compass-on-a-chip or EC driver-on-a-chip technology and aspects such as described in U.S. Pat. No. 7,255,451 and/or U.S. Pat. No. 7,480,149; and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, and/or Ser. No. 12/578,732, filed Oct. 14, 2009 (Attorney Docket DON01 P-1564), which are hereby incorporated herein by reference in their entireties.
  • Optionally, the vision system may include a display for displaying images captured by one or more of the imaging sensors for viewing by the driver of the vehicle while the driver is normally operating the vehicle. Optionally, for example, the vision system may include a video display device disposed at or in the interior rearview mirror assembly of the vehicle, such as by utilizing aspects of the video mirror display systems described in U.S. Pat. No. 6,690,268 and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011 (Attorney Docket DON01 P-1797), which are hereby incorporated herein by reference in their entireties. The video mirror display may comprise any suitable devices and systems and optionally may utilize aspects of the compass display systems described in U.S. Pat. Nos. 7,370,983; 7,329,013; 7,308,341; 7,289,037; 7,249,860; 7,004,593; 4,546,551; 5,699,044; 4,953,305; 5,576,687; 5,632,092; 5,677,851; 5,708,410; 5,737,226; 5,802,727; 5,878,370; 6,087,953; 6,173,508; 6,222,460; 6,513,252; and/or 6,642,851, and/or European patent application, published Oct. 11, 2000 under Publication No. EP 0 1043566, and/or U.S. patent application Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008, which are all hereby incorporated herein by reference in their entireties. 
Optionally, the video mirror display screen or device may be operable to display images captured by a rearward viewing camera of the vehicle during a reversing maneuver of the vehicle (such as responsive to the vehicle gear actuator being placed in a reverse gear position or the like) to assist the driver in backing up the vehicle, and optionally may be operable to display the compass heading or directional heading character or icon when the vehicle is not undertaking a reversing maneuver, such as when the vehicle is being driven in a forward direction along a road (such as by utilizing aspects of the display system described in PCT Application No. PCT/US2011/056295, filed Oct. 14, 2011 and published Apr. 19, 2012 as International Publication No. WO 2012/051500, which is hereby incorporated herein by reference in its entirety).
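The gear-responsive display selection described above reduces to a simple piece of mode logic. The sketch below is illustrative only; the function name, gear strings, and return values are assumptions:

```python
# Hypothetical sketch of video mirror display selection: show the rearward
# camera feed while the gear actuator is in reverse, otherwise show the
# compass heading character or icon.
def select_display(gear: str, heading: str) -> str:
    """Pick what the video mirror display shows for a given gear position."""
    if gear == "reverse":
        return "rear_camera_video"
    return f"compass:{heading}"


assert select_display("reverse", "NW") == "rear_camera_video"
assert select_display("drive", "NW") == "compass:NW"
```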
  • Optionally, the vision system (utilizing the forward facing camera and a rearward facing camera and other cameras disposed at the vehicle with exterior fields of view) may be part of or may provide a display of a top-down view or birds-eye view system of the vehicle or a surround view at the vehicle, such as by utilizing aspects of the vision systems described in PCT Application No. PCT/US10/25545, filed Feb. 26, 2010 and published on Sep. 2, 2010 as International Publication No. WO 2010/099416, and/or PCT Application No. PCT/US10/47256, filed Aug. 31, 2010 and published Mar. 10, 2011 as International Publication No. WO 2011/028686, and/or PCT Application No. PCT/US2011/062834, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO2012/075250, and/or PCT Application No. PCT/US2012/048993, filed Jul. 31, 2012 (Attorney Docket MAG04 FP-1886(PCT)), and/or PCT Application No. PCT/US11/62755, filed Dec. 1, 2011 and published Jun. 7, 2012 as International Publication No. WO 2012-075250, and/or PCT Application No. PCT/CA2012/000378, filed Apr. 25, 2012 (Attorney Docket MAG04 FP-1819(PCT)), and/or PCT Application No. PCT/US2012/066571, filed Nov. 27, 2012 (Attorney Docket MAG04 FP-1961(PCT)), and/or U.S. patent application Ser. No. 13/333,337, filed Dec. 21, 2011 (Attorney Docket DON01 P-1797), and/or U.S. provisional applications, Ser. No. 61/615,410, filed Mar. 26, 2012, which are hereby incorporated herein by reference in their entireties.
  • Optionally, a video mirror display may be disposed rearward of and behind the reflective element assembly and may comprise a display such as the types disclosed in U.S. Pat. Nos. 5,530,240; 6,329,925; 7,855,755; 7,626,749; 7,581,859; 7,446,650; 7,370,983; 7,338,177; 7,274,501; 7,255,451; 7,195,381; 7,184,190; 5,668,663; 5,724,187 and/or 6,690,268, and/or in U.S. patent application Ser. No. 12/091,525, filed Apr. 25, 2008, now U.S. Pat. No. 7,855,755; Ser. No. 11/226,628, filed Sep. 14, 2005 and published Mar. 23, 2006 as U.S. Publication No. US-2006-0061008; and/or Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are all hereby incorporated herein by reference in their entireties. The display is viewable through the reflective element when the display is activated to display information. The display element may be any type of display element, such as a vacuum fluorescent (VF) display element, a light emitting diode (LED) display element, such as an organic light emitting diode (OLED) or an inorganic light emitting diode, an electroluminescent (EL) display element, a liquid crystal display (LCD) element, a video screen display element or backlit thin film transistor (TFT) display element or the like, and may be operable to display various information (as discrete characters, icons or the like, or in a multi-pixel manner) to the driver of the vehicle, such as passenger side inflatable restraint (PSIR) information, tire pressure status, and/or the like. The mirror assembly and/or display may utilize aspects described in U.S. Pat. Nos. 7,184,190; 7,255,451; 7,446,924 and/or 7,338,177, which are all hereby incorporated herein by reference in their entireties. 
The thicknesses and materials of the coatings on the substrates of the reflective element may be selected to provide a desired color or tint to the mirror reflective element, such as a blue colored reflector, such as is known in the art and such as described in U.S. Pat. Nos. 5,910,854; 6,420,036; and/or 7,274,501, which are hereby incorporated herein by reference in their entireties.
  • Optionally, the display or displays and any associated user inputs may be associated with various accessories or systems, such as, for example, a tire pressure monitoring system or a passenger air bag status or a garage door opening system or a telematics system or any other accessory or system of the mirror assembly or of the vehicle or of an accessory module or console of the vehicle, such as an accessory module or console of the types described in U.S. Pat. Nos. 7,289,037; 6,877,888; 6,824,281; 6,690,268; 6,672,744; 6,386,742; and 6,124,886, and/or U.S. patent application Ser. No. 10/538,724, filed Jun. 13, 2005 and published Mar. 9, 2006 as U.S. Publication No. US-2006-0050018, which are hereby incorporated herein by reference in their entireties.
  • Changes and modifications to the specifically described embodiments may be carried out without departing from the principles of the present invention, which is intended to be limited only by the scope of the appended claims as interpreted according to the principles of patent law.

Claims (21)

1: A vehicle vision system comprising:
a plurality of cameras disposed at a vehicle equipped with said vision system and having respective exterior fields of view, said plurality of cameras comprising a forward viewing camera at a front portion of the equipped vehicle, a rearward viewing camera at a rear portion of the equipped vehicle, a driver side sideward viewing camera at a driver side of the equipped vehicle and a passenger side sideward viewing camera at a passenger side of the equipped vehicle;
a display screen for displaying images derived from image data captured by said cameras in a surround view format where captured image data is merged to provide a single composite display image representative of a view from a virtual viewing position;
a control operable to adjust at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image;
a gesture sensing device operable to sense a gesture made by a driver of the equipped vehicle; and
wherein said control is responsive to sensing by said gesture sensing device of a gesture made by the driver, and wherein, responsive at least in part to a determined gesture made by the driver, said control adjusts said at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image.
2: The vehicle vision system of claim 1, wherein said control is operable to discern a gesture made by a hand of the driver of the equipped vehicle.
3: The vehicle vision system of claim 1, wherein said gesture sensing device is operable to determine a gesture made by the driver at a touch input and wherein said control is operable to discern a type of touch made by a hand of the driver of the equipped vehicle.
4: The vehicle vision system of claim 1, wherein said control is operable to discern a type of gesture made by a hand of the driver of the equipped vehicle, and wherein the type of gesture comprises at least one of (a) a tapping made by at least one finger of the hand of the driver and (b) a sliding motion made by at least one finger of the hand of the driver.
5. (canceled)
6: The vehicle vision system of claim 1, wherein said control is operable to discern a gesture movement made by fingers of a hand of the driver at or proximate a touch screen of said gesture sensing device.
7: The vehicle vision system of claim 6, wherein said control is operable to discern gesture movements made by two or more fingers of the hand of the driver at or proximate said touch screen.
8: The vehicle vision system of claim 1, wherein said control is operable to discern a gesture made by two or more fingers of a hand of the driver.
9: The vehicle vision system of claim 1, wherein said control is operable to adjust the displayed images responsive to detection of one or more fingers of a hand of the driver touching and moving at a touch screen of said gesture sensing device.
10: The vehicle vision system of claim 1, wherein said gesture sensing device comprises at least one of (i) a time of flight sensor, (ii) at least one camera having a field of view interior of the equipped vehicle, (iii) a single camera having a field of view interior of the equipped vehicle and comprising motion disparity detection, and (iv) two cameras having fields of view interior of the equipped vehicle and comprising stereo camera disparity detection.
11: The vehicle vision system of claim 1, wherein said control comprises an image processor and wherein, responsive to image processing of captured image data by said image processor, said vision system is operable to highlight displayed images of one or more hazards detected in the field of view of at least one of said cameras.
12: The vehicle vision system of claim 1, wherein, responsive to a determination of a head movement made by the driver of the equipped vehicle, said control is operable to adjust a virtual viewing location of the displayed image.
13: A vehicle vision system comprising:
a plurality of cameras disposed at a vehicle equipped with said vision system and having respective exterior fields of view, said plurality of cameras comprising a forward viewing camera at a front portion of the equipped vehicle, a rearward viewing camera at a rear portion of the equipped vehicle, a driver side sideward viewing camera at a driver side of the equipped vehicle and a passenger side sideward viewing camera at a passenger side of the equipped vehicle;
a display screen for displaying images derived from image data captured by said cameras in a surround view format where captured image data is merged to provide a single composite display image representative of a view from a virtual viewing position;
a control operable to adjust at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image;
a gesture sensing device operable to sense a gesture made by a driver of the equipped vehicle, wherein said gesture sensing device comprises an interior camera having a field of view interior of the equipped vehicle that encompasses at least a portion of an area typically occupied by a driver of the equipped vehicle; and
wherein said control is responsive to sensing by said gesture sensing device of a gesture made by the driver, and wherein, responsive at least in part to a determined gesture made by the driver, said control adjusts said at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image.
14: The vehicle vision system of claim 13, wherein said control is operable to discern a gesture made by a hand of the driver of the equipped vehicle.
15: The vehicle vision system of claim 13, wherein said control is operable to discern a type of gesture made by a hand of the driver of the equipped vehicle, and wherein the type of gesture comprises at least one of (a) a tapping made by at least one finger of a hand of the driver and (b) a sliding motion made by at least one finger of a hand of the driver.
16: The vehicle vision system of claim 13, wherein said control is operable to discern a gesture made by two or more fingers of a hand of the driver.
17: The vehicle vision system of claim 13, wherein said control comprises an image processor and wherein, responsive to image processing of captured image data by said image processor, said vision system is operable to highlight displayed images of one or more hazards detected in the field of view of at least one of said cameras.
18: A vehicle vision system comprising:
a plurality of cameras disposed at a vehicle equipped with said vision system and having respective exterior fields of view, said plurality of cameras comprising at least a rearward viewing camera at a rear portion of the equipped vehicle, a driver side sideward viewing camera at a driver side of the equipped vehicle and a passenger side sideward viewing camera at a passenger side of the equipped vehicle;
a display screen for displaying images derived from image data captured by said cameras in a surround view format where captured image data is merged to provide a single composite display image representative of a view from a virtual viewing position;
a control operable to adjust at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image;
a gesture sensing device operable to sense a gesture made by a hand of a driver of the equipped vehicle, wherein said gesture sensing device comprises an interior camera having a field of view interior of the equipped vehicle that encompasses at least a portion of an area typically occupied by a driver of the equipped vehicle;
wherein said control is responsive to sensing of a driver's hand gesture by said gesture sensing device, and wherein, responsive at least in part to a determined hand gesture by the driver, said control adjusts said at least one of (i) a virtual viewing location of the displayed composite image, (ii) a virtual viewing angle of the displayed composite image, (iii) a degree of zoom of the displayed composite image and (iv) a degree of panning of the displayed composite image; and
wherein, responsive to a determination of a head movement made by the driver of the equipped vehicle, said control is operable to adjust a virtual viewing location of the displayed image.
19: The vehicle vision system of claim 18, wherein said control is operable to discern a driver's hand gesture made by two or more fingers of the hand of the driver.
20: The vehicle vision system of claim 18, wherein, responsive to image processing of captured image data, said vision system is operable to highlight displayed images of one or more hazards detected in the field of view of at least one of said cameras.
21: The vehicle vision system of claim 18, wherein, responsive to a determination of a head movement made by the driver of the equipped vehicle that is indicative of the driver looking upward or downward, said control is operable to vertically adjust the virtual viewing location of the displayed image.
US 14/372,524, filed Jan. 18, 2013 (priority date Jan. 20, 2012), published as US 2015/0022664 A1: Vehicle vision system with positionable virtual viewpoint (Abandoned)

Priority Applications (5)

Application Number | Priority Date | Filing Date | Title
US 61/588,833 | 2012-01-20 | 2012-01-20 |
US 61/602,878 | 2012-02-24 | 2012-02-24 |
US 61/678,375 | 2012-08-01 | 2012-08-01 |
US 14/372,524 | 2012-01-20 | 2013-01-18 | Vehicle vision system with positionable virtual viewpoint
PCT/US2013/022119 | 2012-01-20 | 2013-01-18 | Vehicle vision system with free positional virtual panoramic view

Publications (1)

Publication Number | Publication Date
US 2015/0022664 A1 | 2015-01-22

Family ID: 48799680

Family Applications (2)

Application Number | Filing Date | Status | Title
US 14/372,524 | 2013-01-18 | Abandoned | Vehicle vision system with positionable virtual viewpoint
US 16/392,036 | 2019-04-23 | Pending | Vehicular vision system with split display

Country Status (2)

US: US 2015/0022664 A1 (2 applications)
WO: WO 2013/109869 A1


Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9743002B2 (en) 2012-11-19 2017-08-22 Magna Electronics Inc. Vehicle vision system with enhanced display functions
US9068390B2 (en) 2013-01-21 2015-06-30 Magna Electronics Inc. Vehicle hatch control system
US9688200B2 (en) 2013-03-04 2017-06-27 Magna Electronics Inc. Calibration system and method for multi-camera vision system
US9875019B2 (en) 2013-12-26 2018-01-23 Visteon Global Technologies, Inc. Indicating a transition from gesture based inputs to touch surfaces
US10525883B2 (en) 2014-06-13 2020-01-07 Magna Electronics Inc. Vehicle vision system with panoramic view
US10286855B2 (en) 2015-03-23 2019-05-14 Magna Electronics Inc. Vehicle vision system with video compression
CN105898460A (en) * 2015-12-10 2016-08-24 乐视网信息技术(北京)股份有限公司 Method and device for adjusting panorama video play visual angle of intelligent TV
DE102016206308B4 (en) 2016-04-14 2018-06-07 Volkswagen Aktiengesellschaft Camera device of a motor vehicle
WO2017191558A1 (en) 2016-05-02 2017-11-09 Magna Mirrors Of America, Inc. Caseless rearview mirror assembly


Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5670935A (en) * 1993-02-26 1997-09-23 Donnelly Corporation Rearview vision system for vehicle including panoramic view
EP1263626A2 (en) * 2000-03-02 2002-12-11 Donnelly Corporation Video mirror systems incorporating an accessory module
US7295904B2 (en) * 2004-08-31 2007-11-13 International Business Machines Corporation Touch gesture based interface for motor vehicle
US7952564B2 (en) * 2005-02-17 2011-05-31 Hurst G Samuel Multiple-touch sensor
US8154529B2 (en) * 2009-05-14 2012-04-10 Atmel Corporation Two-dimensional touch sensors
US8907929B2 (en) * 2010-06-29 2014-12-09 Qualcomm Incorporated Touchless sensing and gesture recognition using continuous wave ultrasound signals

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090273563A1 (en) * 1999-11-08 2009-11-05 Pryor Timothy R Programmable tactile touch screen displays and man-machine interfaces for improved vehicle instrumentation and telematics
US20110090149A1 (en) * 2003-09-15 2011-04-21 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
WO2010000001A1 (en) * 2008-07-02 2010-01-07 Karen Gasparyan Bar clamping and tightening tool
WO2010144900A1 (en) * 2009-06-12 2010-12-16 Magna Electronics Inc. Scalable integrated electronic control unit for vehicle
US20120218412A1 (en) * 2009-06-12 2012-08-30 Magna Electronics Inc. Scalable integrated electronic control unit for vehicle
US20110012830A1 (en) * 2009-07-20 2011-01-20 J Touch Corporation Stereo image interaction system
US20130038732A1 (en) * 2011-08-09 2013-02-14 Continental Automotive Systems, Inc. Field of view matching video display system
US20140285666A1 (en) * 2011-11-01 2014-09-25 Magna Mirrors Of America, Inc. Vision system with door mounted exterior mirror and display

Cited By (81)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120262580A1 (en) * 2011-04-14 2012-10-18 Klaus Huebner Vehicle Surround View System
US9679359B2 (en) * 2011-04-14 2017-06-13 Harman Becker Automotive Systems Gmbh Vehicle surround view system
US9834153B2 (en) 2011-04-25 2017-12-05 Magna Electronics Inc. Method and system for dynamically calibrating vehicular cameras
US10129518B2 (en) 2011-12-09 2018-11-13 Magna Electronics Inc. Vehicle vision system with customized display
US9762880B2 (en) 2011-12-09 2017-09-12 Magna Electronics Inc. Vehicle vision system with customized display
US10542244B2 (en) 2011-12-09 2020-01-21 Magna Electronics Inc. Vehicle vision system with customized display
US9547796B2 (en) * 2012-03-30 2017-01-17 Panasonic Intellectual Property Management Co., Ltd. Parking assistance device and parking assistance method
US20150078624A1 (en) * 2012-03-30 2015-03-19 Panasonic Corporation Parking assistance device and parking assistance method
US20150375680A1 (en) * 2013-02-28 2015-12-31 Aisin Seiki Kabushiki Kaisha Vehicle control apparatus and program technical field
US10322672B2 (en) * 2013-02-28 2019-06-18 Aisin Seiki Kabushiki Kaisha Vehicle control apparatus and program with rotational control of captured image data
US10218940B2 (en) 2013-04-18 2019-02-26 Magna Electronics Inc. Vision system for vehicle with adjustable camera
US9674490B2 (en) 2013-04-18 2017-06-06 Magna Electronics Inc. Vision system for vehicle with adjustable cameras
US9815409B2 (en) 2013-05-09 2017-11-14 Magna Mirrors Of America, Inc. Rearview vision system for vehicle
US10384610B2 (en) 2013-05-09 2019-08-20 Magna Mirrors Of America, Inc. Rearview vision system for vehicle
US9738224B2 (en) 2013-05-10 2017-08-22 Magna Electronics Inc. Vehicle vision system
US10286843B2 (en) 2013-05-10 2019-05-14 Magna Electronics Inc. Vehicle vision system
US9280202B2 (en) 2013-05-10 2016-03-08 Magna Electronics Inc. Vehicle vision system
US9800794B2 (en) 2013-06-03 2017-10-24 Magna Electronics Inc. Vehicle vision system with enhanced low light capabilities
US10063786B2 (en) 2013-06-03 2018-08-28 Magna Electronics Inc. Vehicle vision system with enhanced low light capabilities
US9912876B1 (en) 2013-06-03 2018-03-06 Magna Electronics Inc. Vehicle vision system with enhanced low light capabilities
US20150185858A1 (en) * 2013-12-26 2015-07-02 Wes A. Nagara System and method of plane field activation for a gesture-based control system
US10017114B2 (en) 2014-02-19 2018-07-10 Magna Electronics Inc. Vehicle vision system with display
US10315573B2 (en) 2014-02-19 2019-06-11 Magna Electronics Inc. Method for displaying information to vehicle driver
US10126928B2 (en) 2014-03-31 2018-11-13 Magna Electronics Inc. Vehicle human machine interface with auto-customization
US10084995B2 (en) 2014-04-10 2018-09-25 Sensormatic Electronics, LLC Systems and methods for an automated cloud-based video surveillance system
US9407881B2 (en) 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated cloud-based analytics for surveillance systems with unmanned aerial devices
US9686514B2 (en) 2014-04-10 2017-06-20 Kip Smrt P1 Lp Systems and methods for an automated cloud-based video surveillance system
US9420238B2 (en) 2014-04-10 2016-08-16 Smartvue Corporation Systems and methods for automated cloud-based 3-dimensional (3D) analytics for surveillance systems
US9426428B2 (en) 2014-04-10 2016-08-23 Smartvue Corporation Systems and methods for automated cloud-based analytics and 3-dimensional (3D) display for surveillance systems in retail stores
US9438865B2 (en) 2014-04-10 2016-09-06 Smartvue Corporation Systems and methods for automated cloud-based analytics for security surveillance systems with mobile input capture devices
US9405979B2 (en) 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated cloud-based analytics and 3-dimensional (3D) display for surveillance systems
US9407880B2 (en) 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated 3-dimensional (3D) cloud-based analytics for security surveillance in operation areas
US9407879B2 (en) 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated cloud-based analytics and 3-dimensional (3D) playback for surveillance systems
US9403277B2 (en) 2014-04-10 2016-08-02 Smartvue Corporation Systems and methods for automated cloud-based analytics for security and/or surveillance
US10057546B2 (en) 2014-04-10 2018-08-21 Sensormatic Electronics, LLC Systems and methods for automated cloud-based analytics for security and/or surveillance
US10217003B2 (en) 2014-04-10 2019-02-26 Sensormatic Electronics, LLC Systems and methods for automated analytics for security surveillance in operation areas
US9448641B2 (en) * 2014-05-21 2016-09-20 Denso Corporation Gesture input apparatus
US20150338922A1 (en) * 2014-05-21 2015-11-26 Denso Corporation Gesture input apparatus
US10328932B2 (en) 2014-06-02 2019-06-25 Magna Electronics Inc. Parking assist system with annotated map generation
US20160090039A1 (en) * 2014-09-25 2016-03-31 Nissan North America, Inc. Method and system of communicating vehicle information
US9734412B2 (en) * 2014-09-25 2017-08-15 Nissan North America, Inc. Method and system of communicating vehicle information
US20160132106A1 (en) * 2014-11-06 2016-05-12 Hyundai Motor Company Menu selection apparatus using gaze tracking
US9696800B2 (en) * 2014-11-06 2017-07-04 Hyundai Motor Company Menu selection apparatus using gaze tracking
US9405120B2 (en) 2014-11-19 2016-08-02 Magna Electronics Solutions Gmbh Head-up display and vehicle using the same
US10354155B2 (en) 2014-11-21 2019-07-16 Magna Electronics Inc. Vehicle vision system with multiple cameras
US10127463B2 (en) 2014-11-21 2018-11-13 Magna Electronics Inc. Vehicle vision system with multiple cameras
US20160182863A1 (en) * 2014-12-19 2016-06-23 Aisin Seiki Kabushiki Kaisha Vehicle circumference monitoring apparatus
US9973734B2 (en) * 2014-12-19 2018-05-15 Aisin Seiki Kabushiki Kaisha Vehicle circumference monitoring apparatus
US20160209647A1 (en) * 2015-01-19 2016-07-21 Magna Electronics Inc. Vehicle vision system with light field monitor
US10247941B2 (en) * 2015-01-19 2019-04-02 Magna Electronics Inc. Vehicle vision system with light field monitor
US20170286763A1 (en) * 2015-02-04 2017-10-05 Hitachi Construction Machinery Co., Ltd. Vehicle exterior moving object detection system
US9990543B2 (en) * 2015-02-04 2018-06-05 Hitachi Construction Machinery Co., Ltd. Vehicle exterior moving object detection system
US10525787B2 (en) 2015-03-16 2020-01-07 Thunder Power New Energy Vehicle Development Company Limited Electric vehicle thermal management system with series and parallel structure
US10124648B2 (en) 2015-03-16 2018-11-13 Thunder Power New Energy Vehicle Development Company Limited Vehicle operating system using motion capture
US9855817B2 (en) * 2015-03-16 2018-01-02 Thunder Power New Energy Vehicle Development Company Limited Vehicle operating system using motion capture
US9954260B2 (en) 2015-03-16 2018-04-24 Thunder Power New Energy Vehicle Development Company Limited Battery system with heat exchange device
US10173687B2 (en) 2015-03-16 2019-01-08 Wellen Sham Method for recognizing vehicle driver and determining whether driver can start vehicle
US10479327B2 (en) 2015-03-16 2019-11-19 Thunder Power New Energy Vehicle Development Company Limited Vehicle camera cleaning system
US10281989B2 (en) 2015-03-16 2019-05-07 Thunder Power New Energy Vehicle Development Company Limited Vehicle operating system using motion capture
US10093284B2 (en) 2015-03-16 2018-10-09 Thunder Power New Energy Vehicle Development Company Limited Vehicle camera cleaning system
US9547373B2 (en) * 2015-03-16 2017-01-17 Thunder Power Hong Kong Ltd. Vehicle operating system using motion capture
US10214206B2 (en) 2015-07-13 2019-02-26 Magna Electronics Inc. Parking assist system for vehicle
US10078789B2 (en) 2015-07-17 2018-09-18 Magna Electronics Inc. Vehicle parking assist system with vision-based parking space detection
US20170024624A1 (en) * 2015-07-22 2017-01-26 Robert Bosch Gmbh Method and device for predicting a line of vision of a vehicle occupant
US10074023B2 (en) * 2015-07-22 2018-09-11 Robert Bosch Gmbh Method and device for predicting a line of vision of a vehicle occupant
WO2017033113A1 (en) 2015-08-21 2017-03-02 Acerta Pharma B.V. Therapeutic combinations of a mek inhibitor and a btk inhibitor
US10324297B2 (en) 2015-11-30 2019-06-18 Magna Electronics Inc. Heads up display system for vehicle
US20170217367A1 (en) * 2016-02-01 2017-08-03 Magna Electronics Inc. Vehicle adaptive lighting system
US10160437B2 (en) 2016-02-29 2018-12-25 Magna Electronics Inc. Vehicle control system with reverse assist
US10401621B2 (en) 2016-04-19 2019-09-03 Magna Electronics Inc. Display unit for vehicle head-up display system
WO2017182841A1 (en) * 2016-04-20 2017-10-26 Continental Automotive Gmbh Facial movement and gesture sensing side-view mirror
US10166924B2 (en) 2016-05-11 2019-01-01 Magna Mirrors Of America, Inc. Vehicle vision system with display by a mirror
US9827901B1 (en) * 2016-05-26 2017-11-28 Dura Operating, Llc System and method for dynamically projecting information from a motor vehicle
US10587818B2 (en) 2016-08-02 2020-03-10 Magna Electronics Inc. Vehicle vision system with enhanced camera brightness control
US10462354B2 (en) * 2016-12-09 2019-10-29 Magna Electronics Inc. Vehicle control system utilizing multi-camera module
US10313584B2 (en) * 2017-01-04 2019-06-04 Texas Instruments Incorporated Rear-stitched view panorama for rear-view visualization
US10547815B2 (en) * 2017-03-02 2020-01-28 JVC Kenwood Corporation Bird's-eye view video generation device, bird's-eye view video generation method and non-transitory storage medium
US10589676B2 (en) 2017-06-01 2020-03-17 Magna Electronics Inc. Vehicle display system with user input display
US10567649B2 (en) 2017-07-31 2020-02-18 Facebook, Inc. Parallax viewer system for 3D content
GB2570629A (en) * 2017-11-17 2019-08-07 Jaguar Land Rover Ltd Vehicle controller
US10594985B2 (en) 2018-08-20 2020-03-17 Sensormatic Electronics, LLC Systems and methods for automated cloud-based analytics for security and/or surveillance

Also Published As

Publication number Publication date
US20190253672A1 (en) 2019-08-15
WO2013109869A1 (en) 2013-07-25

Similar Documents

Publication Publication Date Title
US10542244B2 (en) Vehicle vision system with customized display
US9919650B2 (en) Vehicle peripheral observation device
US10029621B2 (en) Rear view camera system using rear view mirror location
US9862319B2 (en) Vehicle peripheral observation device using cameras and an emphasized frame
US10315573B2 (en) Method for displaying information to vehicle driver
US9656690B2 (en) System and method for using gestures in autonomous parking
KR20150137799A (en) Mobile terminal and method for controlling the same
US9646572B2 (en) Image processing apparatus
JP6156486B2 (en) Perimeter monitoring apparatus and program
CA2730379C (en) Vehicle user interface unit for a vehicle electronic device
JP2016503741A (en) Input device for automobile
CN103870802B (en) System and method using the user interface in paddy operation vehicle is referred to
US9001153B2 (en) System and apparatus for augmented reality display and controls
US8593417B2 (en) Operation apparatus for in-vehicle electronic device and method for controlling the same
US9008904B2 (en) Graphical vehicle command system for autonomous vehicles on full windshield head-up display
US9244527B2 (en) System, components and methodologies for gaze dependent gesture input control
JP4770931B2 (en) Display device
KR101677648B1 (en) Vehicle control apparatus and method thereof
EP2024198B1 (en) Vehicle display apparatus
DE60124539T2 (en) Safety device based on a head-up indicator for motor vehicles
KR101416378B1 (en) A display apparatus capable of moving image and the method thereof
US9902323B2 (en) Periphery surveillance apparatus and program
JP5905691B2 (en) Vehicle operation input device
JP5227164B2 (en) Information display method for transportation and combination instrument for automobile
EP1998996B1 (en) Interactive operating device and method for operating the interactive operating device

Legal Events

Date Code Title Description
STCV Information on status: appeal procedure

Free format text: APPEAL BRIEF (OR SUPPLEMENTAL BRIEF) ENTERED AND FORWARDED TO EXAMINER

STCV Information on status: appeal procedure

Free format text: BOARD OF APPEALS DECISION RENDERED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION