EP3334621A1 - System, method and apparatus for vehicle and computer readable medium - Google Patents

System, method and apparatus for vehicle and computer readable medium

Info

Publication number
EP3334621A1
Authority
EP
European Patent Office
Prior art keywords
image
operator
segmental
vehicle
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP15900656.8A
Other languages
German (de)
English (en)
Other versions
EP3334621A4 (fr)
Inventor
Carsten Isert
Biyun ZHOU
Tao Xu
Lu Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bayerische Motoren Werke AG
Original Assignee
Bayerische Motoren Werke AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bayerische Motoren Werke AG
Publication of EP3334621A1
Publication of EP3334621A4


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 - Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 - Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/26 - Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60R - VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/202 - Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot

Definitions

  • the present disclosure relates in general to the field of projection technologies used for a vehicle, and more particularly, to a system, method, and apparatus for a vehicle, and a computer readable medium, for projecting an image within the vehicle.
  • the present disclosure aims to provide a new and improved system, apparatus, and method for a vehicle for projecting images of blind spots onto the inner wall of the vehicle.
  • a system for a vehicle characterized in comprising: a projecting device, a first image capturing device configured to capture an image comprising an image of a blind spot, a second image capturing device configured to capture an image of the head of an operator of the vehicle, and a controller.
  • the controller is configured to: determine a posture of the operator’s view based on the captured image of the head of the operator; determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator; and extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle.
  • the controller may be further configured to: determine an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determine the segmental image to be extracted in response to the occurrence of the change.
  • the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
  • the system may further comprise an inertial measurement device configured to measure inertial data of the second image capturing device, and wherein the controller may be further configured to perform motion compensation on the image captured by the second image capturing device according to the inertial data.
  • determining a posture of the operator’s view may comprise: calculating values of distances among a plurality of feature points within the captured image of the head of the operator, determining the position of view point according to the calculated values and based on a position of the second image capturing device, and determining the direction of line of sight according to the calculated values.
  • determining a segmental image to be extracted may comprise: creating a virtual screen surrounding the vehicle, projecting the image captured by the first image capturing device onto the virtual screen, and determining a portion of the image projected onto the virtual screen, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to the blocking of the internal component, as the segmental image to be extracted.
  • the controller may be further configured to further extract relevant information from the extracted segmental image, perform the transformation on the extracted segmental image, and cause the projecting device to project the relevant information onto the internal component of the vehicle, wherein the relevant information relates to driving safety.
  • the internal component may be an A pillar.
  • the first image capturing device may be provided on an outside surface of the A pillar.
  • the projecting device and the second image capturing device may be provided at a central console of the vehicle.
  • a computer-implemented method for a vehicle characterized in comprising: receiving, from a first image capturing device, an image comprising an image of a blind spot, receiving, from a second image capturing device, an image of the head of an operator of the vehicle, determining a posture of the operator’s view based on the captured image of the head of the operator, determining a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and extracting the segmental image, performing a transformation on the extracted segmental image, and causing a projecting device to project the transformed segmental image onto an internal component of the vehicle.
  • the method may further comprise: determining an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determining the segmental image to be extracted in response to the occurrence of the change.
  • the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
  • the method may further comprise: receiving, from an inertial measurement device, inertial data of the second image capturing device, and performing motion compensation on the image captured by the second image capturing device according to the inertial data.
  • an apparatus for a vehicle characterized in comprising: a memory configured to store a series of computer executable instructions; and a processor configured to execute said series of computer executable instructions, wherein said series of computer executable instructions, when executed by the processor, cause the processor to perform operations of the steps of the above mentioned method.
  • a non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform the steps of the above mentioned method is provided.
  • Fig. 1 illustrates a block diagram of a system for a vehicle in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 2 illustrates a block diagram of an apparatus for a vehicle (i.e., the controller as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 3 is a diagram illustrating an example of the system in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 4 illustrates a flow chart showing a process of determining an image to be projected on an A pillar according to the operator’s view in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 5 illustrates a general hardware environment wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
  • the term “vehicle” used throughout the specification refers to a car, an airplane, a helicopter, a ship, or the like.
  • the term “A or B” used throughout the specification refers to “A and B” and “A or B” rather than meaning that A and B are exclusive, unless otherwise specified.
  • the system 100 comprises one or more first camera (s) 101 (corresponding to the first image capturing device) that may capture an image comprising an image of a blind spot, one or more second camera (s) 102 (corresponding to the second image capturing device) that may capture an image of the head of an operator of the vehicle, an inertial measurement unit (IMU) 103 (corresponding to the inertial measurement device) that may measure inertial data of the camera (s) 102, a controller 104 that may control an overall operation of the system 100, and a projector 105 (corresponding to the projecting device) that may project an image onto an internal component of the vehicle.
  • IMU inertial measurement unit
  • the first camera (s) 101 may be any kind of on-vehicle cameras that are known to those skilled in the art.
  • the number of the first camera (s) 101 may be one, two, or more.
  • the existing cameras provided on the vehicle may be used as the first camera (s) 101.
  • the blind spot means the outside surroundings that cannot be seen from the operator’s view.
  • the blind spot means the outside surroundings that are blocked by an internal component of the vehicle and hence cannot be seen from the operator’s view.
  • the outside surroundings that are blocked by an A pillar, such as a left side A pillar, of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot.
  • the outside surroundings that are blocked by the rear part of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot.
  • the blind spot is not limited to these examples.
  • the camera (s) 101 are provided outside the vehicle so as to at least capture images of the outside surroundings that cannot be seen from the operator’s view. Note that the camera (s) 101 will capture an image covering a wide range of vision, in which the image of the blind spot is comprised.
  • a camera may be provided on the outside surface of the left side A pillar, and the view of this camera may substantially cover all possible views of the operator while driving.
  • the image data of the image captured by the camera (s) 101 may be transmitted to the controller via wire (s) or wirelessly.
  • the posture of the operator’s view may be defined by a position, such as a three-dimensional (3D) position, of view point and a direction of line of sight.
  • the position of view point may be the positions of the two eyes of the operator. But for simplification, throughout the present specification, the position of view point refers to the position of the midpoint of the line segment connecting the positions of the two eyes of the operator.
  • the direction of line of sight means a direction in which the operator looks.
  • the direction of the operator’s line of sight may reflect whether the operator is looking forward, looking up, looking down, looking left, looking right, or the like. More specifically, for example, the direction of the operator’s line of sight may also reflect whether the operator is looking 30 degrees left from forward or 60 degrees left from forward.
  • the horizontal visible angle of the human’s two eyes is about 120 degrees
  • the vertical visible angle of the human’s two eyes is about 60 degrees.
  • a cone, preferably a rectangular pyramid with four equal lateral edges, may be used to virtually represent the operator’s view.
  • the apex of the cone may represent the position of view point.
  • a straight line which passes through the apex of the cone and is perpendicular to the bottom surface thereof, may represent the direction of the operator’s line of sight.
  • the apex angle of a triangle formed by making the cone intersect with a horizontal plane that comprises the apex of the cone may represent the horizontal visible angle
  • the apex angle of a triangle formed by making the cone intersect with a vertical plane that comprises the apex of the cone may represent the vertical visible angle.
  • the view of a camera or the view of a projector may be defined in a similar way, and hence may also be virtually represented with a cone similarly.
  • the view of the camera (s) 102 is represented with a rectangular pyramid as shown in Fig. 3. Note that other volume shapes may also be used to simulate a human’s view.
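  • purely as an illustration of the cone representation described above (and not as part of the claimed subject matter), the following Python sketch builds such a rectangular-pyramid view frustum from a view point and a line-of-sight direction and tests whether a 3D point falls inside it; the coordinate frame, the default 120-degree and 60-degree angles, and the function and variable names are assumptions made only for this example.
```python
import numpy as np

# Minimal sketch of the rectangular-pyramid "cone" used to represent a view.
# Assumed frame: x forward, z up; all names are illustrative only.

def view_frustum(view_point, sight_dir, h_fov_deg=120.0, v_fov_deg=60.0):
    """Return a predicate that tests whether a 3D point lies inside the pyramid
    whose apex is the view point and whose axis is the line of sight."""
    apex = np.asarray(view_point, dtype=float)
    axis = np.asarray(sight_dir, dtype=float)
    axis = axis / np.linalg.norm(axis)

    # Orthonormal basis (axis, right, up) around the line of sight.
    up_hint = np.array([0.0, 0.0, 1.0])
    right = np.cross(axis, up_hint)
    if np.linalg.norm(right) < 1e-6:          # looking straight up or down
        right = np.cross(axis, np.array([0.0, 1.0, 0.0]))
    right = right / np.linalg.norm(right)
    up = np.cross(right, axis)

    tan_h = np.tan(np.radians(h_fov_deg) / 2.0)
    tan_v = np.tan(np.radians(v_fov_deg) / 2.0)

    def contains(point):
        d = np.asarray(point, dtype=float) - apex
        depth = d @ axis                      # distance along the line of sight
        if depth <= 0.0:                      # behind the view point
            return False
        # Inside the pyramid when the lateral offsets stay inside both wedges.
        return (abs(d @ right) <= depth * tan_h and
                abs(d @ up) <= depth * tan_v)

    return contains

# Example: a point 10 m ahead and 2 m to the side of an assumed view point.
inside = view_frustum((0.0, 0.0, 1.2), (1.0, 0.0, 0.0))((10.0, -2.0, 1.0))
```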
  • the second camera (s) 102 may be any kind of on-vehicle cameras that are known to those skilled in the art.
  • the number of the second camera (s) 102 may be one, two, or more.
  • the existing cameras provided on the vehicle may be used as the second camera (s) 102.
  • the second camera (s) 102 may be provided inside the vehicle so as to capture the image of the operator’s head.
  • a camera is provided at the central console of the vehicle as the second camera 102 so as to capture the image of the operator’s head when the operator is driving.
  • the captured image of the operator’s head may be used to determine a posture of the operator’s view. In order to determine the posture of the operator’s view more accurately, a pair of cameras may be used as the camera (s) 102. The details of such determination will be described hereinafter.
  • the image data of the image captured by the camera (s) 102 may be transmitted to the controller via wire (s) or wirelessly.
  • the camera (s) are used as the first and second image capturing devices.
  • One or more ultrasonic radar (s) , sonic radar (s) , and laser radar (s) may also be used as the first image capturing device or the second image capturing device. Any device that can capture an image and generate image data may be used as the first image capturing device or the second image capturing device.
  • the IMU 103 may measure inertial data of the second camera (s) 102.
  • the IMU may measure the acceleration and the angular velocity in six degrees of freedom.
  • the measured inertial data may be used to perform motion compensation on the image captured by the second camera (s) 102. After the compensation, the definition of the image captured by the second camera (s) 102 may be significantly improved.
  • the measured inertial data is transmitted to the controller 104 via wire (s) or wirelessly.
  • an IMU is used here as the inertial measurement device, but the present disclosure is not limited to this.
  • a combination of an accelerometer and a gyroscope may be used as the inertial measurement device. Any device that may obtain the inertial data may be used as the inertial measurement device.
  • the IMU 103 may be provided anywhere on the vehicle, and is preferably provided at the central console of the vehicle.
  • the controller 104 receives data from various components of the system 100, i.e., the first camera (s) 101, the second camera (s) 102, the IMU 103, and the projecting device 105. The controller 104 also transmits control commands to the above-mentioned components.
  • a connection line with a bi-directional arrow between various components represents a bi-directional communication line, which may be tangible wires or may be achieved wirelessly, such as via radio, RF, or the like.
  • the specific controlling operations performed by the controller 104 will be described in details with reference to Figs. 2-4 later.
  • the controller 104 may be a processor, a microprocessor or the like.
  • the controller 104 may be provided on the vehicle, for example, at the central console of the vehicle. Alternatively, the controller 104 may be provided remotely and may be accessed via various networks or the like.
  • the projector 105 may be a Cathode Ray Tube (CRT) projector, a Liquid Crystal Display (LCD) projector, a Digital Light Processor (DLP) projector or the like. Note that the projector 105 is used here as the projecting device, but the present disclosure is not limited to this. Other devices that can project an image onto a certain internal component of the vehicle, such as a combination of light source (s) and a series of lenses and mirrors, may also be used as the projecting device. A reflective material such as a retro-reflector may or may not be applied on an internal component of the vehicle onto which the image of the blind spot is to be projected. For example, the internal component is the left side A pillar.
  • CRT Cathode Ray Tube
  • LCD Liquid Crystal Display
  • DLP Digital Light Processor
  • the reflective material is not applied, and the image of the blind spot is directly projected onto the inner surface of the left side A pillar of the vehicle.
  • the projection can be adapted not only in the granularity, but also in the intensity of the projection.
  • the projecting device 105 may be also provided at the central console of the vehicle, in order to project the image of the blind spot onto, for example, the left side A pillar of the vehicle.
  • the types, numbers, and locations of the first camera (s) 101, the second camera (s) 102, the IMU 103, and the projector 105 are described in detail. But as can be easily understood by those skilled in the art, the types, numbers, and locations of the above components are not limited to the illustrated embodiment, and other types, numbers, and locations may be also used according to the actual requirements.
  • Fig. 2 illustrates a block diagram of an apparatus 200 for a vehicle (i.e., the controller 104 as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure.
  • the blocks of the apparatus 200 may be implemented by hardware, software, firmware, or any combination thereof to carry out the principles of the present disclosure. It is understood by those skilled in the art that the blocks described in Fig. 2 may be combined or separated into sub-blocks to implement the principles of the present disclosure as described above. Therefore, the description herein may support any possible combination or separation or further definition of the blocks described herein.
  • the apparatus 200 for a vehicle may include a posture of view determination unit 201, a segmental image determination unit 202, a posture of view compensation unit 203 (optional) , a vibration compensation unit 204 (optional) , an extraction and transformation unit 205, and a relevant information extraction unit 206 (optional) .
  • the apparatus 200 may further comprise a reception unit and a transmission unit for receiving and transmitting information, instructions, or the like, respectively.
  • the posture of view determination unit 201 may be configured to receive the image of the head of the operator of the vehicle captured by the second camera (s) 102 (hereinafter being referred to as the Camera B), determine a posture of the operator’s view based on the received image, and output the data representing the posture of the operator’s view to the segmental image determination unit 202.
  • the data representing the posture of the operator’s view are, for example, the 3D positions of the two eyes of the operator and the direction of the line of sight of the operator. In one embodiment of the present disclosure, such data may be calculated based on image processing of the received image of the head of the operator.
  • values of distances among a plurality of feature points within the captured image of the head of the operator may be calculated, the position of view point may be determined according to the calculated values and based on a position of the Camera B, and the direction of line of sight may be determined according to the calculated values.
  • the distances among the plurality of feature points within the captured image of the head of the operator may be the distance between the two eyes, the distance between the two ears, the distance between one eye and the tip of the nose, and/or the like.
  • the position of view point may be determined with use of the known 3D position of the Camera B and a known knowledge base wherein statistics as to the distances among the feature points are stored.
  • the position of view point may be determined with use of the known 3D position of the Camera B and a known binocular vision algorithm or stereoscopic vision algorithm. Further, based on the values of such distances, the orientation of the face of the operator may be calculated, and then the direction of the line of sight of the operator, which may be consistent with the orientation of the face, may be determined accordingly. Any known image processing algorithms may be used for calculating the posture of the operator’s view. Alternatively, an eye tracker may be used to acquire the posture of the operator’s view. In such a case, the posture of view determination unit 201 may directly receive the data representing the posture of the operator’s view from the eye tracker.
  • the posture of the operator’s view may be looked up in a pre-stored table based on, for example, the above mentioned calculated distances.
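  • as a concrete, simplified illustration of the feature-point approach described above, the sketch below estimates the position of view point and a coarse direction of line of sight from the pixel positions of the two eyes in the Camera B image using a pinhole-camera model; the camera intrinsics, the 63 mm average interpupillary distance standing in for the knowledge base, and the helper names are all assumed for the example and are not taken from the disclosure.
```python
import numpy as np

# Minimal sketch, not taken from the disclosure. Assumed: intrinsics below,
# a population-average eye distance, and Camera B axes aligned with the
# vehicle frame (no rotation between the two).

FX = FY = 800.0          # assumed focal lengths of Camera B, in pixels
CX, CY = 320.0, 240.0    # assumed principal point, in pixels
MEAN_IPD_M = 0.063       # average distance between the eyes, in metres

def estimate_view_posture(left_eye_px, right_eye_px, cam_b_position):
    left = np.asarray(left_eye_px, dtype=float)
    right = np.asarray(right_eye_px, dtype=float)

    # Pinhole model: the apparent eye distance in pixels shrinks with depth,
    # so the known average eye distance gives the head's distance to Camera B.
    pixel_dist = np.linalg.norm(right - left)
    depth = FX * MEAN_IPD_M / pixel_dist

    # Back-project the midpoint between the eyes (the "position of view point"
    # as defined in the specification) into Camera B's coordinate frame.
    mid = (left + right) / 2.0
    view_point_cam = np.array([(mid[0] - CX) * depth / FX,
                               (mid[1] - CY) * depth / FY,
                               depth])
    view_point = np.asarray(cam_b_position, dtype=float) + view_point_cam

    # Very coarse line-of-sight estimate: the face is assumed to point back
    # toward Camera B, so the sight direction is the reversed offset vector.
    direction = -view_point_cam / np.linalg.norm(view_point_cam)
    return view_point, direction

# Example call with assumed pixel coordinates and Camera B at the console.
vp, sight = estimate_view_posture((300.0, 230.0), (352.0, 232.0), (0.0, 0.0, 0.0))
```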
  • the segmental image determination unit 202 may be configured to receive the data representing the posture of the operator’s view from the posture of view determination unit 201, determine a segmental image to be extracted from the image captured by the first camera (s) 101 (hereinafter being referred to as the Camera A) based on the posture of the operator’s view, and output the data representing the segmental image to be extracted to the extraction and transformation unit 205.
  • the operations of the segmental image determination unit 202 will be described in details.
  • the following descriptions will be given under the assumption that an image of a blind spot is projected onto the left side A pillar of the vehicle.
  • the Camera A may be provided on the outside surface of the left side A pillar.
  • the height and the direction of the Camera A’s view may be arranged to cover all possible views of the operator while driving.
  • the operators’ views are different from person to person, but the statistics of the operators’ views will be considered to decide the height and the direction of the Camera A’s view.
  • Fig. 3 illustrates a case wherein the image of the blind spot is projected onto the left side A pillar of the vehicle. Fig. 3 will be described in details later.
  • although the left side A pillar is considered here, it can be understood that the image may be projected onto the right side A pillar of the vehicle, both of the A pillars, one or both of the B pillars, one or both of the C pillars, the rear part of the vehicle, or the like in accordance with the same principles.
  • a virtual spherical surface (corresponding to the virtual screen) is created to surround the vehicle.
  • the center of the sphere may be located at the position of view point of the operator, and the radius of the sphere may be arbitrary as long as the sphere at least surrounds the left front part of the vehicle (in the case wherein the left side A pillar is discussed as mentioned above).
  • the image comprising the image of the blind spot captured by the Camera A is virtually projected onto the virtual spherical surface from the 3D position of the Camera A.
  • a portion of the image projected onto the virtual spherical surface, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to the blocking of an internal component such as the left side A pillar, is determined as the segmental image to be extracted.
  • a virtual cone simulating the posture of the operator’s view is projected onto the virtual spherical surface.
  • assuming that the view point is a light source, the light rays will extend within the cone and illuminate a portion of the image projected on the virtual spherical surface.
  • the left side A Pillar may be substantially virtualized to be a rectangle.
  • the shape of the virtualized rectangle will change according to the operator’s view. Further, a portion of the image projected on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of such a rectangle may be determined as the segmental image to be extracted.
  • the process of the above determination comprises: creating a virtual screen, projecting the image captured by the Camera A onto the virtual screen, determining the segmental image on the virtual screen that cannot be seen due to the blocking of the A pillar at the operator’s view point position and with the direction of the operator’s view.
  • the data representing the segmental image to be extracted may be the coordinate values defining the boundary of the above portion that the assumed light rays cannot reach on the virtual spherical surface.
  • the process of the segmental image determination unit 202 described here is merely illustrative, and the present disclosure is not limited thereto.
  • other processes that may determine the image blocked by the left side A pillar may also be used.
  • instead of a virtual spherical surface, a virtual cylindrical surface or a virtual plane may also be used.
  • the difference between the Camera A’s view and the operator’s view may be compensated for with the above virtual screen creation method. In this regard, it is preferable to set the radius of the virtual sphere to be large, since the larger the virtual sphere is, the smaller the above-mentioned difference is.
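  • to make the virtual-screen occlusion test concrete, the following sketch checks, for a sample point of the image projected onto the virtual spherical surface, whether the ray from the operator’s view point to that sample is blocked by the rectangle standing in for the left side A pillar; the pillar corner coordinates, the function names, and the ray-rectangle test itself are illustrative assumptions rather than the disclosure’s exact procedure.
```python
import numpy as np

# Minimal sketch, not the disclosure's exact procedure: a sample of the Camera A
# image projected onto the spherical screen belongs to the segmental image if
# the ray from the view point to that sample is blocked by the virtualized
# A-pillar rectangle. Corner coordinates in the example are assumed values.

def blocked_by_pillar(view_point, screen_point, pillar_corners):
    """pillar_corners: four 3D corners (p0, p1, p2, p3) of the virtualized
    rectangle, with p1 - p0 and p3 - p0 spanning its two edges."""
    o = np.asarray(view_point, dtype=float)
    d = np.asarray(screen_point, dtype=float) - o        # ray toward the screen
    p0, p1, _, p3 = [np.asarray(c, dtype=float) for c in pillar_corners]
    u, v = p1 - p0, p3 - p0
    n = np.cross(u, v)                                    # rectangle normal

    denom = d @ n
    if abs(denom) < 1e-9:                                 # ray parallel to pillar
        return False
    t = ((p0 - o) @ n) / denom
    if not 0.0 < t < 1.0:        # hit is behind the eye or beyond the screen
        return False
    hit = o + t * d - p0
    s, r = (hit @ u) / (u @ u), (hit @ v) / (v @ v)       # rectangle coordinates
    return 0.0 <= s <= 1.0 and 0.0 <= r <= 1.0

def segmental_mask(view_point, screen_points, pillar_corners):
    """Boolean mask over the projected samples: True = part of the segmental image."""
    return np.array([blocked_by_pillar(view_point, p, pillar_corners)
                     for p in screen_points])

# Example: a pillar rectangle roughly ahead-left of an assumed view point.
pillar = [(1.0, 0.6, 0.8), (1.2, 1.0, 0.8), (1.2, 1.0, 1.6), (1.0, 0.6, 1.6)]
mask = segmental_mask((0.0, 0.0, 1.2), [(5.0, 3.0, 1.2), (5.0, -3.0, 1.2)], pillar)
```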
  • the extraction and transformation unit 205 is configured to receive the data representing the segmental image to be extracted, extract the segmental image from the image projected on the virtual spherical surface, perform a transformation on the extracted segmental image, and transmit commands to the projector 105 so as to cause it to project the transformed segmental image onto the left side A pillar.
  • the transformation may be a scale transformation and/or a rotation transformation and/or a translation transformation or the like.
  • the transformation may compensate for the difference between the position of the projector and the position of the left side A pillar. It is understood that this position difference is pre-determined and hence is known. With use of this transformation, the extracted segmental image may be appropriately projected on the left side A pillar.
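  • a minimal sketch of such a transformation is given below: it composes scale, rotation, and translation into one homogeneous 2D matrix and applies it to the extracted segmental image with OpenCV; the concrete scale factor, angle, and pixel offsets would in practice follow from the known projector-to-A-pillar geometry and are only assumed example values here.
```python
import numpy as np
import cv2  # used only to apply the composed matrix to the image

# Minimal sketch of the scale / rotation / translation transformation applied
# to the extracted segmental image before projection. Numeric values below are
# assumed examples, not values taken from the disclosure.

def compose_transform(scale, angle_deg, tx, ty):
    a = np.radians(angle_deg)
    s = np.array([[scale, 0.0, 0.0], [0.0, scale, 0.0], [0.0, 0.0, 1.0]])
    r = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0, 0.0, 1.0]])
    t = np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])
    return t @ r @ s          # scale first, then rotate, then translate

def transform_segment(segment, scale=0.8, angle_deg=-12.0, tx=40.0, ty=-25.0):
    m = compose_transform(scale, angle_deg, tx, ty)
    h, w = segment.shape[:2]
    # warpAffine takes the top two rows of the 3x3 homogeneous matrix.
    return cv2.warpAffine(segment, m[:2].astype(np.float32), (w, h))

# Example with a dummy image standing in for the extracted segmental image.
warped = transform_segment(np.zeros((480, 640, 3), dtype=np.uint8))
```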
  • the posture of view compensation unit 203 is an optional component of the controller 104.
  • the posture of view compensation unit 203 is configured to receive the captured image of head of operator of the vehicle from the Camera B, and determine whether at least one of the position of view point and the direction of line of sight of the operator’s view changes, and if yes, transmit commands to the segmental image determination unit 202 to cause it to re-determine the segmental image to be extracted.
  • the direction of line of sight of the operator may vary during the driving.
  • the scenes of the outside surroundings will change accordingly.
  • the scenes blocked by, for example, the left side A pillar will also change. That is, the segmental image to be extracted will change accordingly.
  • the heights of the eyes of the operators vary from person to person.
  • the segmental image to be extracted will also change according to the heights of the eyes.
  • the image finally projected on the A pillar will match the outside surroundings well from the operator’s view. That is, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
  • the vibration compensation unit 204 also is an optional component of the controller 104.
  • the vibration compensation unit 204 is configured to receive the inertial data of the Camera B from the IMU 103, receive the image data from the Camera B, perform motion compensation on the image captured by the Camera B according to the inertial data, and transmit the image that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203.
  • common motion compensation algorithms, such as the Range-Doppler algorithm, an autofocus algorithm, or the like, may be used here to perform the motion compensation. Other motion compensation algorithms may also be used.
  • the posture of the operator’s view may be determined with higher accuracy by the posture of view determination unit 201. Then the segmental image to be extracted may be determined with higher accuracy, too. As a result, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
  • the frame rate of the camera generally is not high.
  • the Camera B can output 5 frames per second.
  • the definition of the image of the Camera B is not high, either.
  • the image of the Camera B may be compensated in accordance with the inertial data and the definition of the image of the Camera B may be improved.
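  • the disclosure names Range-Doppler and autofocus algorithms for this step; as a much simpler illustrative stand-in, the sketch below uses the IMU’s angular-velocity reading over one frame interval to estimate the pixel shift caused by vibration and shifts the Camera B image back by that amount; the focal length, the frame interval, and the small-angle model are assumptions made for the example.
```python
import numpy as np

# Minimal sketch of a vibration compensation step; it is a stand-in for, not an
# implementation of, the Range-Doppler or autofocus algorithms mentioned above.
# Assumed: focal length, frame interval, and a small-angle rotation model.

FX_PX = 800.0      # assumed focal length of Camera B, in pixels
FRAME_DT = 0.2     # assumed frame interval in seconds (about 5 frames per second)

def compensate_vibration(frame, gyro_rad_s):
    """frame: HxW or HxWx3 image array; gyro_rad_s: (wx, wy, wz) from the IMU."""
    wx, wy, _ = gyro_rad_s
    # Small-angle model: rotation about the x/y axes appears as a vertical or
    # horizontal image shift of roughly focal_length * angle pixels.
    dy = int(round(FX_PX * wx * FRAME_DT))
    dx = int(round(FX_PX * wy * FRAME_DT))
    # Integer-pixel compensation; np.roll wraps around at the borders, which is
    # acceptable for a sketch but would be cropped in a real implementation.
    return np.roll(np.roll(frame, -dy, axis=0), -dx, axis=1)

# Example: compensate a dummy frame for a small pitch/yaw vibration.
steady = compensate_vibration(np.zeros((480, 640), dtype=np.uint8), (0.01, -0.02, 0.0))
```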
  • the relevant information extraction unit 206 also is an optional component of the controller 104. After the segmental image is extracted, the relevant information therein may be further extracted.
  • the relevant information refers to information related to driving safety. For example, the relevant information may relate to an image of an adjacent pedestrian such as a kid standing on a skateboard, an image of an adjacent moving vehicle such as an approaching bicycle, and the like.
  • the recognition and extraction of the relevant information may be achieved with known image recognition and extraction technologies.
  • the projection of the projector 105 may be initiated only when the relevant information is recognized. Thus the power consumption of the whole system may be reduced, and information overload for the operator may be avoided. Further, by projecting only the relevant information rather than the whole segmental image, the power consumption may also be reduced.
  • the relevant information extraction unit 206 may be configured to, after extracting the segmental image, further extract relevant information from the extracted segmental image, and transmit the extracted relevant information to the extraction and transformation unit 205.
  • the extracted relevant information, after being transformed, is projected onto the left side A pillar.
  • the relevant information extraction unit 206 may further be configured to generate an alerting message and transmit it together with the extracted relevant information to the extraction and transformation unit 205, such that the alerting message may be projected together with the relevant information.
  • the alerting message may be, for example, a red exclamation mark, a flickering circular mark, some characters, or the like to be projected in association with the relevant information.
  • the alerting message may be an animation (s) .
  • the alerting message may be projected in association with the extracted segmental image after being transformed.
  • the projection will be user friendly. Further, safety may be enhanced.
  • an alerting voice such as “Pay Attention Please” may also be vocalized at the same time.
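  • the disclosure leaves the recognition technology open; one possible off-the-shelf choice, shown below purely as an illustration, is OpenCV’s built-in HOG pedestrian detector applied to the extracted segmental image, with the projection triggered only when a detection survives an assumed confidence threshold.
```python
import cv2
import numpy as np

# Illustrative sketch of one possible "known image recognition technology":
# OpenCV's stock HOG pedestrian detector run on the extracted segmental image.
# The disclosure does not prescribe a specific detector; the confidence
# threshold and window stride are assumed values.

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def extract_relevant_info(segment_bgr, score_threshold=0.5):
    """Return sub-images of detected pedestrians; an empty list means the
    projector does not need to be switched on for this frame."""
    boxes, weights = hog.detectMultiScale(segment_bgr, winStride=(8, 8))
    relevant = []
    for (x, y, w, h), score in zip(boxes, np.ravel(weights)):
        if score > score_threshold:
            relevant.append(segment_bgr[y:y + h, x:x + w])
    return relevant

# Example call on a dummy image standing in for the extracted segmental image.
detections = extract_relevant_info(np.zeros((256, 256, 3), dtype=np.uint8))
```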
  • the Camera A 101 is provided on the upper portion of the outside surface of the left A pillar, the Camera B 102, the IMU 103, the controller 104, the projector 105 are integrated into a TransA-PillarBox (Transparent A-Pillar Box) and this TransA-PillarBox is provided at the central console of the vehicle.
  • the Camera A 101 may capture an image of outside surroundings, which comprises the outside surroundings that could not be seen by the operator due to the blocking of the left side A pillar.
  • the Camera B 102 may capture the image of the head of the operator. Thus the posture of the operator’s view may be determined.
  • the image to be projected on the A pillar may be determined according to the posture of the operator’s view and further may be re-determined according to change of the posture of the operator’s view.
  • the image to be projected on the A pillar corresponds to the actual image of the outside surroundings which could not be seen by the operator due to the blocking of the left side A pillar.
  • the relevant information may be projected, or the relevant information along with the alerting message may be projected, or the relevant information along with the alerting message may be projected while the alerting voice is vocalized at the same time.
  • the system of the present disclosure can be an add-on system, which means it can be removed from the car easily. It does not need any redesign work for the vehicle’s internal components used for projection, which makes it cost-effective. Further, one advantage of the projection method is that the projection does not have to be very bright and is not required to have a high resolution, and therefore it is possible to save cost.
  • Fig. 4 illustrates a flow chart showing a method 400 for a vehicle in accordance with an exemplary embodiment of the present disclosure.
  • the steps of the method 400 presented below are intended to be illustrative. In some embodiments, the method may be accomplished with one or more additional steps not described, and/or without one or more of the steps discussed. Additionally, the order in which the steps of the method are illustrated in Fig. 4 and described below is not intended to be limiting. In some embodiments, the method may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
  • the one or more processing devices may include one or more modules executing some or all of the steps of method in response to instructions stored electronically on an electronic storage medium.
  • the one or more processing modules may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the steps of method.
  • the method 400 is described under the case as shown in Fig. 3.
  • the method 400 starts from step 401, at which the apparatus 200, for example, provided in a vehicle starts up and begins to receive image data from the Camera A. Then, the received image data is transmitted to the segmental image determination unit 202.
  • the apparatus 200 may start up upon being requested by the operator, or may start up automatically when the vehicle is moving.
  • the apparatus 200 may be powered by a storage cell within the vehicle.
  • the apparatus 200 receives image data from the Camera B and transmits the received image data to the posture of view determination unit 201 or to the vibration compensation unit 204.
  • the vibration compensation unit 204 receives inertial data obtained by the IMU 103, receives the image data of the Camera B, and performs motion compensation on the image captured by the Camera B.
  • the vibration compensation unit 204 further transmits the image data that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203.
  • the posture of view determination unit 201 receives the image data of the Camera B, determines the posture of the operator’s view based on the image data received from the Camera B, and transmits the data about the posture of the operator’s view to the segmental image determination unit 202.
  • the posture of view determination unit 201 receives the image data of the operator’s head from the vibration compensation unit 204.
  • the posture of view determination unit 201 determines the position of view point and a direction of line of sight of the operator based on the image captured by the Camera B. The process of the determination is discussed previously and is not repeated here again.
  • the segmental image determination unit 202 receives the image data captured by the Camera A and the data about the posture of the operator’s view, and determines the segmental image to be extracted from the image captured by the Camera A based on the posture of the operator’s view.
  • the process of the determination is as follows. First, a virtual spherical surface is created. Second, the image captured by the Camera A is projected on the virtual spherical surface. Third, a virtual cone representing the posture of the operator’s view is made to intersect with the virtual spherical surface, so as to determine the image that should be seen by the operator. Fourth, the left side A Pillar is virtualized to be a rectangle.
  • the shape of the virtualized rectangle will change according to the operator’s view. Assume that the view point is a light source, and the light rays extend within the above cone. A portion on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of the rectangle representing the A pillar may be determined as the segmental image to be extracted.
  • the extraction and transformation unit 205 receives the data regarding the segmental image to be extracted, extracts the segmental image determined at the step 405, performs a transformation on the extracted segmental image, and causes the projector to project the transformed image onto the A pillar.
  • at step 406, the method may further optionally comprise extracting relevant information from the extracted segmental image, transforming the relevant information, and causing the projector to project the transformed relevant information.
  • it may further comprise superimposing an alerting message on the transformed relevant information.
  • the alerting message may be superimposed onto the extracted segmental image.
  • the posture of view compensation unit 203 receives the image data captured by the Camera B, or the image after being compensated, and determines whether the operator’s view changes. If so, the method returns to step 405 to re-determine the segmental image to be extracted. Otherwise, the method ends.
  • Fig. 5 illustrates a general hardware environment 500 wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
  • the computing device 500 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer, or any combination thereof.
  • the aforementioned apparatus 200 may be wholly or at least partially implemented by the computing device 500 or a similar device or system.
  • the computing device 500 may comprise elements that are connected with or in communication with a bus 502, possibly via one or more interfaces.
  • the computing device 500 may comprise the bus 502, and one or more processors 504, one or more input devices 506 and one or more output devices 508.
  • the one or more processors 504 may be any kinds of processors, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips) .
  • the input devices 506 may be any kinds of devices that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control.
  • the output devices 508 may be any kinds of devices that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator and/or a printer.
  • the computing device 500 may also comprise or be connected with non-transitory storage devices 510 which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory) , a RAM (Random Access Memory) , a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code.
  • the non-transitory storage devices 510 may be detachable from an interface.
  • the non-transitory storage devices 510 may have data/instructions/code for implementing the methods and steps which are described above.
  • the computing device 500 may also comprise a communication device 512.
  • the communication device 512 may be any kind of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but is not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth TM device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities and/or the like.
  • when the computing device 500 is used as an on-vehicle device, it may also be connected to external devices, for example, a GPS receiver, and sensors for sensing different environmental data, such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device 500 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle.
  • the computing device 500 may also be connected with other facilities, such as an engine system, a wiper, an anti-lock braking system or the like.
  • non-transitory storage device 510 may have map information and software elements so that the processor 504 may perform route guidance processing.
  • the output device 508 may comprise a display for displaying the map, the location mark of the vehicle and also images indicating the travelling situation of the vehicle.
  • the output device 508 may also comprise a speaker or an interface with an earphone for audio guidance.
  • the bus 502 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 502 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Enhanced ISA
  • VESA Video Electronics Standards Association
  • PCI Peripheral Component Interconnect
  • CAN Controller Area Network
  • the computing device 500 may also comprise a working memory 514, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 504, and may comprise but is not limited to a random access memory and/or a read-only memory device.
  • Software elements may be located in the working memory 514, including but not limited to an operating system 516, one or more application programs 518, drivers and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 518, and the units of the aforementioned apparatus 200 may be implemented by the processor 504 reading and executing the instructions of the one or more application programs 518. More specifically, the posture of view determination unit 201 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform the step S404.
  • segmental image determination unit 202 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform the step S405.
  • Other units of the aforementioned apparatus 200 may also, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform one or more of the aforementioned respective steps.
  • the executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device (s) 510 described above, and may be read into the working memory 514 possibly with compilation and/or installation.
  • the executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location.
  • the present disclosure may be implemented by software with necessary hardware, or by hardware, firmware and the like. Based on such understanding, the embodiments of the present disclosure may be embodied in part in a software form.
  • the computer software may be stored in a readable storage medium such as a floppy disk, a hard disk, an optical disk or a flash memory of the computer.
  • the computer software comprises a series of instructions to make the computer (e.g., a personal computer, a service station or a network terminal) execute the method or a part thereof according to respective embodiments of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Studio Devices (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The invention relates to a system, method and apparatus for a vehicle for projecting an image within the vehicle. The system for a vehicle comprises: a projecting device (105), a first image capturing device (101) configured to capture an image comprising an image of a blind spot, a second image capturing device (102) configured to capture an image of the head of an operator of the vehicle, and a controller (104). The controller is configured to: determine a posture of the operator’s view based on the captured image of the head of the operator; determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator; and extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle. The system can display images of blind spots on a screen within the vehicle.
EP15900656.8A 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium Withdrawn EP3334621A4 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/086439 WO2017024458A1 (fr) 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium

Publications (2)

Publication Number Publication Date
EP3334621A1 (fr) 2018-06-20
EP3334621A4 EP3334621A4 (fr) 2019-03-13

Family

ID=57982868

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15900656.8A Withdrawn EP3334621A4 (fr) 2015-08-10 2015-08-10 Système, procédé et appareil pour véhicule et support lisible par ordinateur

Country Status (3)

Country Link
EP (1) EP3334621A4 (fr)
CN (1) CN107848460A (fr)
WO (1) WO2017024458A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349472B (zh) * 2018-04-02 2021-08-06 北京五一视界数字孪生科技股份有限公司 Method for docking a virtual steering wheel with a real steering wheel in a virtual driving application
CN108340836A (zh) * 2018-04-13 2018-07-31 华域视觉科技(上海)有限公司 Automobile A-pillar display system
CN111731187A (zh) * 2020-06-19 2020-10-02 杭州视为科技有限公司 Automobile A-pillar blind zone image display system and method

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006027563A1 (fr) * 2004-09-06 2006-03-16 Mch Technology Limited Visibility enhancement system for a vehicle
KR100594121B1 (ko) * 2004-12-17 2006-06-28 삼성전자주식회사 Shake correction device for a camera lens assembly
JP4683272B2 (ja) 2005-04-14 2011-05-18 アイシン・エィ・ダブリュ株式会社 Vehicle exterior display method and display device
JP4810953B2 (ja) * 2005-10-07 2011-11-09 日産自動車株式会社 Blind spot image display device for a vehicle
JP4497133B2 (ja) * 2006-07-12 2010-07-07 アイシン・エィ・ダブリュ株式会社 Driving support method and driving support device
JP4412365B2 (ja) * 2007-03-26 2010-02-10 アイシン・エィ・ダブリュ株式会社 Driving support method and driving support device
EP3594853A3 (fr) * 2007-05-03 2020-04-08 Sony Deutschland GmbH Method for detecting moving objects in a blind spot of a vehicle and blind spot detection device
JP4412380B2 (ja) * 2007-10-02 2010-02-10 アイシン・エィ・ダブリュ株式会社 Driving support device, driving support method, and computer program
US20130096820A1 (en) * 2011-10-14 2013-04-18 Continental Automotive Systems, Inc. Virtual display system for a vehicle
JP2015027852A (ja) * 2013-07-30 2015-02-12 トヨタ自動車株式会社 Driving support device

Also Published As

Publication number Publication date
EP3334621A4 (fr) 2019-03-13
WO2017024458A1 (fr) 2017-02-16
CN107848460A (zh) 2018-03-27

Similar Documents

Publication Publication Date Title
JP6524422B2 (ja) 表示制御装置、表示装置、表示制御プログラム、及び表示制御方法
US8536995B2 (en) Information display apparatus and information display method
US9563981B2 (en) Information processing apparatus, information processing method, and program
JP5962594B2 (ja) 車載表示装置およびプログラム
US11181737B2 (en) Head-up display device for displaying display items having movement attribute or fixed attribute, display control method, and control program
US10843686B2 (en) Augmented reality (AR) visualization of advanced driver-assistance system
US20180056861A1 (en) Vehicle-mounted augmented reality systems, methods, and devices
JP6695049B2 (ja) 表示装置及び表示制御方法
JPWO2016067574A1 (ja) 表示制御装置及び表示制御プログラム
EP4339938A1 (fr) Procédé et appareil de projection, et véhicule et ar-hud
US20130289875A1 (en) Navigation apparatus
US10922976B2 (en) Display control device configured to control projection device, display control method for controlling projection device, and vehicle
TWI522257B (zh) 車用安全系統及其運作方法
KR102052405B1 (ko) 차량 및 사용자 동작 인식에 따른 화면 제어 장치 및 그 운영방법
US11626028B2 (en) System and method for providing vehicle function guidance and virtual test-driving experience based on augmented reality content
CN111033607A (zh) 显示系统、信息呈现系统、显示系统的控制方法、程序以及移动体
WO2017024458A1 (fr) Système, procédé et appareil pour véhicule et support lisible par ordinateur
JP6186905B2 (ja) 車載表示装置およびプログラム
JP7127565B2 (ja) 表示制御装置及び表示制御プログラム
KR101611167B1 (ko) 운전시야 지원장치
US10876853B2 (en) Information presentation device, information presentation method, and storage medium
JP2018206210A (ja) 衝突事故抑制システム及び衝突事故抑制方法
JP7143728B2 (ja) 重畳画像表示装置及びコンピュータプログラム
JP7338632B2 (ja) 表示装置
WO2023145852A1 (fr) Dispositif de commande d'affichage, système d'affichage et procédé de commande d'affichage

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20171205

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

RIN1 Information on inventor provided before grant (corrected)

Inventor name: ZHOU, BIYUN

Inventor name: CHEN, LU

Inventor name: ISERT, CARSTEN

Inventor name: XU, TAO

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20190212

RIC1 Information provided on ipc code assigned before grant

Ipc: H04N 5/262 20060101ALI20190206BHEP

Ipc: B60R 1/00 20060101AFI20190206BHEP

17Q First examination report despatched

Effective date: 20190925

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20200206