WO2017024458A1 - System, method and apparatus for vehicle and computer readable medium - Google Patents

System, method and apparatus for vehicle and computer readable medium

Info

Publication number
WO2017024458A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
operator
segmental
vehicle
view
Application number
PCT/CN2015/086439
Other languages
French (fr)
Inventor
Carsten Isert
Biyun ZHOU
Tao Xu
Lu Chen
Original Assignee
Bayerische Motoren Werke Aktiengesellschaft
Application filed by Bayerische Motoren Werke Aktiengesellschaft filed Critical Bayerische Motoren Werke Aktiengesellschaft
Priority to CN201580081763.6A priority Critical patent/CN107848460A/en
Priority to PCT/CN2015/086439 priority patent/WO2017024458A1/en
Priority to EP15900656.8A priority patent/EP3334621A4/en
Publication of WO2017024458A1 publication Critical patent/WO2017024458A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/20Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • B60R1/22Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle
    • B60R1/23Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view
    • B60R1/26Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles for viewing an area outside the vehicle, e.g. the exterior of the vehicle with a predetermined field of view to the rear of the vehicle
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60RVEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R2300/00Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/20Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used
    • B60R2300/202Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle characterised by the type of display used displaying a blind spot scene on the vehicle part responsible for the blind spot

Definitions

  • the present disclosure relates in general to the field of projection technologies used for a vehicle, and more particularly, to a system, method, and apparatus for a vehicle and a computer readable medium for projecting an image within the vehicle.
  • the present disclosure aims to provide a new and improved system, apparatus, and method for a vehicle for projecting images of blind spots onto the inner wall of the vehicle.
  • a system for a vehicle characterized in comprising: a projecting device, a first image capturing device configured to capture an image comprising an image of a blind spot, a second image capturing device configured to capture an image of the head of an operator of the vehicle, and a controller.
  • the controller is configured to: determine a posture of the operator’s view based on the captured image of the head of the operator; determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator; and extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle.
  • the controller may be further configured to: determine an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determine the segmental image to be extracted in response to the occurrence of the change.
  • the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
  • the system may further comprise an inertial measurement device configured to measure inertial data of the second image capturing device, and wherein the controller may be further configured to perform motion compensation on the image captured by the second image capturing device according to the inertial data.
  • determining a posture of the operator’s view may comprise: calculating values of distances among a plurality of feature points within the captured image of the head of the operator, determining the position of view point according to the calculated values and based on a position of the second image capturing device, and determining the direction of line of sight according to the calculated values.
  • determining a segmental image to be extracted may comprise: creating a virtual screen surrounding the vehicle, projecting the image captured by the first image capturing device onto the virtual screen, and, determining a portion of the image projected onto the virtual screen, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to blocking of the internal component, as the segmental image to be extracted.
  • the controller may be further configured to further extract relevant information from the extracted segmental image, perform the transformation on the extracted segmental image, and cause the projecting device to project the relevant information onto the internal component of the vehicle, wherein the relevant information relates to driving safety.
  • the internal component may be an A pillar, the first image capturing device may be provided on an outside surface of the A pillar, and the projecting device and the second image capturing device may be provided at a central console of the vehicle.
  • a computer-implemented method for a vehicle characterized in comprising: receiving, from a first image capturing device, an image comprising an image of a blind spot, receiving, from a second image capturing device, an image of the head of an operator of the vehicle, determining a posture of the operator’s view based on the captured image of the head of the operator, determining a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and extracting the segmental image, performing a transformation on the extracted segmental image, and causing a projecting device to project the transformed segmental image onto an internal component of the vehicle.
  • the method may further comprise: determining an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determining the segmental image to be extracted in response to the occurrence of the change.
  • the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
  • the method may further comprise: receiving, from an inertial measurement device, inertial data of the second image capturing device, and performing motion compensation on the image captured by the second image capturing device according to the inertial data.
  • an apparatus for a vehicle characterized in comprising: a memory configured to store a series of computer executable instructions; and a processor configured to execute said series of computer executable instructions, wherein said series of computer executable instructions, when executed by the processor, cause the processor to perform operations of the steps of the above mentioned method.
  • a non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform the steps of the above mentioned method is provided.
  • Fig. 1 illustrates a block diagram of a system for a vehicle in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 2 illustrates a block diagram of an apparatus for a vehicle (i.e., the controller as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 3 is a diagram illustrating an example of the system in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 4 illustrates a flow chart showing a process of determining an image to be projected on an A pillar according to the operator’s view in accordance with an exemplary embodiment of the present disclosure.
  • Fig. 5 illustrates a general hardware environment wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
  • the term “vehicle” used throughout the specification refers to a car, an airplane, a helicopter, a ship, or the like.
  • the term “A or B” used throughout the specification refers to “A and B” and “A or B” rather than meaning that A and B are exclusive, unless otherwise specified.
  • the system 100 comprises one or more first camera(s) 101 (corresponding to the first image capturing device) that may capture an image comprising an image of a blind spot, one or more second camera(s) 102 (corresponding to the second image capturing device) that may capture an image of the head of an operator of the vehicle, an inertial measurement unit (IMU) 103 (corresponding to the inertial measurement device) that may measure inertial data of the camera(s) 102, a controller 104 that may control an overall operation of the system 100, and a projector 105 (corresponding to the projecting device) that may project an image onto an internal component of the vehicle.
  • IMU inertial measurement unit
  • the first camera (s) 101 may be any kind of on-vehicle cameras that are known to those skilled in the art.
  • the number of the first camera (s) 101 may be one, two, or more.
  • the existing cameras provided on the vehicle may be used as the first camera (s) 101.
  • the blind spot means the outside surroundings that cannot be seen from the operator’s view.
  • the blind spot means the outside surroundings that are blocked by an internal component of the vehicle and hence cannot be seen from the operator’s view.
  • the outside surroundings that are blocked by an A pillar, such as a left side A pillar, of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot.
  • the outside surroundings that are blocked by the rear part of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot.
  • the blind spot is not limited to these examples.
  • the camera(s) 101 are provided outside the vehicle so as to at least capture images of the outside surroundings that cannot be seen from the operator’s view. Note that the camera(s) 101 capture an image of a wide range of vision, in which the image of the blind spot is comprised.
  • a camera may be provided on the outside surface of the left side A pillar, and the view of this camera may substantially cover all possible views of the operator during driving.
  • the image data of the image captured by the camera (s) 101 may be transmitted to the controller via wire (s) or wirelessly.
  • the posture of the operator’s view may be defined by a position, such as a three-dimensional (3D) position, of view point and a direction of line of sight.
  • the position of view point may be the positions of the two eyes of the operator. But for simplification, throughout the present specification, the position of view point refers to the position of the midpoint of the line segment connecting the positions of the two eyes of the operator.
  • the direction of line of sight means a direction in which the operator looks.
  • the direction of the operator’s line of sight may reflect whether the operator is looking forward, looking up, looking down, looking left, looking right, or the like. More specifically, for example, the direction of the operator’s line of sight may also reflect whether the operator is looking 30 degrees left from forward or 60 degrees left from forward.
  • the horizontal visible angle of a human’s two eyes is about 120 degrees.
  • the vertical visible angle of a human’s two eyes is about 60 degrees.
  • a cone, preferably a rectangular pyramid with four equal lateral edges, may be used to virtually represent the operator’s view.
  • the apex of the cone may represent the position of view point.
  • a straight line that passes through the apex of the cone and is perpendicular to its bottom surface may represent the direction of the operator’s line of sight.
  • the apex angle of the triangle formed by intersecting the cone with a horizontal plane that contains the apex of the cone may represent the horizontal visible angle, and the apex angle of the triangle formed by intersecting the cone with a vertical plane that contains the apex of the cone may represent the vertical visible angle.
  • the view of a camera or the view of a projector may be defined in a similar way, and hence may also be virtually represented with a cone similarly.
  • the view of the camera(s) 102 is represented with a rectangular pyramid as shown in Fig. 3. Note that other volume shapes may also be used to simulate the human view.
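For illustration only, the following Python sketch shows one way such a cone (rectangular-pyramid) representation of a view could be modeled; the class name, the choice of world up axis, and the 120/60 degree default angles are assumptions made for this example and are not taken from the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewCone:
    """Rectangular-pyramid approximation of a view (operator, camera, or projector)."""
    apex: np.ndarray            # 3D position of the view point (pyramid apex)
    direction: np.ndarray       # unit vector: line of sight, from apex toward the base center
    h_angle_deg: float = 120.0  # horizontal visible angle (apex angle in the horizontal plane)
    v_angle_deg: float = 60.0   # vertical visible angle (apex angle in the vertical plane)

    def contains(self, point: np.ndarray) -> bool:
        """Rough test: is `point` inside the pyramid (ignoring any far limit)?
        Assumes the line of sight is not parallel to the world up axis (0, 0, 1)."""
        to_point = point - self.apex
        dist_along = float(np.dot(to_point, self.direction))
        if dist_along <= 0.0:
            return False
        # Decompose the lateral offset into horizontal and vertical components.
        lateral = to_point - dist_along * self.direction
        up = np.array([0.0, 0.0, 1.0])
        right = np.cross(self.direction, up)
        right /= np.linalg.norm(right)
        true_up = np.cross(right, self.direction)
        h_off = abs(float(np.dot(lateral, right)))
        v_off = abs(float(np.dot(lateral, true_up)))
        h_limit = dist_along * np.tan(np.radians(self.h_angle_deg / 2.0))
        v_limit = dist_along * np.tan(np.radians(self.v_angle_deg / 2.0))
        return h_off <= h_limit and v_off <= v_limit

# Example: an operator looking straight ahead (+x) from 1.2 m above an assumed origin.
operator_view = ViewCone(apex=np.array([0.0, 0.0, 1.2]),
                         direction=np.array([1.0, 0.0, 0.0]))
print(operator_view.contains(np.array([5.0, 1.0, 1.0])))  # a point ahead and slightly left
```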
  • the second camera (s) 102 may be any kind of on-vehicle cameras that are known to those skilled in the art.
  • the number of the second camera (s) 102 may be one, two, or more.
  • the existing cameras provided on the vehicle may be used as the second camera (s) 102.
  • the second camera (s) 102 may be provided inside the vehicle so as to capture the image of the operator’s head.
  • a camera is provided at the central console of the vehicle as the second camera 102 so as to capture the image of the operator’s head when the operator is driving.
  • the captured image of the operator’s head may be used to determine a posture of the operator’s view. In order to determine the posture of the operator’s view more accurately, a pair of cameras may be used as the camera (s) 102. The details of such determination will be described hereinafter.
  • the image data of the image captured by the camera (s) 102 may be transmitted to the controller via wire (s) or wirelessly.
  • the camera(s) are used as the first and second image capturing devices.
  • One or more ultrasonic radars, sonic radars, or laser radars may also be used as the first image capturing device or the second image capturing device. Any device that can capture an image and generate image data may be used as the first image capturing device or the second image capturing device.
  • the IMU 103 may measure inertial data of the second camera (s) 102.
  • the IMU may measure acceleration and angular velocity in six degrees of freedom.
  • the measured inertial data may be used to perform motion compensation on the image captured by the second camera(s) 102. After the compensation, the clarity of the image captured by the second camera(s) 102 may be significantly improved.
  • the measured inertial data is transmitted to the controller 104 via wire (s) or wirelessly.
  • an IMU is used here as the inertial measurement device, but the present disclosure is not limited to this.
  • a combination of an accelerometer and a gyroscope may be used as the inertial measurement device. Any device that may obtain the inertial data may be used as the inertial measurement device.
  • the IMU 103 may be provided anywhere on the vehicle, and is preferably provided at the central console of the vehicle.
  • the controller 104 receives data from various components of the system 100, i.e., the first camera(s) 101, the second camera(s) 102, the IMU 103, and the projecting device 105, and transmits control commands to the above-mentioned various components.
  • a connection line with a bi-directional arrow between various components represents a bi-directional communication line, which may be tangible wires or may be achieved wirelessly, such as via radio, RF, or the like.
  • the specific controlling operations performed by the controller 104 will be described in detail with reference to Figs. 2-4 later.
  • the controller 104 may be a processor, a microprocessor or the like.
  • the controller 104 may be provided on the vehicle, for example, at the central console of the vehicle. Alternatively, the controller 104 may be provided remotely and may be accessed via various networks or the like.
  • the projector 105 may be a Cathode Ray Tube (CRT) projector, a Liquid Crystal Display (LCD) projector, a Digital Light Processor (DLP) projector or the like. Note that the projector 105 is used here as the projecting device, but the present disclosure is not limited to this. Other devices that can project an image onto a certain internal component of the vehicle, such as a combination of light source(s) and a series of lenses and mirrors, may also be used as the projecting device. A reflective material such as a retro-reflector may or may not be applied on an internal component of the vehicle, to which the image of the blind spot is to be projected. For example, the internal component is the left side A pillar.
  • CRT Cathode Ray Tube
  • LCD Liquid Crystal Display
  • DLP Digital Light Processor
  • the reflective material is not applied, and the image of the blind spot is directly projected onto the inner surface of the left side A pillar of the vehicle.
  • the projection can be adapted not only in granularity, but also in intensity.
  • the projecting device 105 may also be provided at the central console of the vehicle, in order to project the image of the blind spot onto, for example, the left side A pillar of the vehicle.
  • the types, numbers, and locations of the first camera (s) 101, the second camera (s) 102, the IMU 103, and the projector 105 are described in detail. But as can be easily understood by those skilled in the art, the types, numbers, and locations of the above components are not limited to the illustrated embodiment, and other types, numbers, and locations may be also used according to the actual requirements.
  • Fig. 2 illustrates a block diagram of an apparatus 200 for a vehicle (i.e., the controller 104 as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure.
  • the blocks of the apparatus 200 may be implemented by hardware, software, firmware, or any combination thereof to carry out the principles of the present disclosure. It is understood by those skilled in the art that the blocks described in Fig. 2 may be combined or separated into sub-blocks to implement the principles of the present disclosure as described above. Therefore, the description herein may support any possible combination or separation or further definition of the blocks described herein.
  • the apparatus 200 for a vehicle may include a posture of view determination unit 201, a segmental image determination unit 202, a posture of view compensation unit 203 (optional) , a vibration compensation unit 204 (optional) , an extraction and transformation unit 205, and a relevant information extraction unit 206 (optional) .
  • the apparatus 200 may further comprise a reception unit and a transmission unit for receiving and transmitting information, instructions, or the like, respectively.
  • the posture of view determination unit 201 may be configured to receive the image of the head of the operator of the vehicle captured by the second camera(s) 102 (hereinafter referred to as the Camera B), determine a posture of the operator’s view based on the received image, and output the data representing the posture of the operator’s view to the segmental image determination unit 202.
  • the data representing the posture of the operator’s view are, for example, the 3D positions of the two eyes of the operator and the direction of the line of sight of the operator. In one embodiment of the present disclosure, such data may be calculated based on image processing on the received image of the head of the operator.
  • values of distances among a plurality of feature points within the captured image of the head of the operator may be calculated, the position of view point may be determined according to the calculated values and based on a position of the Camera B, and the direction of line of sight may be determined according to the calculated values.
  • the distances among a plurality of feature points within the captured image of the head of the operator may be the distance between the two eyes, the distance between the two ears, the distance between one eye and the tip of the nose, and/or the like.
  • the position of view point may be determined with use of the known 3D position of the Camera B and a known knowledge base wherein statistics as to the distances among the feature points are stored.
  • the position of view point may be determined with use of the known 3D position of the Camera B and a known binocular vision algorithm or stereoscopic vision algorithm. Further, based on the values of such distances, the orientation of the face of the operator may be calculated, and then the direction of the line of sight of the operator, which may be consistent with the orientation of the face, may be determined accordingly. Any known image processing algorithms may be used for calculating the posture of the operator’s view. Alternatively, an eye tracker may be used to acquire the posture of the operator’s view. In such a case, the posture of view determination unit 201 may directly receive the data representing the posture of the operator’s view from the eye tracker.
  • the posture of the operator’s view may be looked up in a pre-stored table based on, for example, the above mentioned calculated distances.
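As a rough illustration of such a feature-point-based determination, the sketch below estimates the view-point position and a coarse line of sight from the two detected eye positions in the Camera B image using a pinhole-camera model; the focal length, the assumed average interpupillary distance, the coordinate conventions, and all function names are hypothetical values introduced only for this example.

```python
import numpy as np

# Assumed intrinsics of Camera B and a population-average eye spacing (hypothetical values).
FOCAL_LENGTH_PX = 900.0          # focal length of Camera B in pixels
AVG_EYE_DISTANCE_M = 0.063       # statistical interpupillary distance in metres

def estimate_view_posture(left_eye_px, right_eye_px, image_center_px, camera_pos_m):
    """Estimate the view-point 3D position and a coarse line-of-sight direction.

    left_eye_px / right_eye_px: (u, v) pixel coordinates of the eyes in Camera B's image.
    camera_pos_m: known 3D position of Camera B in the vehicle frame.
    """
    left = np.asarray(left_eye_px, dtype=float)
    right = np.asarray(right_eye_px, dtype=float)
    eye_dist_px = np.linalg.norm(left - right)

    # Pinhole model: the pixel distance between the eyes shrinks linearly with depth.
    depth_m = FOCAL_LENGTH_PX * AVG_EYE_DISTANCE_M / eye_dist_px

    # View point = midpoint between the eyes, back-projected to 3D.
    mid_px = (left + right) / 2.0
    offset_px = mid_px - np.asarray(image_center_px, dtype=float)
    lateral_m = offset_px * depth_m / FOCAL_LENGTH_PX          # lateral offset in metres
    view_point = np.asarray(camera_pos_m) + np.array([lateral_m[0], lateral_m[1], depth_m])

    # Coarse face orientation: assume the face points back toward the camera, tilted by the
    # same lateral offset (a real system would use more feature points or an eye tracker).
    line_of_sight = np.array([-lateral_m[0], -lateral_m[1], -depth_m])
    line_of_sight /= np.linalg.norm(line_of_sight)
    return view_point, line_of_sight

# Example usage with made-up pixel coordinates.
vp, los = estimate_view_posture((600, 340), (680, 342), (640, 360), (0.5, 0.0, 1.0))
print(vp, los)
```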
  • the segmental image determination unit 202 may be configured to receive the data representing the posture of the operator’s view from the posture of view determination unit 201, determine a segmental image to be extracted from the image captured by the first camera(s) 101 (hereinafter referred to as the Camera A) based on the posture of the operator’s view, and output the data representing the segmental image to be extracted to the extraction and transformation unit 205.
  • the operations of the segmental image determination unit 202 will be described in detail.
  • the following descriptions will be given under the assumption that an image of a blind spot is projected onto the left side A pillar of the vehicle.
  • the Camera A may be provided on the outside surface of the left side A pillar.
  • the height and the direction of the Camera A’s view may be arranged to cover all possible views of the operator during his driving.
  • the operators’ views differ from person to person, but the statistics of the operators’ views will be considered in deciding the height and the direction of the Camera A’s view.
  • Fig. 3 illustrates a case wherein the image of the blind spot is projected onto the left side A pillar of the vehicle. Fig. 3 will be described in detail later.
  • although the left side A pillar is considered here, it can be understood that the image may be projected onto the right side A pillar of the vehicle, both of the A pillars, one or both of the B pillars, one or both of the C pillars, the rear part of the vehicle, or the like, in accordance with the same principles.
  • a virtual spherical surface (corresponding to the virtual screen) is created to surround the vehicle.
  • the center of the sphere may be located at the position of view point of the operator, and the radius of the sphere may be arbitrary as long as the sphere at least surrounds the left front part of the vehicle (in the case wherein the left side A pillar is discussed as mentioned above).
  • the image captured by the Camera A, which comprises the image of the blind spot, is virtually projected onto the virtual spherical surface from the 3D position of the Camera A.
  • a portion of the image projected onto the virtual spherical surface, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to the blocking of an internal component such as the left side A pillar, is determined as the segmental image to be extracted.
  • a virtual cone simulating the posture of the operator’s view is projected onto the virtual spherical surface.
  • assume that the view point is a light source; the light rays will then extend within the cone and illuminate a portion of the image projected on the virtual spherical surface.
  • the left side A Pillar may be substantially virtualized to be a rectangle.
  • the shape of the virtualized rectangle will change according to the operator’s view. Further, a portion of the image projected on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of such a rectangle may be determined as the segmental image to be extracted.
  • the process of the above determination comprises: creating a virtual screen, projecting the image captured by the Camera A onto the virtual screen, determining the segmental image on the virtual screen that cannot be seen due to the blocking of the A pillar at the operator’s view point position and with the direction of the operator’s view.
  • the data representing the segmental image to be extracted may be the coordinate values defining the boundary of the above portion that the assumed light rays cannot reach on the virtual spherical surface.
  • the process of the segmental image determination unit 202 described here is merely illustrative, and the present disclosure is not limited thereto.
  • other processes that can determine the image blocked by the left side A pillar may also be used.
  • instead of a virtual spherical surface, a virtual cylindrical surface or a virtual plane may also be used.
  • the difference between the Camera A’s view and the operator’s view may be compensated with the above virtual screen creation method. In this regard, it is preferable to set the radius of the virtual sphere to be large, since the larger the virtual sphere is, the smaller the above-mentioned difference is.
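A minimal numerical sketch of the virtual-screen idea is given below, assuming a spherical screen centred at the operator's view point: each pixel ray from the Camera A is intersected with the sphere, yielding the point where that pixel lands on the virtual screen. The radius, positions, and function names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def project_ray_onto_sphere(ray_origin, ray_dir, sphere_center, radius):
    """Return the far intersection of a ray with a sphere, or None if it misses.

    ray_origin: 3D position of Camera A; ray_dir: direction of one pixel ray.
    sphere_center: operator's view point; radius: radius of the virtual screen.
    """
    o = np.asarray(ray_origin, dtype=float)
    d = np.asarray(ray_dir, dtype=float)
    d = d / np.linalg.norm(d)
    oc = o - np.asarray(sphere_center, dtype=float)

    # Solve |oc + t*d|^2 = radius^2  (a quadratic in t with leading coefficient 1).
    b = 2.0 * np.dot(oc, d)
    c = np.dot(oc, oc) - radius ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # the ray never reaches the virtual screen
    t = (-b + np.sqrt(disc)) / 2.0       # far hit, i.e. the point on the screen behind the scene
    if t <= 0.0:
        return None
    return o + t * d

# Example: Camera A sits near the left A pillar; the virtual screen is a 10 m sphere
# around an assumed view point.
view_point = np.array([0.0, 0.0, 1.2])
camera_a_pos = np.array([0.6, 1.5, 1.3])
hit = project_ray_onto_sphere(camera_a_pos, np.array([0.2, 1.0, 0.0]), view_point, 10.0)
print(hit)
```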
  • the extraction and transformation unit 205 is configured to receive the data representing the segmental image to be extracted, extract the segmental image from the image projected on the virtual spherical surface, perform a transformation on the extracted segmental image, and transmit commands to the projector 105 so as to cause it to project the transformed segmental image onto the left side A pillar.
  • the transformation may be a scale transformation and/or a rotation transformation and/or a translation transformation or the like.
  • the transformation may compensate for the difference between the position of the projector and the position of the left side A pillar. It is understood that this position difference is pre-determined and hence is known. With use of this transformation, the extracted segmental image may be appropriately projected on the left side A pillar.
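By way of illustration, the sketch below applies such a scale, rotation, and translation as a single 2D affine warp to the extracted segmental image; the calibration values in the example are placeholders, since in practice the matrix would be derived from the known projector-to-pillar geometry.

```python
import numpy as np
import cv2

def build_pillar_transform(angle_deg, scale, tx, ty):
    """Compose rotation, scale and translation into a single 2x3 affine matrix."""
    theta = np.radians(angle_deg)
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    return np.array([[scale * cos_t, -scale * sin_t, tx],
                     [scale * sin_t,  scale * cos_t, ty]], dtype=np.float32)

def warp_for_projection(segment_img, transform, out_size):
    """Warp the extracted segmental image into the projector's output coordinates."""
    return cv2.warpAffine(segment_img, transform, out_size)

# Example with placeholder calibration values (rotate 12 deg, shrink, shift right and down).
segment = np.zeros((240, 120, 3), dtype=np.uint8)        # stand-in for the extracted image
M = build_pillar_transform(angle_deg=12.0, scale=0.8, tx=40.0, ty=25.0)
projector_frame = warp_for_projection(segment, M, (320, 480))
print(projector_frame.shape)   # (480, 320, 3): image in the projector's output frame
```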
  • the posture of view compensation unit 203 is an optional component of the controller 104.
  • the posture of view compensation unit 203 is configured to receive the captured image of the head of the operator of the vehicle from the Camera B, determine whether at least one of the position of view point and the direction of line of sight of the operator’s view changes, and if yes, transmit commands to the segmental image determination unit 202 to cause it to re-determine the segmental image to be extracted.
  • the direction of line of sight of the operator may vary during the driving.
  • the scenes of the outside surroundings will change accordingly.
  • the scenes blocked by, for example, the left side A pillar will also change. That is, the segmental image to be extracted will change accordingly.
  • the heights of the eyes of the operators vary from person to person.
  • the segmental image to be extracted will also change according to the heights of the eyes.
  • the image finally projected on the A pillar will match the outside surroundings well from the operator’s view. That is, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
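One simple way to decide that the posture of view has changed enough to justify re-determining the segmental image is to threshold the displacement of the view point and the rotation of the line of sight, as in the sketch below; the threshold values are arbitrary examples, not values from the disclosure.

```python
import numpy as np

POSITION_THRESHOLD_M = 0.02      # re-determine if the view point moved more than 2 cm
ANGLE_THRESHOLD_DEG = 2.0        # ...or the line of sight rotated more than 2 degrees

def posture_changed(old_point, new_point, old_dir, new_dir):
    """Return True if the operator's view posture changed beyond the thresholds."""
    moved = np.linalg.norm(np.asarray(new_point) - np.asarray(old_point))
    cos_angle = np.clip(np.dot(old_dir, new_dir) /
                        (np.linalg.norm(old_dir) * np.linalg.norm(new_dir)), -1.0, 1.0)
    rotated = np.degrees(np.arccos(cos_angle))
    return moved > POSITION_THRESHOLD_M or rotated > ANGLE_THRESHOLD_DEG

# Example: negligible head shift, but a noticeable glance to the left.
print(posture_changed([0.0, 0.0, 1.2], [0.0, 0.005, 1.2],
                      [1.0, 0.0, 0.0], [0.95, 0.31, 0.0]))   # True (direction change)
```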
  • the vibration compensation unit 204 is also an optional component of the controller 104.
  • the vibration compensation unit 204 is configured to receive the inertial data of the Camera B from the IMU 103, receive the image data from the Camera B, perform motion compensation on the image captured by the Camera B according to the inertial data, and transmit the image that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203.
  • common motion compensation algorithms, such as the Range-Doppler algorithm, the autofocus algorithm, or the like, may be used here to perform the motion compensation. Other motion compensation algorithms may also be used.
  • the posture of the operator’s view may be determined with higher accuracy by the posture of view determination unit 201. Then the segmental image to be extracted may be determined with higher accuracy, too. As a result, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
  • the frame rate of the camera is generally not high.
  • the Camera B can output 5 frames per second.
  • the clarity of the image of the Camera B is also not high.
  • the image of the Camera B may be compensated in accordance with the inertial data, and the clarity of the image of the Camera B may thereby be improved.
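The following is a deliberately simplified sketch of IMU-based compensation, undoing only the translational image shift implied by a small camera rotation between frames; it stands in for the Range-Doppler or autofocus algorithms mentioned above, and its parameters are assumptions made for the example.

```python
import numpy as np

def compensate_translation(frame, gyro_rate_rad_s, dt_s, focal_length_px):
    """Very rough motion compensation: undo the image shift caused by a small
    rotation of Camera B over the frame interval dt_s.

    gyro_rate_rad_s: (pitch_rate, yaw_rate) reported by the IMU.
    A camera rotation by angle a shifts the image by roughly f * a pixels.
    """
    pitch, yaw = gyro_rate_rad_s
    dy = int(round(focal_length_px * pitch * dt_s))   # vertical pixel shift
    dx = int(round(focal_length_px * yaw * dt_s))     # horizontal pixel shift
    # Shift the frame back by the estimated motion (borders simply wrap in this sketch).
    return np.roll(frame, shift=(-dy, -dx), axis=(0, 1))

# Example: a 720x1280 frame and a small vibration measured by the IMU at 5 fps.
frame = np.zeros((720, 1280), dtype=np.uint8)
stabilized = compensate_translation(frame, gyro_rate_rad_s=(0.01, -0.02),
                                    dt_s=0.2, focal_length_px=900.0)
print(stabilized.shape)
```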
  • the relevant information extraction unit 206 is also an optional component of the controller 104. After the segmental image is extracted, the relevant information therein may be further extracted.
  • the relevant information refers to information related to driving safety. For example, the relevant information may relate to an image of an adjacent pedestrian such as a kid standing on a skateboard, an image of an adjacent moving vehicle such as an approaching bicycle, and the like.
  • the recognition and extraction of the relevant information may be achieved with known image recognition and extraction technologies.
  • the projection of the projector 105 may be initiated only when the relevant information is recognized. Thus the power consumption of the whole system may be reduced, and information overload for the operator may be avoided. Further, by projecting only the relevant information rather than the whole segmental image, the power consumption may also be reduced.
  • the relevant information extraction unit 206 may be configured to, after extracting the segmental image, further extract relevant information from the extracted segmental image, and transmit the extracted relevant information to the extraction and transformation unit 205.
  • the extracted relevant information, after being transformed, is projected onto the left side A pillar.
  • the relevant information extraction unit 206 may further be configured to generate an alerting message and transmit it together with the extracted relevant information to the extraction and transformation unit 205, such that the alerting message may be projected together with the relevant information.
  • the alerting message may be, for example, a red exclamation mark, a flickering circular mark, some characters, or the like to be projected in association with the relevant information.
  • the alerting message may be an animation.
  • the alerting message may be projected in association with the extracted segmental image after being transformed.
  • the projection will be user friendly. Further, safety may be enhanced.
  • an alerting voice such as “Pay attention please” may also be vocalized at the same time.
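As one possible realization of this relevant-information extraction, the sketch below uses OpenCV's stock HOG pedestrian detector as a stand-in for the "known image recognition and extraction technologies", triggers projection only when something is detected, and overlays a simple alerting mark; the detector choice and the callback interface are assumptions introduced for the example.

```python
import cv2
import numpy as np

# OpenCV's built-in HOG person detector stands in for the recognition technology.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def extract_relevant_info(segment_img):
    """Return bounding boxes of safety-relevant objects (here: pedestrians), or []."""
    boxes, _weights = hog.detectMultiScale(segment_img, winStride=(8, 8))
    return list(boxes)

def project_if_relevant(segment_img, projector_callback):
    """Start projection only when relevant information is recognized,
    and overlay a simple alerting mark on each detection."""
    detections = extract_relevant_info(segment_img)
    if not detections:
        return False                      # keep the projector off, save power
    overlay = segment_img.copy()
    for (x, y, w, h) in detections:
        cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 0, 255), 2)   # red alert box
        cv2.putText(overlay, "!", (x, max(y - 5, 10)),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 255), 2)
    projector_callback(overlay)
    return True

# Example usage with a dummy image and a stand-in projector function.
dummy = np.zeros((256, 128, 3), dtype=np.uint8)
project_if_relevant(dummy, projector_callback=lambda img: print("projecting", img.shape))
```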
  • the Camera A 101 is provided on the upper portion of the outside surface of the left A pillar, while the Camera B 102, the IMU 103, the controller 104, and the projector 105 are integrated into a TransA-PillarBox (Transparent A-Pillar Box), which is provided at the central console of the vehicle.
  • the Camera A 101 may capture an image of outside surroundings, which comprises the outside surroundings that could not be seen by the operator due to the blocking of the left side A pillar.
  • the Camera B 102 may capture the image of the head of the operator. Thus the posture of the operator’s view may be determined.
  • the image to be projected on the A pillar may be determined according to the posture of the operator’s view and further may be re-determined according to change of the posture of the operator’s view.
  • the image to be projected on the A pillar corresponds to the actual image of the outside surroundings which could not be seen by the operator due to the blocking of the left side A pillar.
  • the relevant information may be projected, or the relevant information along with the alerting message may be projected, or the relevant information along with the alerting message may be projected while the alerting voice is vocalized at the same time.
  • the system of the present disclosure can be an add-on system, which means it can be removed from the car easily. It does not need any redesign work for the internal components of the vehicle used for projection, which makes it cost-effective. Further, one advantage of the projection method is that the projection does not have to be very bright or of high resolution, and therefore it is possible to save cost.
  • Fig. 4 illustrates a flow chart showing a method 400 for a vehicle in accordance with an exemplary embodiment of the present disclosure.
  • the steps of the method 400 presented below are intended to be illustrative. In some embodiments, the method may be accomplished with one or more additional steps not described, and/or without one or more of the steps discussed. Additionally, the order in which the steps of the method are illustrated in Fig. 4 and described below is not intended to be limiting. In some embodiments, the method may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information).
  • the one or more processing devices may include one or more modules executing some or all of the steps of method in response to instructions stored electronically on an electronic storage medium.
  • the one or more processing modules may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the steps of method.
  • the method 400 is described under the case as shown in Fig. 3.
  • the method 400 starts from step 401, at which the apparatus 200, provided for example in a vehicle, starts up and begins to receive image data from the Camera A. Then, the received image data is transmitted to the segmental image determination unit 202.
  • the apparatus 200 may start up upon being requested by the operator, or may start up automatically when the vehicle is moving.
  • the apparatus 200 may be powered by a storage cell within the vehicle.
  • the apparatus 200 receives image data from the Camera B and transmits the received image data to the posture of view determination unit 201 or to the vibration compensation unit 204.
  • the vibration compensation unit 204 receives inertial data obtained by the IMU 103, receives the image data of the Camera B, and performs motion compensation on the image captured by the Camera B.
  • the vibration compensation unit 204 further transmits the image data that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203.
  • the posture of view determination unit 201 receives the image data of the Camera B, determines the posture of the operator’s view based on the image data received from the Camera B, and transmits the data about the posture of the operator’s view to the segmental image determination unit 202.
  • the posture of view determination unit 201 receives the image data of the operator’s head from the vibration compensation unit 204.
  • the posture of view determination unit 201 determines the position of view point and a direction of line of sight of the operator based on the image captured by the Camera B. The process of the determination is discussed previously and is not repeated here again.
  • the segmental image determination unit 202 receives the image data captured by the Camera A and the data about the posture of the operator’s view, and determines the segmental image to be extracted from the image captured by the Camera A based on the posture of the operator’s view.
  • the process of the determination is as follows. First, a virtual spherical surface is created. Second, the image captured by the Camera A is projected on the virtual spherical surface. Third, a virtual cone representing the posture of the operator’s view is intersected with the virtual spherical surface, so as to determine the image that should be seen by the operator. Fourth, the left side A pillar is virtualized to be a rectangle.
  • the shape of the virtualized rectangle will change according to the operator’s view. Assume that the view point is a light source, and the light rays extend within the above cone. A portion on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of the rectangle representing the A pillar may be determined as the segmental image to be extracted.
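For illustration, this "assumed light source" occlusion test can be written as a ray-rectangle intersection: a point on the virtual sphere belongs to the segmental image if the ray from the view point to that point passes through the rectangle representing the A pillar. The geometry values below are made-up examples, and the function name is hypothetical.

```python
import numpy as np

def ray_hits_rectangle(view_point, target_on_sphere, rect_corner, rect_u, rect_v):
    """Return True if the ray from the view point to a point on the virtual screen
    is blocked by the rectangle (corner + two edge vectors) representing the A pillar."""
    origin = np.asarray(view_point, dtype=float)
    direction = np.asarray(target_on_sphere, dtype=float) - origin
    normal = np.cross(rect_u, rect_v)
    denom = np.dot(normal, direction)
    if abs(denom) < 1e-9:
        return False                       # ray parallel to the pillar plane
    t = np.dot(normal, np.asarray(rect_corner, dtype=float) - origin) / denom
    if t <= 0.0 or t >= 1.0:
        return False                       # plane hit is behind the eye or beyond the screen
    hit = origin + t * direction - np.asarray(rect_corner, dtype=float)
    u = np.dot(hit, rect_u) / np.dot(rect_u, rect_u)
    v = np.dot(hit, rect_v) / np.dot(rect_v, rect_v)
    return 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0   # inside the rectangle means blocked

# Example: a pillar rectangle roughly ahead-left of an assumed view point.
blocked = ray_hits_rectangle(view_point=[0.0, 0.0, 1.2],
                             target_on_sphere=[3.4, 9.4, 1.2],
                             rect_corner=[0.5, 1.4, 0.7],
                             rect_u=[0.15, 0.05, 0.0],     # pillar width direction
                             rect_v=[0.0, 0.3, 1.0])       # pillar length direction
print(blocked)   # True: this ray is hidden behind the A pillar
```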
  • the extraction and transformation unit 205 receives the data regarding the segmental image to be extracted, extracts the segmental image determined at the step 405, performs a transformation on the extracted segmental image, and causes the projector to project the transformed image onto the A pillar.
  • at step 406, the method may further optionally comprise extracting relevant information from the extracted segmental image, transforming the relevant information, and causing the projector to project the transformed relevant information.
  • it may further comprise superimposing an alerting message on the transformed relevant information.
  • the alerting message may be superimposed onto the extracted segmental image.
  • the posture of view compensation unit 203 receives the image data captured by the Camera B or the image after being compensated, and determines whether the operator’s view changes. If yes, the method returns to the step 405 to re-determine the segmental image to be extracted. Otherwise, the method ends.
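Putting the steps together, the skeletal loop below mirrors steps 401 to 407; every class and function in it is a stub standing in for the real cameras, projector, and units of the apparatus 200, included only so the control flow can be read and run end to end.

```python
import numpy as np

class StubCamera:
    def read(self):
        return np.zeros((120, 160, 3), dtype=np.uint8)

class StubProjector:
    def show(self, image):
        print("projecting frame of shape", image.shape)

def determine_view_posture(head_frame):
    return np.array([0.0, 0.0, 1.2]), np.array([1.0, 0.0, 0.0])   # view point, line of sight

def view_changed(old, new):
    return not (np.allclose(old[0], new[0]) and np.allclose(old[1], new[1]))

def determine_segmental_image(outside_frame, posture, pillar_rect):
    return outside_frame[20:100, 40:120]       # stand-in for the virtual-screen extraction

def transform_for_pillar(segment):
    return segment                              # stand-in for the scale/rotation/translation

def run_projection_loop(camera_a, camera_b, projector, pillar_rect, n_frames=3):
    previous = None
    for _ in range(n_frames):                                        # steps 401-402: receive frames
        outside_frame = camera_a.read()
        head_frame = camera_b.read()
        posture = determine_view_posture(head_frame)                 # step 404
        if previous is None or view_changed(previous, posture):      # step 407
            segment = determine_segmental_image(outside_frame, posture, pillar_rect)  # step 405
            previous = posture
        projector.show(transform_for_pillar(segment))                # step 406

run_projection_loop(StubCamera(), StubCamera(), StubProjector(), pillar_rect=None)
```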
  • Fig. 5 illustrates a general hardware environment 500 wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
  • the computing device 500 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer or any combination thereof.
  • the aforementioned apparatus 200 may be wholly or at least partially implemented by the computing device 500 or a similar device or system.
  • the computing device 500 may comprise elements that are connected with or in communication with a bus 502, possibly via one or more interfaces.
  • the computing device 500 may comprise the bus 502, and one or more processors 504, one or more input devices 506 and one or more output devices 508.
  • the one or more processors 504 may be any kind of processor, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips).
  • the input devices 506 may be any kind of device that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control.
  • the output devices 508 may be any kind of device that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator and/or a printer.
  • the computing device 500 may also comprise or be connected with non-transitory storage devices 510 which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory) , a RAM (Random Access Memory) , a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code.
  • the non-transitory storage devices 510 may be detachable from an interface.
  • the non-transitory storage devices 510 may have data/instructions/code for implementing the methods and steps which are described above.
  • the computing device 500 may also comprise a communication device 512.
  • the communication device 512 may be any kind of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but is not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth TM device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities and/or the like.
  • When the computing device 500 is used as an on-vehicle device, it may also be connected to external devices, for example, a GPS receiver, or sensors for sensing different environmental data such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device 500 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle.
  • the computing device 500 may also communicate with other facilities of the vehicle, such as an engine system, a wiper, an anti-lock braking system or the like.
  • the non-transitory storage device 510 may store map information and software elements so that the processor 504 may perform route guidance processing.
  • the output device 508 may comprise a display for displaying the map, the location mark of the vehicle and also images indicating the travelling situation of the vehicle.
  • the output device 508 may also comprise a speaker or an interface with an ear phone for audio guidance.
  • the bus 502 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 502 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
  • ISA Industry Standard Architecture
  • MCA Micro Channel Architecture
  • EISA Enhanced ISA
  • VESA Video Electronics Standards Association
  • PCI Peripheral Component Interconnect
  • CAN Controller Area Network
  • the computing device 500 may also comprise a working memory 514, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 504, and may comprise but is not limited to a random access memory and/or a read-only memory device.
  • Software elements may be located in the working memory 514, including but not limited to an operating system 516, one or more application programs 518, drivers and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 518, and the units of the aforementioned apparatus 200 may be implemented by the processor 504 reading and executing the instructions of the one or more application programs 518. More specifically, the posture of view determination unit 201 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform the step S404.
  • segmental image determination unit 202 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform the step S405.
  • Other units of the aforementioned apparatus 200 may also, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform one or more of the aforementioned respective steps.
  • the executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device (s) 510 described above, and may be read into the working memory 514 possibly with compilation and/or installation.
  • the executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location.
  • the present disclosure may be implemented by software with necessary hardware, or by hardware, firmware and the like. Based on such understanding, the embodiments of the present disclosure may be embodied in part in a software form.
  • the computer software may be stored in a readable storage medium such as a floppy disk, a hard disk, an optical disk or a flash memory of the computer.
  • the computer software comprises a series of instructions to make the computer (e.g., a personal computer, a service station or a network terminal) execute the method or a part thereof according to respective embodiment of the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Mechanical Engineering (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)
  • Studio Devices (AREA)

Abstract

A system, a method, and an apparatus for a vehicle for projecting an image within the vehicle. The system for a vehicle comprises: a projecting device (105), a first image capturing device (101) configured to capture an image comprising an image of a blind spot, a second image capturing device (102) configured to capture an image of the head of an operator of the vehicle, and a controller (104). The controller is configured to determine a posture of the operator's view based on the captured image of the head of the operator; determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator's view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator's view comprises a position of view point and a direction of line of sight of the operator; extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle. The system can thus make images of blind spots visible to the operator inside the vehicle.

Description

SYSTEM, METHOD AND APPARATUS FOR VEHICLE AND COMPUTER READABLE MEDIUM

FIELD OF THE INVENTION
The present disclosure relates in general to the field of projection technologies used for a vehicle, and more particularly, to a system, method, and apparatus for a vehicle and a computer readable medium for projecting an image within the vehicle.
BACKGROUND OF THE INVENTION
While operating a vehicle, it is very important to obtain a wide field of vision. However, the space available for setting up more windows is limited. Therefore, the vehicle has certain blind spots that may lead to accidents. Some studies have reported techniques for capturing these blind spots with cameras. In these techniques, images of blind spots are displayed on an in-vehicle screen.
In contrast to installing an in-vehicle display, projection technology has been proposed to project images of blind spots onto the inner wall of the vehicle.
SUMMARY OF THE INVENTION
The present disclosure aims to provide a new and improved system, apparatus, and method for a vehicle for projecting images of blind spots onto the inner wall of the vehicle.
In accordance with a first exemplary embodiment of the present disclosure, a system for a vehicle is provided, characterized in comprising: a projecting device, a first image capturing device configured to capture an image comprising an image of a blind spot, a second image capturing device configured to capture an image of the head of an operator of the vehicle, and a controller. The controller is configured to: determine a posture of the operator’s view based on the captured image of the head of the operator; determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator; and extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle.
In an example of the present embodiment, the controller may be further configured to: determine an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determine the segmental image to be extracted in response to the occurrence of the change.
In another example of the present embodiment, the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
In another example of the present embodiment, the system may further comprise an inertial measurement device, configured to measure inertial data of the second image capturing device, and wherein the controller may be further configured to perform motion compensation on the image captured by the second image capturing device according to the inertial data.
In another example of the present embodiment, determining a posture of the operator’s view may comprise: calculating values of distances among a plurality of feature points within the captured image of the head of the operator, determining the position of view point according to the calculated values and based on a position of the second image capturing device, and determining the direction of line of sight according to the calculated values.
In another example of the present embodiment, determining a segmental image to be extracted may comprise: creating a virtual screen surrounding the vehicle, projecting the image captured by the first image capturing device onto the virtual screen, and, determining a portion of the image projected onto the virtual screen, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to blocking of the internal component, as the segmental image to be extracted.
In another example of the present embodiment, the controller may be further configured to further extract relevant information from the extracted segmental image, perform the transformation on the extracted segmental image, and cause the projecting device to project the relevant information onto the internal component of the vehicle, wherein the relevant information relates to driving safety.
In another example of the present embodiment, the internal component may be an A pillar, the first image capturing device may be provided on an outside surface of the A pillar, and the projecting device and the second image capturing device may be provided at a central console of the vehicle.
In accordance with a second exemplary embodiment of the present disclosure, a computer-implemented method for a vehicle is provided, characterized in comprising: receiving, from a first image capturing device, an image comprising an image of a blind spot, receiving, from a second image capturing device, an image of the head of an operator of the vehicle, determining a posture of the operator’s view based on the captured image of the head of the operator, determining a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and extracting the segmental image, performing a transformation on the extracted segmental image, and causing a projecting device to project the transformed segmental image onto an internal component of the vehicle.
In an example of the present embodiment, the method may further comprise: determining an occurrence of a change of the posture of the operator’s view based on the captured image of the head of the operator, and re-determining the segmental image to be extracted in response to the occurrence of the change.
In another example of the present embodiment, the change of the posture of the operator’s view may comprise at least one of a change of the position of view point and a change of the direction of line of sight.
In another example of the present embodiment, the method may further comprise: receiving, from an inertial measurement device, inertial data of the second image capturing device, and performing motion compensation on the image captured by the second image capturing device according to the inertial data.
In accordance with a third exemplary embodiment of the present disclosure, an apparatus for a vehicle is provided, characterized in comprising: a memory configured to store a series of computer executable instructions; and a processor configured to execute said series of computer executable instructions, wherein said series of computer executable instructions, when executed by the processor, cause the processor to perform operations of the steps of the above mentioned method.
In accordance with a fourth exemplary embodiment of the present disclosure, a non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform the steps of the above mentioned method is provided.
BRIEF DESCRIPTION OF THE DRAWINGS
The above and other aspects and advantages of the present disclosure will become apparent from the following detailed description of exemplary embodiments taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the present disclosure. Note that the drawings are not necessarily drawn to scale. 
Fig. 1 illustrates a block diagram of a system for a vehicle in accordance with an exemplary embodiment of the present disclosure.
Fig. 2 illustrates a block diagram of an apparatus for a vehicle (i.e., the controller as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure.
Fig. 3 is a diagram illustrating an example of the system in accordance with an exemplary embodiment of the present disclosure;
Fig. 4 illustrates a flow chart showing a process of determining an image to be projected on an A pillar according to the operator’s view in accordance with an exemplary embodiment of the present disclosure; and
Fig. 5 illustrates a general hardware environment wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
In the following detailed description, numerous specific details are set forth to provide a thorough understanding of the described exemplary embodiments. It will be apparent, however, to one skilled in the art that the described embodiments can be practiced without some or all of these specific details. In other exemplary embodiments, well known structures or process steps have not been described in detail in order to avoid unnecessarily obscuring the concept of the present disclosure.
The term “vehicle” used throughout the specification refers to a car, an airplane, a helicopter, a ship, or the like. The term “A or B” used throughout the specification refers to “A and B” as well as “A or B”, rather than meaning that A and B are exclusive, unless otherwise specified.
Referring first to Fig. 1, there is shown a block diagram of a system 100 for a vehicle in accordance with an exemplary embodiment of the present disclosure. The system 100 comprises one or more first camera (s) 101 (corresponding to the first image capturing device) that may capture an image comprising an image of a blind spot, one or more second  camera (s) 102 (corresponding to the second image capturing device) that may capture an image of head of an operator of the vehicle, an inertial measurement unit (IMU) 103 (corresponding to the inertial measurement device) that may measure inertial data of the camera (s) 102, a controller 104 that may control an overall operation of the system 100, and a projector 105 (corresponding to the projecting device) that may project an image onto an internal component of the vehicle.
The first camera (s) 101 may be any kind of on-vehicle cameras that are known to those skilled in the art. The number of the first camera (s) 101 may be one, two, or more. The existing cameras provided on the vehicle may be used as the first camera (s) 101. The blind spot means the outside surroundings that cannot be seen from the operator’s view. Specifically, the blind spot means the outside surroundings that are blocked by an internal component of the vehicle and hence cannot be seen from the operator’s view. For example, in a case wherein the operator is driving the vehicle, the outside surroundings that are blocked by an A pillar, such as a left side A pillar, of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot. For another example, in a case wherein the operator turns his body back, looks at the rear part of the vehicle and is reversing the vehicle, the outside surroundings that are blocked by the rear part of the vehicle and hence cannot be seen from the operator’s view constitute a blind spot. However, the blind spot is not limited to these examples. The camera (s) 101 are provided outside the vehicle so as to at least capture images of the outside surroundings that cannot be seen from the operator’s view. Note that an image covering a wide range of vision will be captured by the camera (s) 101, in which the image of the blind spot is comprised. In the case wherein the outside surroundings are blocked by the left side A pillar, a camera may be provided on the outside surface of the left side A pillar, and the view of this camera may substantially cover all possible views of the operator during his driving. The image data of the image captured by the camera (s) 101 may be transmitted to the controller via wire (s) or wirelessly.
The posture of the operator’s view may be defined by a position, such as a three-dimensional (3D) position, of view point and a direction of line of sight. The position of view point may be the positions of two eyes of the operator. For simplification, throughout the present specification, the position of view point refers to the position of the midpoint of the line segment connecting the positions of the two eyes of the operator. The direction of line of sight means a direction in which the operator looks. For example, the direction of the operator’s line of sight may reflect whether the operator is looking forward, looking up, looking down, looking left, looking right, or the like. More specifically, for example, the direction of the operator’s line of sight may also reflect whether the operator is looking 30 degrees left from forward or 60 degrees left from forward. Further, under the assumption that the direction of line of sight is horizontally forward, the horizontal visible angle of the human’s two eyes is about 120 degrees, and the vertical visible angle of the human’s two eyes is about 60 degrees.
A cone, preferably a rectangular pyramid with four equal lateral edges, may be used to virtually represent the operator’s view. In particular, the apex of the cone may represent the position of view point. A straight line, which passes through the apex of the cone and is perpendicular to the bottom surface thereof, may represent the direction of the operator’s line of sight. Further, also under the assumption that the direction of line of sight is horizontally forward, the apex angle of a triangle formed by making the cone intersect with a horizontal plane that comprises the apex of the cone may represent the horizontal visible angle, and similarly, the apex angle of a triangle formed by making the cone intersect with a vertical plane that comprises the apex of the cone may represent the vertical visible angle.
The view of a camera or the view of a projector may be defined in a similar way, and hence may also be virtually represented with a cone. The view of the camera (s) 102 is represented with a rectangular pyramid as shown in Fig. 3. Note that other volumetric shapes may also be used to simulate the human view.
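By way of illustration only, the pyramidal view volume described above can be expressed as a simple membership test. The following Python sketch is not part of the disclosure; the coordinate frame, the assumed world “up” axis, and the half-angles (half of the roughly 120-degree horizontal and 60-degree vertical visible angles mentioned above) are illustrative assumptions.

    import numpy as np

    def view_frame(direction, world_up=np.array([0.0, 0.0, 1.0])):
        """Build an orthonormal frame (right, up, forward) for the line of sight."""
        forward = np.asarray(direction, dtype=float)
        forward /= np.linalg.norm(forward)
        right = np.cross(forward, world_up)
        right /= np.linalg.norm(right)
        up = np.cross(right, forward)
        return right, up, forward

    def is_visible(point, view_point, direction,
                   h_half_angle=np.deg2rad(60.0),   # half of the ~120 degree horizontal angle
                   v_half_angle=np.deg2rad(30.0)):  # half of the ~60 degree vertical angle
        """Return True if 'point' lies inside the pyramidal view volume of the operator."""
        right, up, forward = view_frame(direction)
        v = np.asarray(point, dtype=float) - np.asarray(view_point, dtype=float)
        x, y, z = v @ right, v @ up, v @ forward
        if z <= 0.0:                                 # behind the operator
            return False
        return (abs(np.arctan2(x, z)) <= h_half_angle and
                abs(np.arctan2(y, z)) <= v_half_angle)

    # Example: a point 4 m ahead and 1 m to the left of a view point at the driver seat
    print(is_visible([4.0, 1.0, 1.2], view_point=[0.0, 0.0, 1.2], direction=[1.0, 0.0, 0.0]))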
The second camera (s) 102 may be any kind of on-vehicle cameras that are known to those skilled in the art. The number of the second camera (s) 102 may be one, two, or more. The existing cameras provided on the vehicle may be used as the second camera (s) 102. The second camera (s) 102 may be provided inside the vehicle so as to capture the image of the operator’s head. For example, a camera is provided at the central console of the vehicle as the second camera 102 so as to capture the image of the operator’s head when the operator is driving. The captured image of the operator’s head may be used to determine a posture of the operator’s view. In order to determine the posture of the operator’s view more accurately, a pair of cameras may be used as the camera (s) 102. The details of such determination will be described hereinafter. The image data of the image captured by the camera (s) 102 may be transmitted to the controller via wire (s) or wirelessly.
In this embodiment, the camera (s) are used as the first and second image capturing devices. But the present disclosure is not limited to this. One or more ultrasonic radar (s) , sonic radar (s) , or laser radar (s) may also be used as the first image capturing device or the second image capturing device. Any device that can capture an image and generate image data may be used as the first image capturing device or the second image capturing device.
The IMU 103 may measure inertial data of the second camera (s) 102. In particular, the IMU may measure the acceleration and the angular velocity in six degrees of freedom. The measured inertial data may be used to perform motion compensation on the image captured by the second camera (s) 102. After the compensation, the definition of the image captured by the second camera (s) 102 may be significantly improved. The measured inertial data is transmitted to the controller 104 via wire (s) or wirelessly. Note that an IMU is used here as the inertial measurement device, but the present disclosure is not limited to this. Alternatively, a combination of an accelerometer and a gyroscope may be used as the inertial measurement device. Any device that may obtain the inertial data may be used as the inertial measurement device. The IMU 103 may be provided anywhere on the vehicle, and preferably at the central console of the vehicle.
The controller 104 receives data from various components of the system 100, i.e., the first camera (s) 101, the second camera (s) 102, the IMU 103, and the projecting device 105. The controller 104 also transmits control commands to the above-mentioned various components. In Fig. 1, a connection line with a bi-directional arrow between various components represents a bi-directional communication line, which may be tangible wires or may be achieved wirelessly, such as via radio, RF, or the like. The specific controlling operations performed by the controller 104 will be described in detail with reference to Figs. 2-4 later. The controller 104 may be a processor, a microprocessor or the like. The controller 104 may be provided on the vehicle, for example, at the central console of the vehicle. Alternatively, the controller 104 may be provided remotely and may be accessed via various networks or the like.
The projector 105 may be a Cathode Ray Tube (CRT) projector, a Liquid Crystal Display (LCD) projector, a Digital Light Processor (DLP) projector or the like. Note that the projector 105 is used here as the projecting device, but the present disclosure is not limited to this. Other devices that can project an image onto a certain internal component of the vehicle, such as a combination of light source (s) and a series of lenses and mirrors, may also be used as the projecting device. A reflective material such as a retro-reflector may or may not be applied on an internal component of the vehicle, to which the image of the blind spot is to be projected. For example, the internal component is the left side A pillar. In one embodiment of the present disclosure, the reflective material is not applied, and the image of the blind spot is directly projected onto the inner surface of the left side A pillar of the vehicle. The projection can be adapted not only in the granularity, but also in the intensity of the projection. The projecting device 105 may also be provided at the central console of the vehicle, in order to project the image of the blind spot onto, for example, the left side A pillar of the vehicle.
The types, numbers, and locations of the first camera (s) 101, the second camera (s) 102, the IMU 103, and the projector 105 have been described in detail above. But as can be easily understood by those skilled in the art, the types, numbers, and locations of the above components are not limited to the illustrated embodiment, and other types, numbers, and locations may also be used according to the actual requirements.
Fig. 2 illustrates a block diagram of an apparatus 200 for a vehicle (i.e., the controller 104 as shown in Fig. 1) in accordance with an exemplary embodiment of the present disclosure. The blocks of the apparatus 200 may be implemented by hardware, software, firmware, or any combination thereof to carry out the principles of the present disclosure. It is understood by those skilled in the art that the blocks described in Fig. 2 may be combined or separated into sub-blocks to implement the principles of the present disclosure as described above. Therefore, the description herein may support any possible combination or separation or further definition of the blocks described herein.
Referring to Fig. 2, the apparatus 200 for a vehicle may include a posture of view determination unit 201, a segmental image determination unit 202, a posture of view compensation unit 203 (optional) , a vibration compensation unit 204 (optional) , an extraction and transformation unit 205, and a relevant information extraction unit 206 (optional) . Although it is not illustrated, the apparatus 200 may further comprise a reception unit and a transmission unit for receiving and transmitting information, instructions, or the like, respectively.
The posture of view determination unit 201 may be configured to receive the image of head of operator of the vehicle captured by the second camera (s) 102 (hereinafter being referred to as the Camera B) , determine a posture of the operator’s view based on the received image, and output the data representing the posture of the operator’s view to the segmental image determination unit 202. The data representing the posture of the operator’s view are, for example, the 3D positions of the two eyes of the operator and the direction of the line of sight of the operator. In one embodiment of the present disclosure, such data may be calculated based on the image processing on the received image of head of the operator. In particular, values of distances among a plurality of feature points within the captured image of head of the operator may be calculated, the position of view point may be determined according to the calculated values and based on a position of the Camera B, and, the direction of line of sight may be determined according to the calculated values. The distances among a  plurality of feature points within the captured image of head of the operator may be the distance between two eyes, the distance between two ears, the distance between one eye and the tip of the nose, and/or the like. Based on the values of such distances, the position of view point may be determined with use of the known 3D position of the Camera B and a known knowledge base wherein statistics as to the distances among the feature points are stored. Alternatively, based on the values of such distances among a plurality of images of head of the operator (or the variations of the values of such distances among a plurality of images of head of the operator) , the position of view point may be determined with use of the known 3D position of the Camera B and a known binocular vision algorithm or stereoscopic vision algorithm. Further, based on the values of such distances, the orientation of the face of the operator may be calculated, and then the direction of the line of sight of the operator, which may be consistent with the orientation of the face, may be determined accordingly. Any known image processing algorithms may be used for calculating the posture of the operator’s view. Alternatively, an eye tracker may be used to acquire the posture of the operator’s view. In such a case, the posture of view determination unit 201 may directly receive the data representing the posture of the operator’s view from the eye tracker.
Instead of calculating the posture of the operator’s view, the posture of the operator’s view may be looked up in a pre-stored table based on, for example, the above mentioned calculated distances.
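As an illustration of the calculation-based approach described above (rather than the look-up table), the following Python sketch estimates the view point from the pixel distance between the two eyes using a pinhole-camera model. It is only a sketch under stated assumptions: the average interpupillary distance, the focal length and principal point of the Camera B, its 3D position, and the alignment of its axes with the vehicle axes are illustrative values, not parameters taken from the disclosure.

    import numpy as np

    IPD_M = 0.063                                # assumed average interpupillary distance (metres)
    FOCAL_PX = 900.0                             # assumed focal length of the Camera B (pixels)
    PRINCIPAL_POINT = (640.0, 360.0)             # assumed principal point of the Camera B
    CAMERA_B_POS = np.array([0.6, 0.0, 1.0])     # assumed 3D position of the Camera B (metres)

    def view_point_from_eyes(left_eye_px, right_eye_px):
        """Estimate the 3D view point (midpoint of the two eyes) from one Camera B image."""
        left = np.asarray(left_eye_px, dtype=float)
        right = np.asarray(right_eye_px, dtype=float)

        # Pinhole model: the farther the head, the smaller the pixel distance between the eyes.
        pixel_dist = np.linalg.norm(left - right)
        depth = FOCAL_PX * IPD_M / pixel_dist

        # Back-project the image midpoint of the eyes to 3D camera coordinates.
        mid = (left + right) / 2.0
        cx, cy = PRINCIPAL_POINT
        head_in_camera = np.array([(mid[0] - cx) / FOCAL_PX * depth,
                                   (mid[1] - cy) / FOCAL_PX * depth,
                                   depth])

        # The camera axes are assumed aligned with the vehicle axes here; a real system
        # would also apply the known rotation of the Camera B.
        return CAMERA_B_POS + head_in_camera

    print(view_point_from_eyes((600, 340), (680, 342)))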
The segmental image determination unit 202 may be configured to receive the data representing the posture of the operator’s view from the posture of view determination unit 201, determine a segmental image to be extracted from the image captured by the first camera (s) 101 (hereinafter being referred to as the Camera A) based on the posture of the operator’s view, and output the data representing the segmental image to be extracted to the extraction and transformation unit 205.
Next, the operations of the segmental image determination unit 202 will be described in detail. The following descriptions will be given under the assumption that an image of a blind spot is projected onto the left side A pillar of the vehicle. In such a case, the Camera A may be provided on the outside surface of the left side A pillar. In particular, the height and the direction of the Camera A’s view may be arranged to cover all possible views of the operator during his driving. The operators’ views are different from person to person, but the statistics of the operators’ views will be considered to decide the height and the direction of the Camera A’s view. Fig. 3 illustrates a case wherein the image of the blind spot is projected onto the left side A pillar of the vehicle. Fig. 3 will be described in detail later.
Throughout the description, although the left side A pillar is considered, it can be understood that the image may be projected onto the right side A pillar of the vehicle, both of the A pillars, one or both of the B pillars, one or both of the C pillars, the rear part of the vehicle, or the like in accordance with the same principles.
First, a virtual spherical surface (corresponding to the virtual screen) is created to surround the vehicle. The center of the sphere may be located at the position of view point of the operator, and the radius of the sphere may be arbitrary as long as the sphere may at least surround the left front part of the vehicle (in the case wherein the left side A pillar is discussed as mentioned above) . Second, the image comprising the image of the blind spot captured by the Camera A is virtually projected onto the virtual spherical surface from the 3D position of the Camera A. Third, a portion of the image projected onto the virtual spherical surface, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to blocking of an internal component such as the left side A pillar, is determined as the segmental image to be extracted. In particular, a virtual cone simulating the posture of the operator’s view is projected onto the virtual spherical surface. Assuming that the view point is a light source, the light rays will extend within the cone and illuminate a portion of the image projected on the virtual spherical surface. In the case wherein the left A pillar is the blocking component, the left side A Pillar may be substantially virtualized to be a rectangle. It can be understood that the shape of the virtualized rectangle will change according to the operator’s view. Further, a portion of the image projected on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of such a rectangle may be determined as the segmental image to be extracted.
In other words, the process of the above determination comprises: creating a virtual screen, projecting the image captured by the Camera A onto the virtual screen, and determining the segmental image on the virtual screen that cannot be seen, due to the blocking of the A pillar, from the operator’s view point position in the direction of the operator’s view.
The data representing the segmental image to be extracted may be the coordinate values defining the boundary of the above portion that the assumed light rays cannot reach on the virtual spherical surface.
Note that the operations of the segmental image determination unit 202 described here are merely illustrative and the present disclosure is not limited thereto. Other processes that can determine the image blocked by the left side A pillar may also be used. Instead of a virtual spherical surface, a virtual cylindrical surface or a virtual plane may also be used.
The difference between the Camera A’s view and the operator’s view may be compensated for with the above virtual screen creation method. In this respect, it is preferable to set the radius of the virtual sphere to be large, since the larger the virtual sphere is, the smaller the above mentioned difference is.
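A minimal geometric sketch of the determination described above is given below in Python. It assumes a single Camera A position, a view point, a virtual sphere radius, and an A pillar approximated by one rectangle (a corner point plus two edge vectors); all of these inputs, and the function names, are illustrative and not taken from the disclosure.

    import numpy as np

    def ray_sphere(origin, direction, center, radius):
        """Far intersection of a ray with the virtual sphere, or None if the ray misses it."""
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        d /= np.linalg.norm(d)
        oc = o - np.asarray(center, dtype=float)
        b = 2.0 * d @ oc
        c = oc @ oc - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b + np.sqrt(disc)) / 2.0           # far root: the virtual screen surrounds the vehicle
        return o + t * d if t > 0.0 else None

    def segment_blocked(origin, target, corner, edge_u, edge_v):
        """True if the segment origin->target passes through the pillar rectangle."""
        o = np.asarray(origin, dtype=float)
        w = np.asarray(target, dtype=float) - o
        p0 = np.asarray(corner, dtype=float)
        u = np.asarray(edge_u, dtype=float)
        v = np.asarray(edge_v, dtype=float)
        n = np.cross(u, v)
        denom = w @ n
        if abs(denom) < 1e-9:
            return False
        t = ((p0 - o) @ n) / denom
        if t <= 0.0 or t >= 1.0:                 # the pillar must lie between the eye and the screen
            return False
        q = o + t * w - p0
        a = (q @ u) / (u @ u)
        b = (q @ v) / (v @ v)
        return 0.0 <= a <= 1.0 and 0.0 <= b <= 1.0

    def segmental_mask(pixel_rays, camera_a_pos, view_point, radius,
                       pillar_corner, pillar_u, pillar_v):
        """For each Camera A pixel ray, True if its point on the sphere is hidden by the pillar."""
        mask = np.zeros(len(pixel_rays), dtype=bool)
        for i, ray in enumerate(pixel_rays):
            p = ray_sphere(camera_a_pos, ray, view_point, radius)
            if p is not None:
                mask[i] = segment_blocked(view_point, p, pillar_corner, pillar_u, pillar_v)
        return mask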
The extraction and transformation unit 205 is configured to receive the data representing the segmental image to be extracted, extract the segmental image from the image projected on the virtual spherical surface, perform a transformation on the extracted segmental image, and transmit commands to the projector 105 so as to cause it to project the transformed segmental image onto the left side A pillar.
Here, the transformation may be a scale transformation and/or a rotation transformation and/or a translation transformation or the like. The transformation may compensate for the difference between the position of the projector and the position of the left side A pillar. It is understood that this position difference is pre-determined and hence is known. With use of this transformation, the extracted segmental image may be appropriately projected on the left A pillar.
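One possible way to realize such a combined scale/rotation/translation compensation is a planar perspective warp, sketched below with OpenCV. The corner coordinates are placeholders standing in for the pre-determined calibration between the projector and the A pillar; they are not values from the disclosure.

    import cv2
    import numpy as np

    # Corners of the extracted segmental image (pixels) and the corresponding corners of
    # the A pillar region in the projector frame (pixels).  Both sets of values are
    # placeholders for the pre-determined projector/pillar calibration.
    segment_corners = np.float32([[0, 0], [400, 0], [400, 900], [0, 900]])
    pillar_corners = np.float32([[120, 60], [230, 40], [260, 820], [100, 850]])

    H = cv2.getPerspectiveTransform(segment_corners, pillar_corners)

    def to_projector_frame(segment_img, projector_size=(1280, 720)):
        """Warp the segmental image so that, once projected, it lands on the A pillar."""
        return cv2.warpPerspective(segment_img, H, projector_size)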
The posture of view compensation unit 203 is an optional component of the controller 104. The posture of view compensation unit 203 is configured to receive the captured image of head of operator of the vehicle from the Camera B, and determine whether at least one of the position of view point and the direction of line of sight of the operator’s view changes, and if yes, transmit commands to the segmental image determination unit 202 to cause it to re-determine the segmental image to be extracted.
It can be understood that the direction of line of sight of the operator may vary during driving. In a case wherein the operator turns his head to the left, the scenes of the outside surroundings will change accordingly. Then the scenes blocked by, for example, the left side A pillar will also change. That is, the segmental image to be extracted will change accordingly. Similarly, for example, the heights of the eyes of the operators vary from person to person. The segmental image to be extracted will also change according to the heights of the eyes.
By taking the position of view point and the direction of line of sight of the operator and further the changes thereof into consideration, the image finally projected on the A pillar will match the outside surroundings well from the operator’s view. That is, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
The vibration compensation unit 204 also is an optional component of the controller 104. The vibration compensation unit 204 is configured to receive the inertial data of the Camera B from the IMU 103, receive the image data from the Camera B, perform motion compensation on the image captured by the Camera B according to the inertial data, and transmit the image that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203. The common motion compensation algorithms, such as Range-Doppler algorithm, autofocus algorithm or the like, may be used here to perform the motion compensation. Other motion compensation algorithms also may be used.
With use of the image of the head of the operator after being compensated, the posture of the operator’s view may be determined with higher accuracy by the posture of view determination unit 201. Then the segmental image to be extracted may be determined with higher accuracy, too. As a result, the image finally projected on the A pillar will be continuous with the outside surroundings from the operator’s view.
As is known, the frame rate of a camera is generally not high. For example, the Camera B may output 5 frames per second. In addition, in view of the vibration of the vehicle, the definition of the image of the Camera B is not high either. In view of this, with use of the IMU 103, which has a high refresh rate such as 10,000 times per second, the image of the Camera B may be compensated in accordance with the inertial data and the definition of the image of the Camera B may be improved.
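As a hedged illustration of one possible IMU-based compensation (handling only the rotational part of the vibration), the following Python sketch integrates the gyroscope samples between two frames and warps the image of the Camera B back by the corresponding pure-rotation homography. The camera intrinsics, the sign convention of the rotation, and the neglect of translation are illustrative assumptions; the Range-Doppler and autofocus algorithms mentioned above are equally applicable.

    import cv2
    import numpy as np

    K = np.array([[900.0, 0.0, 640.0],           # assumed intrinsic matrix of the Camera B
                  [0.0, 900.0, 360.0],
                  [0.0, 0.0, 1.0]])

    def compensate_rotation(frame, gyro_samples, dt):
        """Undo the small camera rotation accumulated between two frames of the Camera B.

        gyro_samples: angular-velocity vectors (rad/s) measured by the IMU since the
        previous frame; dt: time step between consecutive IMU samples.  Translation of
        the camera is ignored in this sketch.
        """
        # Integrate the angular velocity into a single rotation vector.
        rot_vec = np.zeros(3)
        for omega in gyro_samples:
            rot_vec += np.asarray(omega, dtype=float) * dt
        R, _ = cv2.Rodrigues(rot_vec)

        # For a pure rotation, image points move by the homography K * R^T * K^-1
        # (the sign convention depends on how the IMU and camera frames are defined).
        H = K @ R.T @ np.linalg.inv(K)
        h, w = frame.shape[:2]
        return cv2.warpPerspective(frame, H, (w, h))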
The relevant information extraction unit 206 also is an optional component of the controller 104. After the segmental image is extracted, the relevant information therein may be further extracted. The relevant information refers to information related to driving safety. For example, the relevant information may relate to an image of an adjacent pedestrian such as a kid standing on a skateboard, an image of an adjacent moving vehicle such as an approaching bicycle, and the like. The recognition and extraction of the relevant information may be achieved with known image recognition and extraction technologies. The projection of the projector 105 may be initiated only when the relevant information is recognized. Thus the power consumption of the whole system may be reduced, and the overload of information to the operator may be avoided. Further, by merely projecting the relevant information rather than projecting the whole segmental image, the power consumption may also be reduced.
The relevant information extraction unit 206 may be configured to, after extracting the segmental image, further extract relevant information from the extracted segmental image, and transmit the extracted relevant information to the extraction and transformation unit 205. The extracted relevant information, after being transformed, is projected onto the left side A pillar. The relevant information extraction unit 206 may further be configured to generate an alerting message and transmit it together with the extracted relevant information to the extraction and transformation unit 205, such that the alerting message may be projected together with the relevant information. The alerting message may be, for example, a red exclamatory mark, a flickering circular mark, some characters, or the like to be projected in association with the relevant information. The alerting message may also be an animation. Alternatively, the alerting message may be projected in association with the extracted segmental image after being transformed. With the projection of the alerting message, the projection becomes more user friendly. Further, safety may be enhanced. In some implementations of the invention, along with the projection of the alerting message, an alerting voice such as “Pay Attention Please” may also be vocalized at the same time.
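The disclosure does not prescribe a particular recognition technique, so the following Python sketch merely uses OpenCV’s stock HOG people detector as a stand-in for the recognition of relevant information, and draws a simple alerting marker (a red box with an exclamation mark) around each detection; the marker style and all parameters are illustrative.

    import cv2

    hog = cv2.HOGDescriptor()
    hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

    def relevant_regions(segment_img):
        """Detect pedestrians in the extracted segmental image; return their bounding boxes."""
        boxes, _weights = hog.detectMultiScale(segment_img, winStride=(8, 8))
        return boxes

    def overlay_alert(segment_img):
        """Draw a simple alerting marker around each detected pedestrian."""
        out = segment_img.copy()
        for (x, y, w, h) in relevant_regions(segment_img):
            x, y, w, h = int(x), int(y), int(w), int(h)
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 3)      # red box
            cv2.putText(out, "!", (x, max(y - 10, 0)),
                        cv2.FONT_HERSHEY_SIMPLEX, 1.5, (0, 0, 255), 3)      # exclamation mark
        return out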
Next, a specific example of the system 100 will be described with reference to Fig. 3. As shown in Fig. 3, the Camera A 101 is provided on the upper portion of the outside surface of the left A pillar, while the Camera B 102, the IMU 103, the controller 104 and the projector 105 are integrated into a TransA-PillarBox (Transparent A-Pillar Box) , and this TransA-PillarBox is provided at the central console of the vehicle. As also shown in Fig. 3, the Camera A 101 may capture an image of the outside surroundings, which comprises the outside surroundings that could not be seen by the operator due to the blocking of the left side A pillar. The Camera B 102 may capture the image of the head of the operator. Thus the posture of the operator’s view may be determined. As mentioned previously, the image to be projected on the A pillar may be determined according to the posture of the operator’s view and further may be re-determined according to a change of the posture of the operator’s view. As can be understood, the image to be projected on the A pillar corresponds to the actual image of the outside surroundings which could not be seen by the operator due to the blocking of the left side A pillar. Further, instead of projecting the extracted segmental image, the relevant information may be projected, or the relevant information along with the alerting message may be projected, or the relevant information along with the alerting message may be projected and meanwhile the alerting voice may be vocalized.
The system of the present disclosure can be an add-on system, which means it can be removed from the car easily. It does not require any redesign of the vehicle’s internal components used for the projection. This makes the system cost-effective. Further, one advantage of the projection method is that the projection does not have to be very bright and is not required to have a high resolution, and therefore it is possible to save cost.
Fig. 4 illustrates a flow chart showing a method 400 for a vehicle in accordance with an exemplary embodiment of the present disclosure. The steps of the method 400 presented below are intended to be illustrative. In some embodiments, the method may be accomplished with one or more additional steps not described, and/or without one or more of the steps discussed. Additionally, the order in which the steps of the method are illustrated in FIG. 4 and described below is not intended to be limiting. In some embodiments, the method may be implemented in one or more processing devices (e.g., a digital processor, an analog processor, a digital circuit designed to process information, an analog circuit designed to process information, a state machine, and/or other mechanisms for electronically processing information) . The one or more processing devices may include one or more modules executing some or all of the steps of the method in response to instructions stored electronically on an electronic storage medium. The one or more processing modules may include one or more devices configured through hardware, firmware, and/or software to be specifically designed for execution of one or more of the steps of the method.
The method 400 is described under the case as shown in Fig. 3.
The method 400 starts from step 401, at which the apparatus 200, provided for example in a vehicle, starts up and begins to receive image data from the Camera A. Then, the received image data is transmitted to the segmental image determination unit 202. The apparatus 200 may start up upon being requested by the operator, or may start up automatically when the vehicle is moving. The apparatus 200 may be powered by a storage cell within the vehicle.
At step 402, the apparatus 200 receives image data from the Camera B and transmits the received image data to the posture of view determination unit 201 or to the vibration compensation unit 204.
At step 403, which is optional, the vibration compensation unit 204 receives inertial data obtained by the IMU 103, receives the image data of the Camera B, and performs motion compensation on the image captured by the Camera B. The vibration compensation unit 204 further transmits the image data that is subjected to the compensation to the posture of view determination unit 201 or the posture of view compensation unit 203.
At step 404, the posture of view determination unit 201 receives the image data of the Camera B, determines the posture of the operator’s view based on the image data received from the Camera B, and transmits the data about the posture of the operator’s view to the segmental image determination unit 202. Alternatively, if the motion compensation is performed, the posture of view determination unit 201 receives the image data of the operator’s head from the vibration compensation unit 204. In particular, the posture of view determination unit 201 determines the position of view point and a direction of line of sight of the operator based on the image captured by the Camera B. The process of the determination has been discussed previously and is not repeated here.
At step 405, the segmental image determination unit 202 receives the image data captured by the Camera A and the data about the posture of the operator’s view, and determines the segmental image to be extracted from the image captured by the Camera A based on the posture of the operator’s view. The process of the determination is as follows. First, a virtual spherical surface is created. Second, the image captured by the Camera A is projected on the virtual spherical surface. Third, a virtual cone representing the posture of the operator’s view is made to intersect with the virtual spherical surface, so as to determine the image that should be seen by the operator. Fourth, the left side A Pillar is virtualized to be a rectangle. It can be understood that the shape of the virtualized rectangle will change according to the operator’s view. Assuming that the view point is a light source, the light rays extend within the above cone. A portion on the virtual spherical surface which the assumed light rays cannot reach due to the blocking of the rectangle representing the A pillar may be determined as the segmental image to be extracted.
At step 406, the extraction and transformation unit 205 receives the data regarding the segmental image to be extracted, extracts the segmental image determined at the step 405, performs a transformation on the extracted segmental image, and causes the projector to project the transformed image onto the A pillar.
Regarding the step 406, it may further optionally comprise extracting relevant information from the extracted segmental image, transforming the relevant information, and causing the projector to project the transformed relevant information. In addition, it may further comprise superimposing an alerting message on the transformed relevant information. Alternatively, the alerting message may be superimposed onto the extracted segmental image. 
At step 407, which is optional, the posture of view compensation unit 203 receives the image data captured by the Camera B or the image after being compensated, and determines whether the operator’s view changes. If yes, the method returns to step 405 to re-determine the segmental image to be extracted. Otherwise, the method ends.
Fig. 5 illustrates a general hardware environment 500 wherein the present disclosure is applicable in accordance with an exemplary embodiment of the present disclosure.
With reference to FIG. 5, a computing device 500, which is an example of the hardware device that may be applied to the aspects of the present disclosure, will now be described. The computing device 500 may be any machine configured to perform processing and/or calculations, and may be, but is not limited to, a work station, a server, a desktop computer, a laptop computer, a tablet computer, a personal data assistant, a smart phone, an on-vehicle computer or any combination thereof. The aforementioned apparatus 200 may be wholly or at least partially implemented by the computing device 500 or a similar device or system.
The computing device 500 may comprise elements that are connected with or in communication with a bus 502, possibly via one or more interfaces. For example, the computing device 500 may comprise the bus 502, and one or more processors 504, one or more input devices 506 and one or more output devices 508. The one or more processors 504 may be any kinds of processors, and may comprise but are not limited to one or more general-purpose processors and/or one or more special-purpose processors (such as special processing chips) . The input devices 506 may be any kinds of devices that can input information to the computing device, and may comprise but are not limited to a mouse, a keyboard, a touch screen, a microphone and/or a remote control. The output devices 508 may be any kinds of devices that can present information, and may comprise but are not limited to a display, a speaker, a video/audio output terminal, a vibrator and/or a printer. The computing device 500 may also comprise or be connected with non-transitory storage devices 510 which may be any storage devices that are non-transitory and can implement data stores, and may comprise but are not limited to a disk drive, an optical storage device, a solid-state storage, a floppy disk, a flexible disk, a hard disk, a magnetic tape or any other magnetic medium, a compact disc or any other optical medium, a ROM (Read Only Memory) , a RAM (Random Access Memory) , a cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer may read data, instructions and/or code. The non-transitory storage devices 510 may be detachable from an interface. The non-transitory storage devices 510 may have data/instructions/code for implementing the methods and steps which are described above. The computing device 500 may also comprise a communication device 512. The communication device 512 may be any kinds of device or system that can enable communication with external apparatuses and/or with a network, and may comprise but are not limited to a modem, a network card, an infrared communication device, a wireless communication device and/or a chipset such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, cellular communication facilities and/or the like.
When the computing device 500 is used as an on-vehicle device, it may also be connected to external devices, for example, a GPS receiver, sensors for sensing different environmental data such as an acceleration sensor, a wheel speed sensor, a gyroscope and so on. In this way, the computing device 500 may, for example, receive location data and sensor data indicating the travelling situation of the vehicle. When the computing device 500 is used as an on-vehicle device, it may also be connected to other facilities (such as an engine system, a wiper, an anti-lock braking system or the like) for controlling the travelling and operation of the vehicle.
In addition, the non-transitory storage device 510 may have map information and software elements so that the processor 504 may perform route guidance processing. In addition, the output device 506 may comprise a display for displaying the map, the location mark of the vehicle and also images indicating the travelling situation of the vehicle. The output device 506 may also comprise a speaker or interface with an ear phone for audio guidance.
The bus 502 may include but is not limited to Industry Standard Architecture (ISA) bus, Micro Channel Architecture (MCA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, and Peripheral Component Interconnect (PCI) bus. Particularly, for an on-vehicle device, the bus 502 may also include a Controller Area Network (CAN) bus or other architectures designed for application on an automobile.
The computing device 500 may also comprise a working memory 514, which may be any kind of working memory that may store instructions and/or data useful for the working of the processor 504, and may comprise but is not limited to a random access memory and/or a read-only memory device.
Software elements may be located in the working memory 514, including but not limited to an operating system 516, one or more application programs 518, drivers and/or other data and codes. Instructions for performing the methods and steps described above may be comprised in the one or more application programs 518, and the units of the aforementioned apparatus 200 may be implemented by the processor 504 reading and executing the instructions of the one or more application programs 518. More specifically, the posture of view determination unit 201 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform step 404. In addition, the segmental image determination unit 202 of the aforementioned apparatus 200 may, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform step 405. Other units of the aforementioned apparatus 200 may also, for example, be implemented by the processor 504 when executing an application 518 having instructions to perform one or more of the aforementioned respective steps. The executable codes or source codes of the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as the storage device (s) 510 described above, and may be read into the working memory 514 possibly with compilation and/or installation. The executable codes or source codes of the instructions of the software elements may also be downloaded from a remote location.
Those skilled in the art will clearly appreciate from the above embodiments that the present disclosure may be implemented by software with the necessary hardware, or by hardware, firmware and the like. Based on such understanding, the embodiments of the present disclosure may be embodied in part in a software form. The computer software may be stored in a readable storage medium such as a floppy disk, a hard disk, an optical disk or a flash memory of the computer. The computer software comprises a series of instructions to make the computer (e.g., a personal computer, a service station or a network terminal) execute the method or a part thereof according to the respective embodiments of the present disclosure.
The present disclosure being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the present disclosure, and all such modifications as would be obvious to those skilled in the art are intended to be included within the scope of the following claims.

Claims (17)

  1. A system for a vehicle, characterized in comprising:
    a projecting device,
    a first image capturing device configured to capture an image comprising an image of a blind spot,
    a second image capturing device configured to capture an image of head of an operator of the vehicle, and
    a controller configured to
    determine a posture of the operator’s view based on the captured image of head of the operator,
    determine a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and
    extract the segmental image, perform a transformation on the extracted segmental image, and cause the projecting device to project the transformed segmental image onto an internal component of the vehicle.
  2. The system of claim 1, wherein the controller is further configured to:
    determine an occurrence of a change of the posture of the operator’s view based on the captured image of head of operator, and
    re-determine the segmental image to be extracted in response to the occurrence of the change.
  3. The system of claim 2, wherein the change of the posture of the operator’s view comprises at least one of a change of the position of view point and a change of the direction of line of sight.
  4. The system of claim 1, further comprising:
    an inertial measurement device, configured to measure inertial data of the second image capturing device, and
    wherein the controller is further configured to perform motion compensation on the  image captured by the second image capturing device according to the inertial data.
  5. The system of claim 1, wherein determining a posture of the operator’s view comprises: calculating values of distances among a plurality of feature points within the captured image of head of the operator, determining the position of view point according to the calculated values and based on a position of the second image capturing device, and, determining the direction of line of sight according to the calculated values.
  6. The system of claim 1, wherein determining a segmental image to be extracted comprises: creating a virtual screen surrounding the vehicle, projecting the image captured by the first image capturing device onto the virtual screen, and, determining a portion of the image projected onto the virtual screen, which cannot be seen in the direction of line of sight of the operator from the position of view point of the operator due to blocking of the internal component, as the segmental image to be extracted.
  7. The system of claim 1, wherein the controller is further configured to further extract relevant information from the extracted segmental image, perform the transformation on the extracted segmental image, and cause the projecting device to project the relevant information onto the internal component of the vehicle, wherein the relevant information relates to driving safety.
  8. The system of claim 1, wherein the internal component is an A pillar, and wherein the first image capturing device is provided on an outside surface of the A pillar, and the projecting device and the second image capturing device are provided at a central console of the vehicle.
  9. A computer-implemented method for a vehicle, characterized in comprising:
    receiving, from a first image capturing device, an image comprising an image of a blind spot,
    receiving, from a second image capturing device, an image of head of an operator of the vehicle,
    determining a posture of the operator’s view based on the captured image of head of operator,
    determining a segmental image to be extracted from the image captured by the first  image capturing device based on the posture of the operator’s view, wherein segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and
    extracting the segmental image, performing a transformation on the extracted segmental image, and causing a projecting device to project the transformed segmental image onto an internal component of the vehicle.
  10. The method of claim 9, further comprising:
    determining an occurrence of a change of the posture of the operator’s view based on the captured image of head of operator, and
    re-determining the segmental image to be extracted in response to the occurrence of the change.
  11. The method of claim 10, wherein the change of the posture of the operator’s view comprises at least one of a change of the position of view point and a change of the direction of line of sight.
  12. The method of claim 9, further comprising:
    receiving, from an inertial measurement device, inertial data of the second image capturing device, and
    performing motion compensation on the image captured by the second image capturing device according to the inertial data.
  13. An apparatus for a vehicle, characterized in comprising:
    a memory configured to store a series of computer executable instructions; and
    a processor configured to execute said series of computer executable instructions,
    wherein said series of computer executable instructions, when executed by the processor, cause the processor to perform operations of:
    receiving, from a first image capturing device, an image comprising an image of a blind spot,
    receiving, from a second image capturing device, an image of head of an operator of the vehicle,
    determining a posture of the operator’s view based on the captured image of head of operator,
    determining a segmental image to be extracted from the image captured by the first image capturing device based on the posture of the operator’s view, wherein the segmental image corresponds to the image of the blind spot, and the posture of the operator’s view comprises a position of view point and a direction of line of sight of the operator, and
    extracting the segmental image, performing a transformation on the extracted segmental image, and causing a projecting device to project the transformed segmental image onto an internal component of the vehicle.
  14. The apparatus of claim 13, wherein said series of computer executable instructions, when executed by the processor, cause the processor to further perform operations of:
    determining an occurrence of a change of the posture of the operator’s view based on the captured image of head of operator of the vehicle, and
    re-determining the segmental image to be extracted in response to the occurrence of the change.
  15. The apparatus of claim 14, wherein the change of the posture of the operator’s view comprises at least one of a change of the position of view point and a change of the direction of line of sight.
  16. The apparatus of claim 13, wherein said series of computer executable instructions, when executed by the processor, cause the processor to further perform operations of:
    receiving, from an inertial measurement device, inertial data of the second image capturing device, and
    performing motion compensation on the image captured by the second image capturing device according to the inertial data.
  17. A non-transitory computer readable medium having instructions stored thereon that, when executed by a processor, cause the processor to perform the method of any one of claims 9-12.
PCT/CN2015/086439 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium WO2017024458A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201580081763.6A CN107848460A (en) 2015-08-10 2015-08-10 For the system of vehicle, method and apparatus and computer-readable medium
PCT/CN2015/086439 WO2017024458A1 (en) 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium
EP15900656.8A EP3334621A4 (en) 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2015/086439 WO2017024458A1 (en) 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium

Publications (1)

Publication Number Publication Date
WO2017024458A1 true WO2017024458A1 (en) 2017-02-16

Family

ID=57982868

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/086439 WO2017024458A1 (en) 2015-08-10 2015-08-10 System, method and apparatus for vehicle and computer readable medium

Country Status (3)

Country Link
EP (1) EP3334621A4 (en)
CN (1) CN107848460A (en)
WO (1) WO2017024458A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110349472B (en) * 2018-04-02 2021-08-06 北京五一视界数字孪生科技股份有限公司 Virtual steering wheel and real steering wheel butt joint method in virtual driving application
CN108340836B (en) * 2018-04-13 2024-07-16 华域视觉科技(上海)有限公司 Automobile A column display system
CN111731187A (en) * 2020-06-19 2020-10-02 杭州视为科技有限公司 Automobile A-pillar blind area image display system and method

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006027563A1 (en) 2004-09-06 2006-03-16 Mch Technology Limited View enhancing system for a vehicle
US20060132613A1 (en) 2004-12-17 2006-06-22 Samsung Electronics Co., Ltd Optical image stabilizer for camera lens assembly
JP2006290304A (en) 2005-04-14 2006-10-26 Aisin Aw Co Ltd Method and device for displaying outside of vehicle
US20070081262A1 (en) 2005-10-07 2007-04-12 Nissan Motor Co., Ltd. Blind spot image display apparatus and method thereof for vehicle
CN101106703A (en) * 2006-07-12 2008-01-16 爱信艾达株式会社 Driving support method and apparatus
CN101277432A (en) * 2007-03-26 2008-10-01 爱信艾达株式会社 Driving support method and driving support apparatus
US20090086019A1 (en) * 2007-10-02 2009-04-02 Aisin Aw Co., Ltd. Driving support device, driving support method and computer program
CN103987578A (en) * 2011-10-14 2014-08-13 大陆汽车系统公司 Virtual display system for a vehicle
WO2015015928A1 (en) * 2013-07-30 2015-02-05 Toyota Jidosha Kabushiki Kaisha Driving assist device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1988488A1 (en) * 2007-05-03 2008-11-05 Sony Deutschland Gmbh Method for detecting moving objects in a blind spot region of a vehicle and blind spot detection device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006027563A1 (en) 2004-09-06 2006-03-16 Mch Technology Limited View enhancing system for a vehicle
US20060132613A1 (en) 2004-12-17 2006-06-22 Samsung Electronics Co., Ltd Optical image stabilizer for camera lens assembly
JP2006290304A (en) 2005-04-14 2006-10-26 Aisin Aw Co Ltd Method and device for displaying outside of vehicle
US20070081262A1 (en) 2005-10-07 2007-04-12 Nissan Motor Co., Ltd. Blind spot image display apparatus and method thereof for vehicle
CN101106703A (en) * 2006-07-12 2008-01-16 爱信艾达株式会社 Driving support method and apparatus
CN101277432A (en) * 2007-03-26 2008-10-01 爱信艾达株式会社 Driving support method and driving support apparatus
US20090086019A1 (en) * 2007-10-02 2009-04-02 Aisin Aw Co., Ltd. Driving support device, driving support method and computer program
CN103987578A (en) * 2011-10-14 2014-08-13 大陆汽车系统公司 Virtual display system for a vehicle
WO2015015928A1 (en) * 2013-07-30 2015-02-05 Toyota Jidosha Kabushiki Kaisha Driving assist device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3334621A4

Also Published As

Publication number Publication date
EP3334621A4 (en) 2019-03-13
EP3334621A1 (en) 2018-06-20
CN107848460A (en) 2018-03-27

Similar Documents

Publication Publication Date Title
US9563981B2 (en) Information processing apparatus, information processing method, and program
JP6524422B2 (en) Display control device, display device, display control program, and display control method
US8536995B2 (en) Information display apparatus and information display method
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
US10843686B2 (en) Augmented reality (AR) visualization of advanced driver-assistance system
JP5962594B2 (en) In-vehicle display device and program
JP6304628B2 (en) Display device and display method
JP6695049B2 (en) Display device and display control method
US20180056861A1 (en) Vehicle-mounted augmented reality systems, methods, and devices
JPWO2016067574A1 (en) Display control apparatus and display control program
US20130289875A1 (en) Navigation apparatus
TWI522257B (en) Vehicle safety system and operating method thereof
US10922976B2 (en) Display control device configured to control projection device, display control method for controlling projection device, and vehicle
KR102052405B1 (en) A display control device using vehicles and user motion recognition and its method of operation
US11626028B2 (en) System and method for providing vehicle function guidance and virtual test-driving experience based on augmented reality content
KR20190078664A (en) Method and apparatus for displaying content
CN111033607A (en) Display system, information presentation system, control method for display system, program, and moving object
WO2017024458A1 (en) System, method and apparatus for vehicle and computer readable medium
JP6186905B2 (en) In-vehicle display device and program
JP7127565B2 (en) Display control device and display control program
KR101611167B1 (en) Driving one's view support device
US10876853B2 (en) Information presentation device, information presentation method, and storage medium
JP2018206210A (en) Collision accident suppression system and collision accident suppression method
JP7143728B2 (en) Superimposed image display device and computer program
JP7338632B2 (en) Display device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15900656

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2015900656

Country of ref document: EP