WO2019218789A1 - Method, system, and computer-readable storage medium for demonstrating the functions of a vehicle-mounted head-up display device - Google Patents
- Publication number
- WO2019218789A1 (PCT/CN2019/080960)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- virtual
- area
- image
- coordinate system
- display area
- Prior art date
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60K—ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
- B60K35/00—Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
- B60K35/20—Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
- B60K35/21—Output arrangements using visual output, e.g. blinking lights or matrix displays
- B60K35/23—Head-up displays [HUD]
- B60K35/28—Output arrangements characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
- B60K35/90—Calibration of instruments, e.g. setting initial or reference parameters; Testing of instruments, e.g. detecting malfunction
- B60K2360/00—Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
- B60K2360/16—Type of output information
- B60K2360/20—Optical features of instruments
- B60K2360/21—Optical features of instruments using cameras
- B60R—VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
- B60R1/00—Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
- B60R16/00—Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for
- B60R16/02—Electric or fluid circuits specially adapted for vehicles: electric constitutive elements
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/00—Scenes; Scene-specific elements
- G06V20/20—Scenes; Scene-specific elements in augmented reality scenes
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
Definitions
- Embodiments of the present disclosure relate to the field of vehicle head-up display technology, and more particularly to a method for demonstrating the function of a vehicle head-up display device.
- The head-up display (HUD) is increasingly used in automobiles.
- A vehicle-mounted head-up display is typically mounted on the dashboard and uses the principle of optical reflection to project important driving information, such as speed, engine RPM, fuel consumption, tire pressure, navigation data, and information from external smart devices, onto the windshield in real time as text and/or icons.
- The image projected on the windshield is generally at the same height as the driver's eyes, so the driver can read the driving information without lowering his or her head, avoiding distraction from the road ahead. Driving safety is thereby greatly improved.
- Embodiments of the present disclosure provide a method, system, and computer-readable storage medium for demonstrating the functions of a vehicle-mounted head-up display device.
- According to one aspect, a method for demonstrating a function of a vehicle-mounted head-up display device comprises: setting a projection area for displaying a first image, wherein the projection area coincides with a virtual display area of the vehicle-mounted head-up display device; obtaining a relative position and a relative size relationship between the projection area and the virtual display area; projecting the first image in the projection area, the first image including one or more objects to be marked; capturing a second image, wherein the second image is an image of the projection area including the first image; generating a virtual recognition area in the second image based on the relative position and the relative size relationship; determining whether an object in the second image is located in the virtual recognition area and, if so, determining a first position of the object within the virtual recognition area; determining, based on the first position, a display position in the virtual display area of a virtual marker representing the object; and displaying the virtual marker at the display position in the virtual display area.
- In some embodiments, generating the virtual recognition area in the second image includes making the relative position and relative size relationship between the second image and the virtual recognition area the same as the relative position and relative size relationship between the projection area and the virtual display area.
- In some embodiments, the projection area and the virtual display area are both rectangular, the bottom edge of the projection area coincides with the bottom edge of the virtual display area, and the center of the bottom edge of the projection area coincides with the center of the bottom edge of the virtual display area.
- In some embodiments, obtaining the relative position and relative size relationship between the projection area and the virtual display area includes: establishing a first coordinate system such that its horizontal axis coincides with the bottom edge of the projection area and its vertical axis passes through the center of the bottom edge of the projection area; determining the coordinates of the first and second endpoints of the top edge of the projection area in the first coordinate system; determining the coordinates of the third and fourth endpoints of the top edge of the virtual display area in the first coordinate system; and determining the ratio r1 of the length of the top edge of the projection area to the length of the top edge of the virtual display area, and the ratio r2 of the length of the side of the projection area to the length of the side of the virtual display area.
- In some embodiments, the coordinates of the third and fourth endpoints of the top edge of the virtual display area are:
- X_C = -s*tan(α/2), Y_C = 2s*tan(β/2);
- X_D = s*tan(α/2), Y_D = 2s*tan(β/2);
- where X_C and Y_C are the abscissa and ordinate of the third endpoint, X_D and Y_D are the abscissa and ordinate of the fourth endpoint, s is the distance from the center of the window of the vehicle-mounted head-up display device to the projection area, and α and β are the horizontal and vertical fields of view of the device, respectively.
- In some embodiments, generating the virtual recognition area in the second image includes: establishing a second coordinate system such that its horizontal axis coincides with the bottom edge of the second image and its vertical axis passes through the center of the bottom edge of the second image; determining the pixel distance d1 between the two endpoints of the top edge of the second image and the pixel distance d2 between the two endpoints of the side of the second image; and defining the length of the top edge of the virtual recognition area as d1/r1 and the length of its side as d2/r2, so that the coordinates of the two endpoints of the top edge of the virtual recognition area in the second coordinate system are (-d1/2r1, d2/r2) and (d1/2r1, d2/r2), respectively, and the coordinates of the two endpoints of its bottom edge are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- In some embodiments, the ratio of the abscissa, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the length of the top edge of the virtual display area is equal to the ratio of the abscissa of the first position in the second coordinate system to the length of the top edge of the virtual recognition area; and the ratio of the ordinate, in the first coordinate system, of the display position to the length of the side of the virtual display area is equal to the ratio of the ordinate of the first position in the second coordinate system to the length of the side of the virtual recognition area.
- the first image is an image that displays road conditions.
- the object includes at least one of a vehicle, a pedestrian, or a road sign.
- According to another aspect, a system for demonstrating functions of a vehicle-mounted head-up display device includes at least one processor and at least one memory storing computer program code, wherein the computer program code, when executed by the at least one processor, causes the system at least to: set a projection area for displaying a first image, wherein the projection area coincides with a virtual display area of the vehicle-mounted head-up display device; obtain a relative position and a relative size relationship between the projection area and the virtual display area; project the first image in the projection area, the first image including one or more objects to be marked; capture a second image, wherein the second image is an image of the projection area including the first image; generate a virtual recognition area in the second image based on the relative position and the relative size relationship; determine whether an object in the second image is located in the virtual recognition area and, if so, determine a first position of the object within the virtual recognition area; determine, based on the first position, a display position in the virtual display area of a virtual marker representing the object; and display the virtual marker at the display position in the virtual display area.
- In some embodiments, the system generates the virtual recognition area in the second image by making the relative position and relative size relationship between the second image and the virtual recognition area the same as the relative position and relative size relationship between the projection area and the virtual display area.
- In some embodiments, the projection area and the virtual display area are both rectangular, the bottom edge of the projection area coincides with the bottom edge of the virtual display area, and the center of the bottom edge of the projection area coincides with the center of the bottom edge of the virtual display area.
- In some embodiments, the system obtains the relative position and relative size relationship between the projection area and the virtual display area by: establishing a first coordinate system such that its horizontal axis coincides with the bottom edge of the projection area and its vertical axis passes through the center of the bottom edge of the projection area; determining the coordinates of the first and second endpoints of the top edge of the projection area in the first coordinate system; determining the coordinates of the third and fourth endpoints of the top edge of the virtual display area in the first coordinate system; and determining the ratio r1 of the length of the top edge of the projection area to the length of the top edge of the virtual display area, and the ratio r2 of the length of the side of the projection area to the length of the side of the virtual display area.
- In some embodiments, the coordinates of the third and fourth endpoints of the top edge of the virtual display area are:
- X_C = -s*tan(α/2), Y_C = 2s*tan(β/2);
- X_D = s*tan(α/2), Y_D = 2s*tan(β/2);
- where X_C and Y_C are the abscissa and ordinate of the third endpoint, X_D and Y_D are the abscissa and ordinate of the fourth endpoint, s is the distance from the center of the window of the vehicle-mounted head-up display device to the projection area, and α and β are the horizontal and vertical fields of view of the device, respectively.
- In some embodiments, the system generates the virtual recognition area by: establishing a second coordinate system such that its horizontal axis coincides with the bottom edge of the second image and its vertical axis passes through the center of the bottom edge of the second image; determining the pixel distance d1 between the two endpoints of the top edge of the second image and the pixel distance d2 between the two endpoints of the side of the second image; and defining the length of the top edge of the virtual recognition area as d1/r1 and the length of its side as d2/r2, so that the coordinates of the two endpoints of the top edge of the virtual recognition area in the second coordinate system are (-d1/2r1, d2/r2) and (d1/2r1, d2/r2), respectively, and the coordinates of the two endpoints of its bottom edge are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- In some embodiments, the ratio of the abscissa, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the length of the top edge of the virtual display area is equal to the ratio of the abscissa of the first position in the second coordinate system to the length of the top edge of the virtual recognition area; and the ratio of the ordinate, in the first coordinate system, of the display position to the length of the side of the virtual display area is equal to the ratio of the ordinate of the first position in the second coordinate system to the length of the side of the virtual recognition area.
- the first image is an image that displays road conditions.
- the object includes at least one of a vehicle, a pedestrian, or a road sign.
- According to another aspect, a computer-readable storage medium stores computer program code.
- When executed by a processor, the computer program code implements the steps of the method for demonstrating the functions of the vehicle-mounted head-up display device in any of the embodiments of the present disclosure.
- FIG. 1 shows a schematic diagram of an example system for demonstrating the functions of a vehicle-mounted head-up display device;
- FIG. 2 shows a flow chart of a method for demonstrating the functionality of an on-board heads-up display device, provided in accordance with an exemplary embodiment of the present disclosure
- FIG. 3 illustrates a positional relationship between a projection area and a virtual display area of a vehicle-mounted HUD in an exemplary embodiment of the present disclosure
- FIG. 4 illustrates a flow chart for obtaining a relative position and a relative size relationship between a projection area and a virtual display area in an exemplary embodiment of the present disclosure
- FIG. 5 illustrates a positional relationship between a second image and a virtual recognition area in an exemplary embodiment of the present disclosure
- FIG. 6 illustrates a flow diagram of generating a virtual identification region in a second image in an exemplary embodiment of the present disclosure
- FIG. 7 shows a schematic block diagram of a system 70 for demonstrating the functionality of an on-board head-up display device.
- The following figures and examples are not meant to limit the scope of the present disclosure.
- Where a particular component of the present disclosure can be implemented partly or wholly using known components (or methods or processes), only those portions of such known components that are necessary for understanding the present disclosure are described; detailed descriptions of the other portions are omitted so as not to obscure the disclosure.
- The various embodiments are intended to encompass present and future equivalents of the components referred to herein.
- In addition to displaying driving information, a head-up display device may also merge the actual road conditions with its virtual image (i.e., virtual-real fusion).
- When a peripheral object (for example, a vehicle, a pedestrian, or a road sign) appears, a virtual marker representing that object is displayed at the position in the virtual image of the head-up display device corresponding to the object's real position, thereby fusing the virtual image with the actual driving scene.
- One example of realizing virtual-real fusion is to install a camera on the vehicle.
- The camera captures video of the surrounding road conditions and transmits it in real time to a processing unit of the HUD, which can identify the position of an object in each frame of the video.
- A virtual marker is then placed at the position in the HUD's virtual image corresponding to the object, so that the marker's position matches the object's actual position on the road.
- Because the captured video closely reflects the real road conditions, the position of the virtual marker in the virtual image can be well matched to the position of the object on the real road.
- FIG. 1 shows a schematic diagram of an example system for demonstrating the functions of a vehicle-mounted head-up display device.
- As shown in FIG. 1, the system of this example may include: a simulated vehicle 102 having a front windshield 101; a camera 103 disposed on the front windshield 101; a projector 104 disposed in front of the front windshield 101; a projection screen 105 disposed at a certain distance in front of the front windshield 101 to receive images projected by the projector 104; and a vehicle-mounted HUD 106 configured to project a virtual image of driving information and road-condition information onto the windshield 101.
- the projection screen 105 can be set to overlap with the virtual display area 107 of the virtual image of the HUD (as shown in FIG. 1).
- the “virtual display area” may refer to an area in which the virtual image is located when the driver views the virtual image of the HUD through the front windshield 101 .
- The projection screen 105 cannot be made very large, so the video image projected by the projector 104 cannot fully reproduce the road-condition information.
- As a result, the position of the virtual marker of an object identified in the virtual image may not match the position of the object on the projection screen 105.
- Such a mismatch may take two forms: the driver observes that a vehicle on the projection screen 105 has clearly left the field of view of the HUD's virtual image, yet the virtual marker corresponding to that object still appears in the virtual image; or the driver observes that a pedestrian on the projection screen 105 has entered the field of view of the virtual image, yet no virtual marker identifying that object appears there. Either form of mismatch seriously degrades the driving experience.
- To address this, the present disclosure provides a method for demonstrating the functions of a vehicle-mounted head-up display device that matches the virtual marker displayed in the virtual image to the actual position of the object it represents, thereby improving the driving experience.
- FIG. 2 shows a flow chart of a method for demonstrating the functionality of an on-board heads-up display device, provided in accordance with an exemplary embodiment of the present disclosure. As shown in FIG. 2, the method includes the following steps:
- S202: Set a projection area for displaying a first image, wherein the projection area coincides with the virtual display area of the vehicle-mounted head-up display device;
- S204: Obtain a relative position and a relative size relationship between the projection area and the virtual display area;
- S206: Project the first image in the projection area, the first image including one or more objects to be marked;
- S208: Capture a second image, wherein the second image is an image of the projection area including the first image;
- S210: Generate a virtual recognition area in the second image based on the relative position and the relative size relationship;
- S212: Determine whether an object in the second image is located in the virtual recognition area and, if so, determine a first position of the object within the virtual recognition area;
- S214: Determine, based on the first position, a display position in the virtual display area of a virtual marker representing the object;
- S216: Display the virtual marker at the display position in the virtual display area.
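For illustration only (not part of the disclosure), steps S210-S216 can be sketched in miniature. All names here are hypothetical; object detection is assumed to have already produced object centers (x', y') in the second coordinate system, and the proportional mapping follows the equal-ratio rule described in this disclosure:

```python
def demo_steps(objects_px, d1, d2, r1, r2, top_cd, side_len):
    """Sketch of S210-S216: build the virtual recognition area inside the
    captured second image, keep only objects inside it, and map each one to a
    display position in the virtual display area.

    objects_px : detected object centers (x', y') in the second coordinate system
    d1, d2     : pixel lengths of the second image's top edge and side
    r1, r2     : projection-area / virtual-display-area size ratios
    top_cd, side_len : top-edge length and side length of the virtual display area
    """
    # S210: the recognition area spans [-d1/(2*r1), d1/(2*r1)] x [0, d2/r2]
    half_top = d1 / (2 * r1)
    height = d2 / r2
    marks = []
    for (x, y) in objects_px:
        # S212: keep only objects located inside the virtual recognition area
        if -half_top <= x <= half_top and 0 <= y <= height:
            # S214: equal-ratio mapping into the virtual display area
            marks.append((x / (2 * half_top) * top_cd, y / height * side_len))
    return marks  # S216 would then render these markers
```

For instance, with a 100x50-pixel frame, r1 = 2, r2 = 1, a display area of width 2 and height 1, an object at (10, 25) maps inside the display area while one at (40, 10) falls outside the recognition area and is ignored.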
- In step S202, the projection area 108 for displaying the first image 110 is set so that it coincides with the virtual display area 107 of the vehicle-mounted head-up display device 106.
- The coincidence may be partial.
- The coincidence can be understood to mean that the distance between the projection area 108 and the driver's eyes is approximately equal to the distance between the virtually displayed image and the eyes (the image distance). It will be appreciated that those skilled in the art cannot achieve absolutely equal distances, so a certain error is tolerated.
- The first image 110 is an image showing road conditions.
- Specifically, the first image may be a video of the surrounding road conditions captured (e.g., in advance or in real time) by a camera mounted on a vehicle during real driving, and may include one or more frames.
- The first image 110 may include one or more objects around the driven vehicle, such as a vehicle, a pedestrian, or a road sign.
- The projection area 108 is the area onto which the road-condition information is projected by the projector 104; it may be equal to or smaller than the area occupied by the projection screen.
- In step S204, a relative positional relationship and a relative size relationship between the projection area (solid box) 108 and the virtual display area (dashed box) 107 can be obtained.
- FIG. 3 illustrates a positional relationship between the projection area 108 and the virtual display area 107 of the in-vehicle HUD in the exemplary embodiment of the present disclosure.
- The projection area 108 and the virtual display area 107 are both rectangular.
- The bottom edge of the projection area 108 coincides with the bottom edge of the virtual display area 107, and the center of the bottom edge of the projection area 108 coincides with the center of the bottom edge of the virtual display area 107.
- FIG. 4 shows a flow chart for obtaining a relative position and relative size relationship between a projection area 108 and a virtual display area 107 in an exemplary embodiment of the present disclosure.
- the relative position and relative size relationship between the projection area 108 and the virtual display area 107 can be obtained by steps S2042-S2048.
- In step S2042, a first coordinate system is established (as shown in FIG. 3) such that its horizontal axis (X-axis) coincides with the bottom edge of the projection area 108 (and thus also with the bottom edge of the virtual display area 107), and its vertical axis (Y-axis) passes through the center of the bottom edge of the projection area 108.
- In step S2044, the coordinates of the first endpoint A and the second endpoint B of the top edge of the projection area 108 in the first coordinate system are determined, namely A(-a/2, b) and B(a/2, b),
- where a and b are the length of the bottom edge and the height of the projection area 108, respectively.
- In step S2046, the coordinates of the third endpoint C and the fourth endpoint D of the top edge of the virtual display area 107 in the first coordinate system are determined; the relative positional relationship between the projection area 108 and the virtual display area 107 can then be obtained from the coordinates of the first endpoint A, the second endpoint B, the third endpoint C, and the fourth endpoint D.
- In step S2048, from the coordinates of the third endpoint C and the fourth endpoint D, the length of the top edge CD of the virtual display area 107 is obtained as 2s*tan(α/2); the ratio r1 of the length of the top edge AB of the projection area 108 to the length of the top edge CD, and likewise the side-length ratio r2, can thus be determined.
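The geometry of steps S2042-S2048 can be sketched as follows. This is an illustrative sketch only: the height 2s*tan(β/2) of the virtual display area is an assumption made by analogy with the top-edge length 2s*tan(α/2), and the function and variable names are not from the disclosure:

```python
import math

def virtual_display_geometry(a, b, s, alpha, beta):
    """Compute the top-edge endpoints C and D of the virtual display area in
    the first coordinate system (X along the shared bottom edge, Y through its
    center), plus the size ratios r1 and r2.

    a, b        : bottom-edge length and height of the projection area (same units as s)
    s           : distance from the HUD window center to the projection area
    alpha, beta : horizontal and vertical fields of view, in radians
    """
    half_w = s * math.tan(alpha / 2)   # half of the top-edge length 2s*tan(alpha/2)
    h = 2 * s * math.tan(beta / 2)     # assumed height of the virtual display area
    C = (-half_w, h)
    D = (half_w, h)
    r1 = a / (2 * half_w)  # top-edge ratio: projection area / virtual display area
    r2 = b / h             # side ratio:     projection area / virtual display area
    return C, D, r1, r2
```

For example, with a = 4, b = 2, s = 1 and 90-degree fields of view (tan(45°) = 1), the virtual display area spans width 2 and height 2, giving r1 = 2 and r2 = 1.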
- In step S206, the first image 110, pre-recorded during actual road travel, may be projected onto the projection area 108.
- In the illustrated example, the first image 110 projected in the projection area 108 contains three vehicles (11, 12, 13), of which vehicle 13 enters the virtual display area 107.
- In step S208, a second image 111 containing the projection area 108 with the first image 110 is captured.
- Here, "the projection area 108 including the first image 110" refers to the first image 110 as projected on the projection area 108.
- The second image 111 of the projection area 108 can be captured by the camera 103 in the system shown in FIG. 1, and is shown as the solid box in FIG. 5.
- The second image 111 may contain the same objects as the first image 110, for example, the three vehicles 11', 12', and 13'.
- In step S210, the virtual recognition area 112 is generated in the second image 111 based on the relative position and relative size relationship obtained in step S204.
- The virtual recognition area 112 is the area in the second image used to determine whether a virtual marker representing an object needs to be displayed in the virtual display area 107; it is shown as the dashed box in FIG. 5.
- The virtual recognition area 112 is generated in the second image 111 such that the relative position and relative size relationship between the second image 111 and the virtual recognition area 112 are the same as those between the projection area 108 and the virtual display area 107.
- In other words, the bottom edge of the second image 111 coincides with the bottom edge of the virtual recognition area 112,
- the center of the bottom edge of the second image 111 coincides with the center of the bottom edge of the virtual recognition area 112,
- the ratio of the top edge (and/or bottom edge) of the second image 111 to the top edge (and/or bottom edge) of the virtual recognition area 112 is equal to r1, and the ratio of the side of the second image 111 to the side of the virtual recognition area 112 is equal to r2.
- FIG. 6 illustrates a flow diagram of generating a virtual identification region in a second image in an exemplary embodiment of the present disclosure. As shown in FIG. 6, a virtual recognition area may be generated in the second image through steps S2102-S2106.
- In step S2102, a second coordinate system is established (as shown in FIG. 5) such that its horizontal axis X' coincides with the bottom edge of the second image 111 and its vertical axis Y' passes through the center of the bottom edge of the second image 111.
- In step S2104, the pixel distance d1 between the two endpoints A' and B' of the top edge of the second image 111 and the pixel distance d2 between the two endpoints of the side of the second image 111 are determined.
- the pixel distance may be represented by the number of pixels spanned by a line segment on the second image.
- the length of the top edge of the virtual identification area 112 is defined as d1/r1 and the side length of the virtual identification area 112 is defined as d2/r2.
- the coordinates of the two end points C' and D' of the top edge of the virtual recognition area 112 in the second coordinate system can thus be obtained, namely C'(-d1/2r1, d2/r2) and D'(d1/2r1, d2/r2), and the coordinates of the two end points of the bottom edge of the virtual recognition area 112 in the second coordinate system are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- the position and size of the virtual recognition area 112 in the second image 111 can be determined.
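Steps S2102-S2106 above can be sketched as follows; the function name and the sample values in the test are illustrative, not from the disclosure:

```python
def virtual_recognition_area(d1, d2, r1, r2):
    """Corner coordinates of the virtual recognition area in the second
    coordinate system (origin at the center of the bottom edge of the
    second image, horizontal axis along that bottom edge).

    d1, d2 -- pixel lengths of the top edge / side of the second image
    r1, r2 -- length ratios of the projection area to the virtual
              display area (top edge and side, respectively)
    """
    top = d1 / r1    # length of the top edge of the recognition area
    side = d2 / r2   # length of the side of the recognition area
    c_prime = (-top / 2, side)        # C': top-left end point
    d_prime = (top / 2, side)         # D': top-right end point
    bottom_left = (-top / 2, 0)
    bottom_right = (top / 2, 0)
    return c_prime, d_prime, bottom_left, bottom_right
```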
- step 212 it is determined whether the object in the second image 111 is located in the virtual recognition area 112, and in the case where the object is located in the virtual recognition area 112, the first position of the object within the virtual recognition area 112 is determined.
- the second image 111 may be processed to identify an object in the second image 111 and determine whether the identified object is located in the virtual recognition area 112; if so, the first position of the object within the virtual recognition area 112 is acquired.
- for example, the second image 111 contains three vehicles 11', 12' and 13'. If it is judged that only the vehicle 13' is located in the virtual recognition area, then only the position of the vehicle 13' is acquired (assuming that the position coordinates of the vehicle 13' are (e, f)), and the vehicles 11' and 12' are not processed.
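The judgment above reduces to a point-in-rectangle test in the second coordinate system, since the recognition area's bottom edge is centered on the origin. A minimal sketch with illustrative names:

```python
def in_recognition_area(pos, top_len, side_len):
    """True if a detected object's position (x, y), expressed in the
    second coordinate system, lies inside the rectangular virtual
    recognition area of the given top-edge and side lengths."""
    x, y = pos
    return -top_len / 2 <= x <= top_len / 2 and 0 <= y <= side_len
```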
- next, the display position, in the virtual display area 107, of a virtual marker 113 representing the object is determined.
- the display position of the virtual marker 113 in the virtual display area 107 may be determined such that the ratio of the abscissa of the display position in the first coordinate system to the top edge of the virtual display area 107 is equal to the ratio of the abscissa of the first position in the second coordinate system to the top edge of the virtual recognition area 112, and the ratio of the ordinate of the display position in the first coordinate system to the side of the virtual display area 107 is equal to the ratio of the ordinate of the first position in the second coordinate system to the side of the virtual recognition area 112.
- the relationship between the coordinates of the display position of the virtual mark in the virtual display area 107 and the coordinates of the object in the virtual recognition area 112 is not limited to the embodiment described herein, and this does not constitute a limitation of the present disclosure.
- for example, approximate coordinates of the display position may first be determined such that the ratio of the abscissa of the approximate coordinates to the top edge of the virtual display area 107 is equal to the ratio of the abscissa of the first position to the top edge of the virtual recognition area 112, and the ratio of the ordinate of the approximate coordinates to the side of the virtual display area 107 is equal to the ratio of the ordinate of the first position to the side of the virtual recognition area 112. A position within a predetermined range of the approximate coordinates can then be used as the display position.
- suppose the lengths of the top edge and the side of the virtual display area 107 are u and v, respectively (as shown in FIG. 3), the lengths of the top edge and the side of the virtual recognition area 112 are g and h, respectively, and the coordinates of the first position of the object in the virtual recognition area 112 are (e, f); then the abscissa and the ordinate of the display position of the virtual marker 113 in the virtual display area 107 can be determined to be e/g·u and f/h·v, respectively.
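The proportional mapping in this example can be written directly; a minimal sketch with illustrative names:

```python
def display_position(first_pos, g, h, u, v):
    """Map an object's first position (e, f), given in the virtual
    recognition area of top edge g and side h, to the display position
    of its virtual marker in the virtual display area of top edge u
    and side v, keeping the coordinate ratios equal on both axes."""
    e, f = first_pos
    return (e / g * u, f / h * v)
```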
- the virtual mark 113 is displayed at the display position in the virtual display area 107.
- This virtual mark 113 can be represented by an arrow as shown in FIG.
- the shapes of the virtual markers used herein are merely exemplary and are not intended to limit the scope of the present disclosure to that particular shape. Those skilled in the art can select other suitable virtual marker shapes as needed.
- the virtual image is usually a magnified image projected on the front windshield after the image source is reflected multiple times.
- therefore, in order to display the virtual marker at the display position in the virtual display area, it is necessary to determine the corresponding position of the virtual marker in the image source.
- the present disclosure does not specifically limit how to determine the position of the virtual mark in the image source according to the position of the virtual mark to be marked in the virtual display area.
- a person skilled in the art can select an appropriate method to determine the position of the virtual marker in the image source according to the structural composition of the HUD and the number of times the image source forms a virtual image.
- in this way, the virtual marker 113 can be displayed in real time at a position corresponding to the object in the virtual display area 107 of the HUD, so that the virtual marker 113 matches the actual position of the object, achieving better virtual-real fusion and improving the driving experience.
- FIG. 7 shows a schematic block diagram of a system 70 for demonstrating the functionality of an on-board head-up display device.
- the system can include at least one processor 71 and at least one memory 72 storing computer program code.
- when the computer program code is executed by the processor 71, the system 70 can be caused to perform the method steps described above.
- specifically, the system may: set a projection area for displaying the first image, wherein the projection area coincides with a virtual display area of the on-board head-up display device; obtain a relative position and relative size relationship between the projection area and the virtual display area; project the first image in the projection area, the first image including one or more objects to be marked; capture a second image, wherein the second image is an image containing the projection area of the first image; generate a virtual recognition area in the second image based on the relative position and the relative size relationship; determine whether an object in the second image is located in the virtual recognition area, and in a case where the object is located in the virtual recognition area, determine a first position of the object within the virtual recognition area; based on the first position, determine a display position, in the virtual display area, of a virtual marker representing the object; and display the virtual marker at the display position in the virtual display area.
- Processor 71 may be, for example, a central processing unit (CPU), a microprocessor, a digital signal processor (DSP), a processor based on a multi-core processor architecture, or the like.
- Memory 72 can be any type of memory implemented using data storage techniques including, but not limited to, random access memory, read only memory, semiconductor based memory, flash memory, disk storage, and the like.
- system 70 can also include an input device 73, such as a keyboard or mouse, for setting related parameters.
- system 70 may further include an output device 73, such as a display, for outputting the function demonstration result of the on-board head-up display device or related driving information, such as speed, engine revolutions, fuel consumption, tire pressure, navigation, and information from external smart devices.
- in some embodiments, when the computer program code is executed by the at least one processor 71, the system 70 generates the virtual recognition area in the second image by making the relative position and relative size relationship between the second image and the virtual recognition area the same as the relative position and relative size relationship between the projection area and the virtual display area.
- the projection area and the virtual display area are rectangular in shape, a bottom edge of the projection area coincides with a bottom edge of the virtual display area, and a bottom edge of the projection area The center coincides with the center of the bottom edge of the virtual display area.
- in some embodiments, the system 70 obtains the relative position and relative size relationship between the projection area and the virtual display area by: establishing a first coordinate system such that the horizontal axis of the first coordinate system coincides with the bottom edge of the projection area and the vertical axis of the first coordinate system passes through the center of the bottom edge of the projection area; determining the coordinates, in the first coordinate system, of the first end point and the second end point of the top edge of the projection area; determining the coordinates, in the first coordinate system, of the third end point and the fourth end point of the top edge of the virtual display area; and determining a ratio r1 of the length of the top edge of the projection area to the length of the top edge of the virtual display area, and a ratio r2 of the length of a side of the projection area to the length of a side of the virtual display area.
- the coordinates of the third end point and the fourth end point of the top edge of the virtual display area are given by: X_C = -s*tan(α/2), Y_C = 2s*tan(β/2); X_D = s*tan(α/2), Y_D = 2s*tan(β/2); where:
- X_C and Y_C are the abscissa and the ordinate of the third end point, respectively;
- X_D and Y_D are the abscissa and the ordinate of the fourth end point, respectively;
- s is the distance from the center of the window of the on-board head-up display device to the projection area;
- α and β are the horizontal field of view and the vertical field of view of the on-board head-up display device, respectively.
- in some embodiments, the computer program code, when executed by the at least one processor 71, causes the system 70 to generate the virtual recognition area in the second image by: establishing a second coordinate system such that the horizontal axis of the second coordinate system coincides with the bottom edge of the second image and the vertical axis of the second coordinate system passes through the center of the bottom edge of the second image; determining a pixel distance d1 between the two end points of the top edge of the second image and a pixel distance d2 between the two end points of a side of the second image; and defining the length of the top edge of the virtual recognition area as d1/r1 and the length of a side of the virtual recognition area as d2/r2, whereby the coordinates, in the second coordinate system, of the two end points of the top edge of the virtual recognition area are (-d1/2r1, d2/r2) and (d1/2r1, d2/r2), respectively, and the coordinates of the two end points of the bottom edge of the virtual recognition area are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- in some embodiments, the ratio of the abscissa, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the top edge of the virtual display area is equal to the ratio of the abscissa, in the second coordinate system, of the first position to the top edge of the virtual recognition area; and the ratio of the ordinate, in the first coordinate system, of the display position to the side of the virtual display area is equal to the ratio of the ordinate, in the second coordinate system, of the first position to the side of the virtual recognition area.
- the first image is an image that displays road conditions.
- the object comprises at least one of a vehicle, a pedestrian, or a road sign.
- a computer readable storage medium is also provided that stores computer program code.
- the computer program code, when executed by a processor, implements the method steps for demonstrating the function of the on-board head-up display device as illustrated in FIGS. 2, 4 and 6.
Claims (17)
- A method for demonstrating the function of an on-board head-up display device, comprising: setting a projection area for displaying a first image, wherein the projection area coincides with a virtual display area of the on-board head-up display device; obtaining a relative position and relative size relationship between the projection area and the virtual display area; projecting the first image in the projection area, the first image including one or more objects to be marked; capturing a second image, wherein the second image is an image containing the projection area of the first image; generating a virtual recognition area in the second image based on the relative position and the relative size relationship; determining whether an object in the second image is located in the virtual recognition area, and in a case where the object is located in the virtual recognition area, determining a first position of the object within the virtual recognition area; based on the first position, determining a display position, in the virtual display area, of a virtual marker representing the object; and displaying the virtual marker at the display position in the virtual display area.
- The method according to claim 1, wherein generating the virtual recognition area in the second image comprises: making the relative position and relative size relationship between the second image and the virtual recognition area the same as the relative position and relative size relationship between the projection area and the virtual display area.
- The method according to claim 2, wherein the projection area and the virtual display area are rectangular in shape, a bottom edge of the projection area coincides with a bottom edge of the virtual display area, and the center of the bottom edge of the projection area coincides with the center of the bottom edge of the virtual display area, and wherein obtaining the relative position and relative size relationship between the projection area and the virtual display area comprises: establishing a first coordinate system such that the horizontal axis of the first coordinate system coincides with the bottom edge of the projection area and the vertical axis of the first coordinate system passes through the center of the bottom edge of the projection area; determining the coordinates, in the first coordinate system, of a first end point and a second end point of the top edge of the projection area; determining the coordinates, in the first coordinate system, of a third end point and a fourth end point of the top edge of the virtual display area; and determining a ratio r1 of the length of the top edge of the projection area to the length of the top edge of the virtual display area, and a ratio r2 of the length of a side of the projection area to the length of a side of the virtual display area.
- The method according to claim 3, wherein the coordinates of the third end point and the fourth end point of the top edge of the virtual display area are given by: X_C = -s*tan(α/2), Y_C = 2s*tan(β/2); X_D = s*tan(α/2), Y_D = 2s*tan(β/2); where X_C and Y_C are the abscissa and ordinate of the third end point, respectively; X_D and Y_D are the abscissa and ordinate of the fourth end point, respectively; s is the distance from the center of the window of the on-board head-up display device to the projection area; and α and β are the horizontal field of view and the vertical field of view of the on-board head-up display device, respectively.
- The method according to claim 4, wherein generating the virtual recognition area in the second image comprises: establishing a second coordinate system such that the horizontal axis of the second coordinate system coincides with the bottom edge of the second image and the vertical axis of the second coordinate system passes through the center of the bottom edge of the second image; determining a pixel distance d1 between the two end points of the top edge of the second image and a pixel distance d2 between the two end points of a side of the second image; and defining the length of the top edge of the virtual recognition area as d1/r1 and the length of a side of the virtual recognition area as d2/r2, whereby the coordinates, in the second coordinate system, of the two end points of the top edge of the virtual recognition area are (-d1/2r1, d2/r2) and (d1/2r1, d2/r2), respectively, and the coordinates, in the second coordinate system, of the two end points of the bottom edge of the virtual recognition area are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- The method according to claim 5, wherein the ratio of the abscissa, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the top edge of the virtual display area is equal to the ratio of the abscissa, in the second coordinate system, of the first position to the top edge of the virtual recognition area; and the ratio of the ordinate, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the side of the virtual display area is equal to the ratio of the ordinate, in the second coordinate system, of the first position to the side of the virtual recognition area.
- The method according to any one of claims 1 to 6, wherein the first image is an image displaying road conditions.
- The method according to claim 7, wherein the object comprises at least one of a vehicle, a pedestrian, or a road sign.
- A system for demonstrating the function of an on-board head-up display device, comprising: at least one processor; and at least one memory storing computer program code, wherein the computer program code, when executed by the at least one processor, causes the system to perform at least the following operations: setting a projection area for displaying a first image, wherein the projection area coincides with a virtual display area of the on-board head-up display device; obtaining a relative position and relative size relationship between the projection area and the virtual display area; projecting the first image in the projection area, the first image including one or more objects to be marked; capturing a second image, wherein the second image is an image containing the projection area of the first image; generating a virtual recognition area in the second image based on the relative position and the relative size relationship; determining whether an object in the second image is located in the virtual recognition area, and in a case where the object is located in the virtual recognition area, determining a first position of the object within the virtual recognition area; based on the first position, determining a display position, in the virtual display area, of a virtual marker representing the object; and displaying the virtual marker at the display position in the virtual display area.
- The system according to claim 9, wherein the computer program code, when executed by the at least one processor, causes the system to generate the virtual recognition area in the second image by: making the relative position and relative size relationship between the second image and the virtual recognition area the same as the relative position and relative size relationship between the projection area and the virtual display area.
- The system according to claim 10, wherein the projection area and the virtual display area are rectangular in shape, a bottom edge of the projection area coincides with a bottom edge of the virtual display area, and the center of the bottom edge of the projection area coincides with the center of the bottom edge of the virtual display area, and wherein the computer program code, when executed by the at least one processor, causes the system to obtain the relative position and relative size relationship between the projection area and the virtual display area by: establishing a first coordinate system such that the horizontal axis of the first coordinate system coincides with the bottom edge of the projection area and the vertical axis of the first coordinate system passes through the center of the bottom edge of the projection area; determining the coordinates, in the first coordinate system, of a first end point and a second end point of the top edge of the projection area; determining the coordinates, in the first coordinate system, of a third end point and a fourth end point of the top edge of the virtual display area; and determining a ratio r1 of the length of the top edge of the projection area to the length of the top edge of the virtual display area, and a ratio r2 of the length of a side of the projection area to the length of a side of the virtual display area.
- The system according to claim 11, wherein the coordinates of the third end point and the fourth end point of the top edge of the virtual display area are given by: X_C = -s*tan(α/2), Y_C = 2s*tan(β/2); X_D = s*tan(α/2), Y_D = 2s*tan(β/2); where X_C and Y_C are the abscissa and ordinate of the third end point, respectively; X_D and Y_D are the abscissa and ordinate of the fourth end point, respectively; s is the distance from the center of the window of the on-board head-up display device to the projection area; and α and β are the horizontal field of view and the vertical field of view of the on-board head-up display device, respectively.
- The system according to claim 12, wherein the computer program code, when executed by the at least one processor, causes the system to generate the virtual recognition area in the second image by: establishing a second coordinate system such that the horizontal axis of the second coordinate system coincides with the bottom edge of the second image and the vertical axis of the second coordinate system passes through the center of the bottom edge of the second image; determining a pixel distance d1 between the two end points of the top edge of the second image and a pixel distance d2 between the two end points of a side of the second image; and defining the length of the top edge of the virtual recognition area as d1/r1 and the length of a side of the virtual recognition area as d2/r2, whereby the coordinates, in the second coordinate system, of the two end points of the top edge of the virtual recognition area are (-d1/2r1, d2/r2) and (d1/2r1, d2/r2), respectively, and the coordinates, in the second coordinate system, of the two end points of the bottom edge of the virtual recognition area are (-d1/2r1, 0) and (d1/2r1, 0), respectively.
- The system according to claim 13, wherein the ratio of the abscissa, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the top edge of the virtual display area is equal to the ratio of the abscissa, in the second coordinate system, of the first position to the top edge of the virtual recognition area; and the ratio of the ordinate, in the first coordinate system, of the display position of the virtual marker in the virtual display area to the side of the virtual display area is equal to the ratio of the ordinate, in the second coordinate system, of the first position to the side of the virtual recognition area.
- The system according to any one of claims 9 to 14, wherein the first image is an image displaying road conditions.
- The system according to claim 15, wherein the object comprises at least one of a vehicle, a pedestrian, or a road sign.
- A computer-readable storage medium storing computer program code which, when executed by a processor, implements the method steps of the method for demonstrating the function of an on-board head-up display device according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/758,132 US11376961B2 (en) | 2018-05-14 | 2019-04-02 | Method and system for demonstrating function of vehicle-mounted heads up display, and computer-readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810453847.3A CN108528341B (zh) | 2018-05-14 | 2018-05-14 | 用于演示车载抬头显示装置的功能的方法 |
CN201810453847.3 | 2018-05-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019218789A1 true WO2019218789A1 (zh) | 2019-11-21 |
Family
ID=63477359
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/080960 WO2019218789A1 (zh) | 2018-05-14 | 2019-04-02 | 用于演示车载抬头显示装置的功能的方法、系统及计算机可读存储介质 |
Country Status (3)
Country | Link |
---|---|
US (1) | US11376961B2 (zh) |
CN (1) | CN108528341B (zh) |
WO (1) | WO2019218789A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108528341B (zh) | 2018-05-14 | 2020-12-25 | 京东方科技集团股份有限公司 | 用于演示车载抬头显示装置的功能的方法 |
CN110136519B (zh) * | 2019-04-17 | 2022-04-01 | 阿波罗智联(北京)科技有限公司 | 基于arhud导航的模拟系统和方法 |
CN113434614B (zh) * | 2020-03-23 | 2024-08-06 | 北京四维图新科技股份有限公司 | 地图数据关联方法、装置及电子设备 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08160846A (ja) * | 1994-12-07 | 1996-06-21 | Mitsubishi Heavy Ind Ltd | 航空機用訓練支援装置 |
US20110183301A1 (en) * | 2010-01-27 | 2011-07-28 | L-3 Communications Corporation | Method and system for single-pass rendering for off-axis view |
CN104504960A (zh) * | 2014-11-29 | 2015-04-08 | 江西洪都航空工业集团有限责任公司 | 一种虚拟平显在飞行训练器epx视景系统中的应用方法 |
US9251715B2 (en) * | 2013-03-15 | 2016-02-02 | Honda Motor Co., Ltd. | Driver training system using heads-up display augmented reality graphics elements |
JP2017102331A (ja) * | 2015-12-03 | 2017-06-08 | トヨタ自動車株式会社 | 車両用ヘッドアップディスプレイの評価支援装置 |
CN107107834A (zh) * | 2014-12-22 | 2017-08-29 | 富士胶片株式会社 | 投影型显示装置、电子设备、驾驶者视觉辨认图像共享方法以及驾驶者视觉辨认图像共享程序 |
CN108528341A (zh) * | 2018-05-14 | 2018-09-14 | 京东方科技集团股份有限公司 | 用于演示车载抬头显示装置的功能的方法 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0922245A (ja) * | 1995-07-05 | 1997-01-21 | Mitsubishi Heavy Ind Ltd | 模擬視界装置を有するシミュレータ |
US9106811B2 (en) * | 2011-07-21 | 2015-08-11 | Imax Corporation | Generalized normalization for image display |
CN102427541B (zh) * | 2011-09-30 | 2014-06-25 | 深圳创维-Rgb电子有限公司 | 一种显示立体图像的方法及装置 |
CN205232340U (zh) * | 2015-12-28 | 2016-05-11 | 韩美英 | 一种虚拟成像动态演示系统 |
KR102576654B1 (ko) * | 2016-10-18 | 2023-09-11 | 삼성전자주식회사 | 전자 장치 및 그의 제어 방법 |
2018
- 2018-05-14 CN CN201810453847.3A patent/CN108528341B/zh active Active
2019
- 2019-04-02 US US16/758,132 patent/US11376961B2/en active Active
- 2019-04-02 WO PCT/CN2019/080960 patent/WO2019218789A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
US11376961B2 (en) | 2022-07-05 |
CN108528341A (zh) | 2018-09-14 |
US20210260997A9 (en) | 2021-08-26 |
US20200282833A1 (en) | 2020-09-10 |
CN108528341B (zh) | 2020-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6304628B2 (ja) | 表示装置および表示方法 | |
WO2019218789A1 (zh) | 用于演示车载抬头显示装置的功能的方法、系统及计算机可读存储介质 | |
JP5962594B2 (ja) | 車載表示装置およびプログラム | |
WO2016067574A1 (ja) | 表示制御装置及び表示制御プログラム | |
US10817729B2 (en) | Dynamic driving metric output generation using computer vision methods | |
GB2548718B (en) | Virtual overlay system and method for displaying a representation of a road sign | |
US10789762B2 (en) | Method and apparatus for estimating parameter of virtual screen | |
JP2018022105A (ja) | ヘッドアップディスプレイ装置、表示制御方法及び制御プログラム | |
KR20180022374A (ko) | 운전석과 보조석의 차선표시 hud와 그 방법 | |
KR20230069893A (ko) | 컨텐츠를 표시하기 위한 장치 및 방법 | |
US20190187475A1 (en) | Multi-image head up display (hud) | |
KR20190067366A (ko) | 차량 및 그 제어 방법 | |
US11468591B2 (en) | Scene attribute annotation of complex road typographies | |
CN115119045A (zh) | 基于车载多摄像头的视频生成方法、装置及车载设备 | |
JP5697405B2 (ja) | 表示装置及び表示方法 | |
US20210160437A1 (en) | Image processing apparatus and image transformation method | |
JP6365361B2 (ja) | 情報表示装置 | |
JP6448806B2 (ja) | 表示制御装置、表示装置及び表示制御方法 | |
JP2020187396A (ja) | 矢印方向特定装置及び矢印方向特定方法 | |
CN113837064B (zh) | 道路识别方法、系统和可读存储介质 | |
KR102071720B1 (ko) | 차량용 레이다 목표 리스트와 비전 영상의 목표물 정합 방법 | |
CN111457936A (zh) | 辅助驾驶方法、辅助驾驶系统、计算设备及存储介质 | |
KR101637298B1 (ko) | 증강 현실을 이용한 차량용 헤드 업 디스플레이 장치 | |
JP6481596B2 (ja) | 車両用ヘッドアップディスプレイの評価支援装置 | |
JP2019172070A (ja) | 情報処理装置、移動体、情報処理方法、及びプログラム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19804012 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19804012 Country of ref document: EP Kind code of ref document: A1 |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 10/05/2021) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 19804012 Country of ref document: EP Kind code of ref document: A1 |