WO2024051691A1 - Adjustment method, apparatus, and vehicle - Google Patents

Adjustment method, apparatus, and vehicle

Info

Publication number
WO2024051691A1
WO2024051691A1 · PCT/CN2023/117021
Authority
WO
WIPO (PCT)
Prior art keywords
user
head
display device
eye position
position information
Prior art date
Application number
PCT/CN2023/117021
Other languages
English (en)
French (fr)
Inventor
查敬芳
杨京寰
张代齐
宋宪玺
吴钢
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2024051691A1

Classifications

    • G02B27/0101 Head-up displays characterised by optical features
    • G02B27/0179 Display position adjusting means not related to the information to be displayed
    • G02B2027/0138 Head-up displays comprising image capture systems, e.g. camera
    • G02B2027/0161 Head-up displays characterised by mechanical features, characterised by the relative positioning of the constitutive elements
    • G02B2027/0163 Electric or electronic control thereof
    • B60K35/00 Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60N2/02 Seats specially adapted for vehicles, the seat or part thereof being movable, e.g. adjustable

Definitions

  • Embodiments of the present application relate to the field of smart cockpits, and more specifically, to an adjustment method, device and vehicle.
  • Embodiments of the present application provide an adjustment method, an apparatus, and a vehicle, which help to improve the intelligence of the vehicle and also help to improve the user experience.
  • Vehicles in this application may include road vehicles, water vehicles, air vehicles, industrial equipment, agricultural equipment, or entertainment equipment, etc.
  • The carrier can be a vehicle in a broad sense: a means of transportation (such as a commercial vehicle, a passenger car, a motorcycle, an aircraft, a train, etc.), an industrial vehicle (such as a forklift, a trailer, a tractor, etc.), an engineering vehicle (such as an excavator, a bulldozer, a crane, etc.), agricultural equipment (such as a lawn mower, a harvester, etc.), amusement equipment, a toy vehicle, etc.
  • the embodiments of this application do not specifically limit the types of vehicles.
  • the vehicle may be an airplane, a ship, or other means of transportation.
  • An adjustment method is provided, which can be applied to the cockpit of a vehicle.
  • the cockpit includes a first area, and a head-up display device is provided in the cockpit.
  • The head-up display device is used to display information to a user in the first area.
  • The method includes: when a user is detected in the first area, obtaining an image collected by a camera in the cockpit; determining the user's eye position information based on the image; and, when the user's first operation is detected, adjusting the head-up display device according to the eye position information.
  • an image can be obtained through a camera in the cockpit and the user's eye position information can be determined from the image.
  • The head-up display device can be adjusted according to the eye position information. In this way, the user does not need to manually adjust the head-up display device, which reduces the user's learning cost and avoids tedious operations, helping to improve the intelligence of the vehicle and the user experience.
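The claimed flow (detect a user in the first area → acquire cockpit-camera images → determine eye position → adjust the HUD) can be sketched as follows. This is an illustrative sketch only: `EyePosition`, the frame format, and `detect_eye_position` are hypothetical stand-ins for a perception pipeline that the application does not specify.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EyePosition:
    x: float  # lateral offset in the cabin frame (assumed metres)
    y: float  # longitudinal offset
    z: float  # eye height

def detect_eye_position(frame) -> Optional[EyePosition]:
    """Placeholder for a face/eye-landmark model run on a camera frame."""
    landmarks = frame.get("eye_landmarks")  # assumed pre-computed landmarks
    if landmarks is None:
        return None
    return EyePosition(*landmarks)

def adjust_hud(frames, user_in_first_area: bool) -> Optional[EyePosition]:
    """Return the eye position the HUD is adjusted to, or None if no adjustment."""
    if not user_in_first_area:
        return None  # images are only acquired while a user occupies the area
    for frame in frames:  # images collected by the cockpit camera
        eye = detect_eye_position(frame)
        if eye is not None:
            return eye  # a real system would now drive the HUD toward this position
    return None
```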
  • Acquiring images captured by a camera in the cockpit includes: when a user is detected in the first area, acquiring images of that user captured by a camera in the cockpit.
  • The eye position information may be one or more pieces of eye position information.
  • The eye position information may be determined based on an image acquired at a certain moment, or based on images acquired within a certain time period, or it may be multiple pieces of eye position information determined based on images acquired within a certain time period.
  • When the eye position information is a single piece of eye position information, the head-up display device can be adjusted according to that piece; when the eye position information is multiple pieces of eye position information, the head-up display device is adjusted according to the plurality of pieces.
  • the eye position information may include the user's eye position.
  • Taking a motor vehicle as an example, acquiring images collected by the camera in the cockpit when a user is detected in the first area includes: when a user is detected in the first area, the vehicle is in the park gear, and the head-up display device is turned on, acquiring the images collected by the camera in the cockpit.
  • Detecting that a user exists in the first area includes: detecting that the first area changes from having no user to having a user.
  • Adjusting the head-up display device according to the eye position information includes: adjusting the head-up display device according to the eye position information when the user's first operation is detected, for example, when the user's operation of depressing the brake pedal is detected.
  • In this way, the adjustment of the head-up display device is combined with a habitual operation of the user: when it is detected that the user depresses the brake pedal, the head-up display device is adjusted according to the user's eye position information, so that the adjustment is completed without the user being aware of it. This reduces the user's learning cost, avoids tedious operations, and helps to improve the user experience.
  • Adjusting the head-up display device according to the eye position information includes: adjusting the head-up display device according to the eye position information when the user's first operation is detected, for example, when the user's operation of shifting from the park gear to another gear is detected.
  • In this way, the adjustment of the head-up display device is combined with a habitual operation of the user: when the user's gear-shifting operation is detected, the head-up display device is adjusted according to the user's eye position information, so that the adjustment is completed without the user being aware of it. This reduces the user's learning cost, avoids tedious operations, and helps to improve the user experience.
  • Detecting the user's operation of depressing the brake pedal includes: detecting that the opening degree of the brake pedal is greater than or equal to a preset threshold.
  • detecting the user's operation of adjusting the gear from the parking gear to other gears includes: detecting the user's operation of adjusting the gear from the parking gear to the forward gear.
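The two example trigger conditions above (brake-pedal opening degree reaching a preset threshold, or shifting out of park) can be sketched as simple predicates. The 15% threshold and the gear labels are assumptions for illustration; the application gives no concrete values.

```python
BRAKE_OPENING_THRESHOLD = 0.15  # hypothetical: 15% of full pedal travel

def brake_trigger(opening: float) -> bool:
    """True when the pedal opening degree reaches the preset threshold."""
    return opening >= BRAKE_OPENING_THRESHOLD

def gear_trigger(prev_gear: str, gear: str) -> bool:
    """True when the gear changes from park to another gear (e.g. P -> D)."""
    return prev_gear == "P" and gear != "P"

def first_operation_detected(opening: float, prev_gear: str, gear: str) -> bool:
    """Either habitual operation counts as the 'first operation'."""
    return brake_trigger(opening) or gear_trigger(prev_gear, gear)
```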
  • Determining the user's eye position information based on the image includes: determining the eye position information based on the images acquired within a first preset duration, where the end time of the first preset duration is the time when the first operation is detected.
  • The eye position information determined in this way is used to adjust the head-up display device. This improves the accuracy of the eye position information, and thus of the adjustment, so that the height of the virtual image presented by the head-up display device better matches the user's habits.
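One way to realise "eye position information from images acquired within a first preset duration ending at the first operation" is a rolling buffer of timestamped samples that is averaged over the window at trigger time. The 2-second window and the averaging rule are illustrative assumptions, not details from the application.

```python
from collections import deque

WINDOW_S = 2.0  # hypothetical "first preset duration"

class EyePositionBuffer:
    """Rolling buffer of (timestamp, eye_height) samples from cockpit frames."""

    def __init__(self, window_s: float = WINDOW_S):
        self.window_s = window_s
        self.samples = deque()

    def add(self, t: float, eye_height: float) -> None:
        self.samples.append((t, eye_height))

    def eye_height_at_trigger(self, trigger_t: float) -> float:
        """Average eye height over [trigger_t - window, trigger_t]."""
        in_window = [h for (t, h) in self.samples
                     if trigger_t - self.window_s <= t <= trigger_t]
        if not in_window:
            raise ValueError("no eye-position samples in the preset window")
        return sum(in_window) / len(in_window)
```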
  • determining the user's eye position information based on the image includes: determining the eye position information based on the image acquired when the first operation is detected.
  • The first operation can be a preset gesture of the user; or a voice instruction of the user, where the voice instruction is used to instruct that the head-up display device be adjusted; or the user tapping a certain control on the vehicle display screen; or the user pressing a button on the steering wheel; or the user toggling a stalk.
  • The method further includes: when the user's first operation is detected, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is being adjusted.
  • When the user's first operation is detected, the head-up display device can also be controlled to display prompt information. In this way, the user clearly knows that the head-up display device is currently being adjusted, which helps to improve the user experience.
  • the method further includes: stopping acquiring the image when the adjustment of the head-up display device is completed.
  • acquisition of the image may be stopped when the adjustment of the head-up display device is completed. In this way, it helps to save the computing resources of the vehicle and avoid the waste of resources caused by the constant adjustment of the head-up display device.
  • Stopping acquiring the image when the adjustment of the head-up display device is completed can also be understood as stopping acquiring the image of the user when the adjustment is completed, or as stopping adjusting the head-up display device when the adjustment is completed.
  • Acquiring the image collected by the camera in the cockpit includes: acquiring the image when a user is detected in the first area and the user's operation of adjusting the seat in the first area is detected. Adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: adjusting the head-up display device according to changes in the eye position information while it is detected that the user is adjusting the seat.
  • While the user adjusts the seat, the vehicle can adjust the head-up display device in real time according to changes in the user's eye position information. In this way, the process of adjusting the seat is combined with the process of adjusting the head-up display device: the user completes the adjustment of the head-up display device while adjusting the seat, which avoids tedious operations, helps to improve the intelligence of the vehicle, and helps to improve the user experience.
  • The method further includes: when it is detected that the user adjusts the seat, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is being adjusted.
  • When it is detected that the user adjusts the seat, the head-up display device can be controlled to display prompt information. In this way, the user clearly knows that the head-up display device is currently being adjusted, as well as the height of the virtual image it presents in different seat states, which makes it easy for the user to complete the adjustment of the head-up display device during the seat adjustment and helps to improve the user experience.
  • the method further includes: stopping the adjustment of the head-up display device when it is detected that the user stops adjusting the seat.
  • the adjustment of the head-up display device can be stopped when the adjustment of the seat is completed. In this way, it helps to save the computing resources of the vehicle and avoid the waste of resources caused by the constant adjustment of the head-up display device.
  • the method further includes: stopping acquiring the image when it is detected that the user stops adjusting the seat.
  • The method further includes: adjusting the head-up display device according to changes in the eye position information within a second preset duration after it is detected that the user stops adjusting the seat.
  • In this way, a period of time is reserved for the user to continue adjusting their sitting posture, during which the head-up display device continues to be adjusted in real time according to changes in the user's eye position information. The adjustment of the head-up display device is thus completed once the seat posture and the user's body posture match the user's habits, which helps to improve the user experience.
  • the method further includes: stopping the adjustment of the head-up display device at the end of the second preset time period.
  • the adjustment of the head-up display device may be stopped at the end of the second preset time period. In this way, it helps to save the computing resources of the vehicle and avoid the waste of resources caused by the constant adjustment of the head-up display device.
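The seat-adjustment behaviour described above (track eye position while the seat moves, keep tracking for a second preset duration after it stops, then stop) can be sketched as a small predicate. The 5-second grace period is an assumed value for illustration.

```python
GRACE_S = 5.0  # hypothetical "second preset duration"

def hud_tracking_active(seat_moving: bool, time_since_seat_stop_s: float) -> bool:
    """Whether the HUD should currently follow changes in eye position."""
    if seat_moving:
        return True  # adjust in real time while the seat is being adjusted
    # after the seat stops, keep tracking briefly so the user can settle,
    # then stop to avoid wasting compute on constant re-adjustment
    return time_since_seat_stop_s <= GRACE_S
```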
  • Determining the user's eye position information based on the image includes: filtering interference data in the image, and determining the eye position information based on the image after the interference data is filtered out.
  • Filtering out interference data in the image when determining the user's eye position information helps to improve the accuracy of the eye position information, and thus the accuracy of the adjustment of the head-up display device.
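As one possible reading of "filtering interference data", outlier samples (for example, a hand briefly covering the face, or a bad detection) can be rejected before estimating the eye position. The median-distance rule below is an assumption; the application does not specify the filtering method.

```python
import statistics

def filter_interference(eye_heights, tol=0.10):
    """Drop eye-height samples further than `tol` metres from the median."""
    med = statistics.median(eye_heights)
    return [h for h in eye_heights if abs(h - med) <= tol]

def eye_height_estimate(eye_heights, tol=0.10):
    """Mean of the samples that survive interference filtering."""
    kept = filter_interference(eye_heights, tol)
    return sum(kept) / len(kept)
```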
  • Adjusting the head-up display device according to the eye position information includes: determining the height of the virtual image presented by the head-up display device according to the eye position information, and adjusting the head-up display device according to that height.
  • determining the height at which the head-up display device presents a virtual image can also be understood as determining the projection height of the head-up display device.
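The mapping from eye height to virtual-image height can be illustrated with simple trigonometry: with the virtual-image plane at a fixed distance ahead of the eye point and a desired look-down angle, the image centre height follows directly. The 7.5 m distance and 5° look-down angle are assumed illustration values, not figures from the application.

```python
import math

IMAGE_DISTANCE_M = 7.5  # assumed distance from eye point to virtual-image plane
LOOK_DOWN_DEG = 5.0     # assumed downward viewing angle to the image centre

def virtual_image_height(eye_height_m: float) -> float:
    """Height of the virtual-image centre above the ground, in metres."""
    drop = IMAGE_DISTANCE_M * math.tan(math.radians(LOOK_DOWN_DEG))
    return eye_height_m - drop
```

A taller eye point yields a higher virtual image, which is the behaviour the adjustment aims for.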
  • The vehicle is an automobile.
  • The first area includes the driver's area or the front-passenger area.
  • an adjustment device is provided.
  • the device is used to adjust a head-up display device in a vehicle cabin.
  • the cabin includes a first area, and the head-up display device is used to display information to a user in the first area.
  • The device includes: a detection unit, configured to detect whether a user exists in the first area; an acquisition unit, configured to acquire images collected by the camera in the cockpit when the detection unit detects that a user exists in the first area; a determination unit, configured to determine the user's eye position information based on the images; and an adjustment unit, configured to adjust the head-up display device according to the eye position information when the detection unit detects the user's first operation.
  • The adjustment unit is configured to: adjust the head-up display device according to the eye position information when the detection unit detects the user's operation of depressing the brake pedal.
  • The adjustment unit is configured to: adjust the head-up display device according to the eye position information when the detection unit detects the user's operation of shifting from the park gear to another gear.
  • The determination unit is configured to: determine the eye position information based on the images acquired within a first preset duration, where the end time of the first preset duration is the time when the first operation is detected.
  • The device further includes: a control unit, configured to control the head-up display device to display prompt information when the detection unit detects the user's first operation, the prompt information being used to prompt the user that the head-up display device is being adjusted.
  • the acquisition unit is further configured to: stop acquiring the image when the adjustment of the head-up display device is completed.
  • The acquisition unit is configured to: acquire the images collected by the camera when the detection unit detects that a user exists in the first area and detects the user's operation of adjusting the seat in the first area. The adjustment unit is configured to adjust the head-up display device according to changes in the eye position information while the detection unit detects that the user is adjusting the seat.
  • The device further includes: a control unit, configured to control the head-up display device to display prompt information when the detection unit detects that the user adjusts the seat, the prompt information being used to prompt the user that the head-up display device is being adjusted.
  • the adjustment unit is further configured to: stop adjusting the head-up display device when the detection unit detects that the user stops adjusting the seat.
  • The adjustment unit is also configured to: adjust the head-up display device according to changes in the eye position information within a second preset duration after the detection unit detects that the user stops adjusting the seat.
  • the adjustment unit is further configured to: stop adjusting the head-up display device when the second preset time period ends.
  • the determination unit is configured to: filter interference data in the image; and determine the eye position information based on the image after filtering the interference data.
  • The determination unit is further configured to: determine the height of the virtual image presented by the head-up display device based on the eye position information; the adjustment unit is configured to adjust the head-up display device according to that height.
  • The vehicle is an automobile.
  • The first area includes the driver's area or the front-passenger area.
  • In a third aspect, an adjustment device is provided, which includes a processing unit and a storage unit, where the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device performs any possible method of the first aspect.
  • In a fourth aspect, an adjustment system is provided, which includes a head-up display device and a computing platform, where the computing platform includes any possible device of the second aspect or the third aspect.
  • A vehicle is provided, which includes any possible device of the second aspect or the third aspect, or includes the adjustment system of the fourth aspect.
  • The vehicle is an automobile.
  • A computer program product is provided, which includes computer program code; when the computer program code is run on a computer, it causes the computer to perform any possible method of the first aspect.
  • All or part of the above computer program code can be stored on a first storage medium, where the first storage medium can be packaged together with the processor or packaged separately from the processor. This is not specifically limited in the embodiments of this application.
  • A computer-readable medium is provided, which stores program code; when the program code is run on a computer, it causes the computer to perform any possible method of the first aspect.
  • Embodiments of the present application further provide a chip system.
  • The chip system includes a processor for calling a computer program or computer instructions stored in a memory, so that the processor performs any possible method of the first aspect.
  • the processor is coupled to the memory through an interface.
  • the chip system further includes a memory, and a computer program or computer instructions are stored in the memory.
  • Embodiments of the present application provide an adjustment method, device and vehicle.
  • The head-up display device can be adjusted according to the eye position information, which reduces the user's learning cost when adjusting the head-up display device and avoids tedious operations during the adjustment, helping to improve the intelligence of the vehicle and the user experience.
  • The head-up display device is adjusted according to the user's eye position information when it is detected that the user has depressed the brake pedal, so that the adjustment of the head-up display device is completed without the user being aware of it.
  • The head-up display device can be adjusted based on the eye position information determined from images captured during a period of time before the user's operation of depressing the brake pedal is detected. This improves the accuracy of the eye position information, and thus of the adjustment, so that the height of the virtual image presented by the head-up display device better matches the user's habits.
  • the user can clearly know that the head-up display device is currently being adjusted, which helps to improve the user's experience.
  • The acquisition of images can be stopped when the adjustment of the head-up display device is completed, which helps to save the computing resources of the vehicle and avoid the waste of resources caused by constantly adjusting the head-up display device.
  • the head-up display device can be adjusted in real time according to changes in the user's eye position information.
  • the process of adjusting the seat is combined with the process of adjusting the head-up display device.
  • The user can complete the adjustment of the head-up display device while adjusting the seat, which avoids tedious operations, helps to improve the intelligence of the vehicle, and helps to improve the user experience.
  • a period of time can be reserved for the user to continue adjusting the user's sitting posture.
  • the head-up display device can also be adjusted in real time according to changes in the user's eye position information.
  • Adjustment of the head-up display device may be stopped at the end of the second preset time period. In this way, it helps to save the computing resources of the vehicle and avoid the waste of resources caused by the constant adjustment of the head-up display device.
  • Figure 1 is a schematic functional block diagram of a vehicle provided by an embodiment of the present application.
  • FIG. 2 is a schematic structural diagram of a windshield-type head-up display W-HUD system provided by an embodiment of the present application.
  • Figure 3 is a set of graphical user interfaces (GUIs) provided by an embodiment of the present application.
  • Figure 4 is another set of GUIs provided by an embodiment of the present application.
  • Figure 5 is another set of GUIs provided by an embodiment of the present application.
  • Figure 6 is another set of GUIs provided by an embodiment of the present application.
  • Figure 7 is a schematic flow chart of the adjustment method provided by the embodiment of the present application.
  • Figure 8 is another schematic flow chart of the adjustment method provided by the embodiment of the present application.
  • Figure 9 is another schematic flow chart of the adjustment method provided by the embodiment of the present application.
  • Figure 10 is a schematic block diagram of an adjustment device provided by an embodiment of the present application.
  • Figure 11 is a schematic block diagram of an adjustment system provided by an embodiment of the present application.
  • Prefixes such as "first" and "second" are used in the embodiments of this application only to distinguish different described objects, and have no limiting effect on the position, order, priority, quantity or content of the described objects.
  • The use of ordinal words and other such prefixes to distinguish described objects does not limit the described objects, nor does it constitute a redundant restriction on them.
  • In the embodiments of this application, "plural" means two or more.
  • FIG. 1 is a functional block diagram of a vehicle 100 provided by an embodiment of the present application.
  • Vehicle 100 may include a perception system 120 , a display device 130 , and a computing platform 150 , where perception system 120 may include one or more sensors that sense information about the environment surrounding vehicle 100 .
  • the sensing system 120 may include a positioning system, and the positioning system may be a global positioning system (GPS), Beidou system, or other positioning systems.
  • the sensing system 120 may also include one or more of an inertial measurement unit (IMU), laser radar, millimeter wave radar, ultrasonic radar, and camera device.
  • the computing platform 150 may include one or more processors, such as processors 151 to 15n (n is a positive integer).
  • the processor is a circuit with signal processing capabilities.
  • In one implementation, the processor may be a circuit with the capability of reading and running instructions, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which can be understood as a type of microprocessor), or a digital signal processor (DSP); in another implementation, the processor can implement certain functions through the logical relationships of hardware circuits, where the logical relationships are fixed or reconfigurable.
  • For example, the processor may be a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field programmable gate array (FPGA).
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • The processor can also be a hardware circuit designed for artificial intelligence, which can be understood as a type of ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU).
  • the computing platform 150 may also include a memory, which is used to store instructions. Some or all of the processors 151 to 15n may call instructions in the memory and execute the instructions to implement corresponding functions.
  • the display device 130 in the cockpit is mainly divided into two categories.
  • the first category is a vehicle-mounted display screen;
  • the second category is a projection display screen, such as a HUD.
  • the vehicle display screen is a physical display screen and an important part of the vehicle infotainment system.
  • There can be multiple display screens in the cockpit, such as a digital instrument display, a central control screen, a display in front of the passenger in the co-driver's seat (also called the front passenger), a display in front of the left rear passenger, and a display in front of the right rear passenger; even a car window can serve as a display.
  • A head-up display, also known as a head-up display system, is mainly used to display driving information such as speed and navigation on a display medium in front of the driver (such as the windshield).
  • HUD includes, for example, combined head-up display (combiner-HUD, C-HUD) system, windshield-type head-up display (windshield-HUD, W-HUD) system, and augmented reality head-up display system (augmented reality HUD, AR-HUD). It should be understood that HUD may also include other types of systems as technology evolves, and this application does not limit this.
  • FIG. 2 shows a schematic structural diagram of a W-HUD system provided by an embodiment of the present application.
  • The system includes a HUD body 210, an image generation unit (PGU) 220, a first reflection component 230 (for example, a plane mirror or a curved mirror), a second reflection component 240 (for example, a plane mirror or a curved mirror), and a windshield 250.
  • The optical imaging principle of the W-HUD system is as follows: the image generated by the PGU is reflected to the human eye through the first reflection component 230, the second reflection component 240 and the windshield 250. Because the optical system increases the optical path length and provides magnification, the user sees a virtual image 260 with depth of field outside the cockpit.
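The "increased optical path and magnification" effect can be illustrated with the thin-mirror equation for a concave mirror whose object (the PGU image) sits inside the focal length, producing a magnified, upright virtual image. The focal length and object distance below are arbitrary illustration values, not parameters of the described system.

```python
def virtual_image(f_m: float, d_o: float):
    """Thin-mirror equation 1/f = 1/d_o + 1/d_i for object distance d_o < f.

    Returns (image distance, magnification); a negative image distance
    means the image is virtual, and positive magnification means upright.
    """
    d_i = 1.0 / (1.0 / f_m - 1.0 / d_o)  # negative => virtual image
    m = -d_i / d_o                       # positive => upright
    return d_i, m
```

For instance, with f = 0.3 m and the PGU image 0.25 m from the mirror, the virtual image appears 1.5 m behind the mirror and is magnified 6x, consistent with the depth-of-field effect described above.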
  • W-HUD systems are widely used in driving assistance, for example to display vehicle speed, warning lights, navigation, and advanced driving assistance system (ADAS) information.
  • Different users may have different heights, so the heights of their eyes may also differ. For example, suppose user A gets in the car and adjusts the virtual image presented by the HUD to height 1. When user B then gets in the car, height 1 of the virtual image does not match the height of user B's eyes, causing user B to see a distorted virtual image or to be unable to see the virtual image at all. User B therefore also needs to adjust the height of the virtual image presented by the HUD (for example, to height 2) through buttons or knobs, so that height 2 matches the height of user B's eyes.
  • Embodiments of the present application provide an adjustment method, device and vehicle that determine the user's eye position information through images collected by a camera in the cockpit, thereby automatically adjusting the HUD based on the eye position information. In this way, the user does not need to manually adjust the HUD, which helps to improve the intelligence of the vehicle and improves the user experience.
  • the above eye position information may include one or more of: the position of the eyes, the position of the area where the eyes are located, or the gaze direction.
  • the area where the user's eyes are located may be a rectangular area, an elliptical area, a circular area, etc. Adjusting the projection position of HUD through eye position information can improve the efficiency and accuracy of HUD adjustment and improve user experience.
  • the above adjustment of the HUD may include adjusting the height of the virtual image presented by the HUD, or adjusting the height of the HUD projection, or adjusting the projection position of the HUD.
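  • As an illustration of the height-matching idea above, the following sketch maps a detected eye height to a matching virtual-image height by linear interpolation between calibration points. The calibration table and all numeric values are assumptions for illustration, not taken from this application.

```python
# Hypothetical eye-height -> virtual-image-height mapping (assumed values).
CALIBRATION = [  # (eye_height_mm, virtual_image_height_mm)
    (1100.0, 820.0),
    (1200.0, 870.0),
    (1300.0, 920.0),
]

def virtual_image_height(eye_height_mm: float) -> float:
    """Interpolate the virtual-image height that matches an eye height."""
    pts = sorted(CALIBRATION)
    # Clamp outside the calibrated range.
    if eye_height_mm <= pts[0][0]:
        return pts[0][1]
    if eye_height_mm >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= eye_height_mm <= x1:
            t = (eye_height_mm - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
```

  • A denser calibration table, or a per-vehicle calibration step, would refine the mapping; the linear form is only a sketch.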
  • Figure 3 shows a set of graphical user interfaces provided by the embodiment of the present application.
  • when it detects that the user is sitting in the seat in the main driving area, the vehicle can collect images through a camera in the cockpit (for example, a camera of the driver monitor system (DMS) or of the cabin monitor system (CMS)); the images include the user's face information.
  • the vehicle can determine the height of the virtual image presented by the HUD based on the eye position in the face information.
  • a GUI as shown in (b) of FIG. 3 may be displayed.
  • detecting that the user sits on a seat in the main driving area can also be understood as detecting that the user is present in the main driving area.
  • collecting images through a camera in the cockpit includes: collecting images of the user through a camera in the cockpit, or collecting images of the main driving area through a camera in the cockpit.
  • that the vehicle, when detecting that the user sits on the seat in the main driving area, collects images through the camera in the cockpit may include: collecting images through the camera in the cockpit when it is detected that the user sits on the seat in the main driving area, the vehicle is currently in park (P) gear, and the HUD function is activated.
  • the vehicle when detecting the user's operation of depressing the brake pedal, the vehicle can adjust the HUD according to the height of the virtual image presented by the HUD corresponding to the eye position. At the same time, the vehicle can display the adjustment frame 301 through the HUD, where the adjustment frame 301 includes the prompt message "Automatic height matching".
  • adjusting the HUD may include adjusting the first reflective component 230 and/or the second reflective component 240 .
  • the reflection angle of the first reflective component 230 and/or the second reflective component 240 can be adjusted to adjust the height of the virtual image presented by the HUD.
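  • The relation between mirror rotation and virtual-image height can be sketched as follows, under an assumed small-angle linear model; the gain constant is illustrative and is not specified by this application.

```python
# Illustrative only: treat the second reflective component as a mirror whose
# rotation shifts the virtual image vertically; assume a linear gain.
MM_PER_DEG = 18.0  # assumed: virtual-image shift per degree of mirror rotation

def mirror_delta_deg(current_image_y_mm: float, target_image_y_mm: float) -> float:
    """Mirror rotation needed to move the virtual image to the target height."""
    return (target_image_y_mm - current_image_y_mm) / MM_PER_DEG
```

  • In a real optical system the mapping is nonlinear and depends on the freeform mirror and windshield geometry, so a calibrated lookup would normally replace the constant gain.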
  • the adjustment of the HUD is combined with the user's habitual operations: when it is detected that the user depresses the brake pedal, the HUD is adjusted according to the user's eye position, so that the adjustment of the HUD can be completed without the user being aware of it. In this way, the user's learning cost in the process of adjusting the HUD is reduced, and tedious operations during the adjustment are avoided, which helps to improve the user experience.
  • that the vehicle can determine the height of the virtual image presented by the HUD based on the eye position in the face information includes: determining the height of the virtual image presented by the HUD based on the eye position in the images collected within a first preset time period, where the first preset time period is a preset time period before the user's operation of depressing the brake pedal is detected.
  • the end time of the first preset time period is the time when the user's operation of depressing the brake pedal is detected.
  • the end time of the first preset duration is the time when it is detected that the opening of the brake pedal is greater than or equal to the preset opening threshold.
  • for example, the first preset time length is 500 milliseconds (ms).
  • the HUD can be adjusted based on the eye position determined by images taken during a period of time before the user's operation of depressing the brake pedal is detected. In this way, the accuracy of the user's eye position can be improved, thereby helping to improve the accuracy of HUD adjustment, making the height of the virtual image presented by the HUD more in line with the user's habits.
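  • The "first preset time period" selection above can be sketched as a rolling buffer of timestamped eye-position samples from which only the samples in the window ending at the brake event are used. Timestamps, values, and the buffer API are illustrative assumptions.

```python
from collections import deque

WINDOW_MS = 500  # the first preset time length from the text

class EyeSampleBuffer:
    """Rolling buffer of (timestamp_ms, eye_y_mm) samples."""

    def __init__(self):
        self._samples = deque()

    def add(self, ts_ms: int, eye_y_mm: float) -> None:
        self._samples.append((ts_ms, eye_y_mm))

    def window_before(self, event_ts_ms: int):
        """Samples within WINDOW_MS before (and up to) the brake event."""
        start = event_ts_ms - WINDOW_MS
        return [y for ts, y in self._samples if start <= ts <= event_ts_ms]

buf = EyeSampleBuffer()
for ts, y in [(100, 1195.0), (400, 1201.0), (700, 1203.0), (900, 1202.0)]:
    buf.add(ts, y)
# Brake detected at t = 1000 ms -> only samples from 500..1000 ms are used.
recent = buf.window_before(1000)
```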
  • the vehicle may also adjust the HUD according to the height of the virtual image presented by the HUD corresponding to the eye position when detecting the user's operation of adjusting the gear from P to other gears.
  • detecting the user's operation of adjusting the gear position from P gear to other gears includes: detecting the user's operation of adjusting the gear position from P gear to forward gear (D gear).
  • the vehicle can also adjust the HUD according to the height of the virtual image presented by the HUD corresponding to the eye position when it detects other operations of the user (for example, a lever operation, a click-to-confirm operation on the vehicle display screen, or a voice command).
  • the prompt message "Height Already Matched" may be displayed in the adjustment box 301.
  • the vehicle can display navigation information at the height through the HUD.
  • for example, the current vehicle speed is 0 kilometers per hour (km/h); drive forward 800 meters and turn right.
  • the vehicle when detecting the user's operation of adjusting the gear from P to other gears, the vehicle can stop acquiring images through the camera in the cockpit, and then stop determining the height of the virtual image presented by the HUD based on the user's eye position. In this way, the waste of resources caused by constantly adjusting the HUD based on the user's eye position can be avoided.
  • the vehicle when detecting the user's operation of depressing the brake pedal, the vehicle can adjust the HUD according to the height of the virtual image presented by the HUD corresponding to the eye position.
  • the adjustment of the HUD can be stopped.
  • the vehicle can automatically adjust the height of the virtual image presented by the HUD, without the user being aware of it, to a height that is consistent with the user's habits (or within the user's best field of view). In this way, the user does not need to manually adjust the height of the virtual image presented by the HUD, which helps improve the automation of the vehicle and improves the user experience.
  • FIG. 4 shows a set of graphical user interfaces GUI provided by the embodiment of the present application.
  • the vehicle when detecting the user's operation of depressing the brake pedal, the vehicle can control the HUD to display the image to be displayed (for example, navigation information, instrument information, etc.) in the adjustment frame 301 .
  • the adjustment box 301 can be automatically hidden.
  • the vehicle when the user's operation of pressing the brake pedal is detected, the vehicle can adjust the HUD according to the height of the virtual image presented by the HUD. At the same time, the vehicle can display the adjustment frame 301 through the HUD, where the adjustment frame 301 includes navigation information.
  • the vehicle can control the HUD to hide the adjustment frame 301 .
  • the adjustment frame 301 disappears, the user can determine that the vehicle has completed adjusting the height of the virtual image presented by the HUD.
  • FIG. 5 shows a set of graphical user interfaces GUI provided by the embodiment of the present application.
  • the vehicle may not collect images through the camera in the cockpit.
  • when detecting the user's operation to adjust the seat (for example, adjusting the seat forward), the vehicle can determine the user's eye position in the main driving area through the images collected by the camera in the cockpit. While the user adjusts the seat, the vehicle can also adjust the height of the virtual image presented by the HUD in real time based on changes in the eye position. At the same time, the vehicle can also control the HUD to display a prompt box 301, which includes the prompt message "Automatically matching height".
  • that the vehicle, when detecting the user's operation to adjust the seat, collects images through the camera in the cockpit may include: collecting images through the camera in the cockpit when the user's operation of adjusting the seat in the main driving area is detected, the vehicle is currently in park (P) gear, and the HUD function is started.
  • the eye position is determined to be at position 1 through the image collected by the camera, and the height of the virtual image presented by the HUD matching position 1 is height 1.
  • the vehicle can adjust the HUD according to height 1.
  • the user's operation of adjusting the seat includes adjusting the front and back position of the seat, adjusting the height of the seat, or adjusting the angle of the seat back, etc.
  • the vehicle can continue to determine the user's eye position in the main driving area through the images collected by the camera. At the same time, the vehicle can also control the HUD to display a prompt box 301, which includes the prompt message "Automatically matching height".
  • the image collected by the camera determines that the eye position is at position 2, and the height of the virtual image presented by the HUD matching position 2 is height 2.
  • the vehicle can adjust the HUD according to height 2.
  • the vehicle when it is detected that the user stops adjusting the seat, the vehicle can stop adjusting the HUD. At this time, the adjustment box can be displayed through the HUD and the prompt message "Height Already Matched" is displayed in the adjustment box. In this way, the user can know that the vehicle has completed the adjustment of the HUD.
  • the vehicle may continue to adjust the HUD according to changes in the user's eye position. If the user's operation to adjust the seat is not detected again within the preset time period after the user stops adjusting the seat, then at the end of the preset time period, the vehicle can stop adjusting the HUD.
  • for example, the preset time length is 1 second (s).
  • the buttons for adjusting the front and rear position and height of the seat are different, there may be a certain time interval between the user adjusting the front and rear position and height of the seat.
  • the HUD can continue to be adjusted within the preset time period after it is detected that the user has stopped adjusting the seat. In this way, when there is an interval between adjusting the front-and-back position of the seat and adjusting the height of the seat, the HUD can be adjusted without interruption, so that the user experiences a single uninterrupted HUD adjustment process across the two seat adjustments, which helps improve the user experience.
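  • The "keep adjusting for a preset time after the user stops" rule can be sketched as a hold timer: adjustment remains active as long as a new seat-adjustment event arrives within the preset duration after the last one. The timestamps below are illustrative assumptions.

```python
HOLD_MS = 1000  # the preset time length from the text (1 s)

def adjustment_active(last_adjust_ts_ms: int, now_ms: int) -> bool:
    """HUD adjustment stays active within HOLD_MS of the last seat event."""
    return (now_ms - last_adjust_ts_ms) <= HOLD_MS
```

  • With this rule, a pause of 600 ms between moving the seat forward and raising it keeps the adjustment running, while a pause longer than 1 s ends it.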
  • the above vehicle stops adjusting the HUD can also be understood as the vehicle stops acquiring images in the cockpit through the camera, or the vehicle stops determining the user's eye position in the main driving area.
  • the vehicle can display navigation information through the HUD.
  • for example, the current vehicle speed is 0 kilometers per hour (km/h); go forward 800 meters and turn right.
  • the vehicle when the user adjusts the seat, the vehicle can adjust the height of the virtual image presented by the HUD in real time according to changes in the user's eye position.
  • when the user feels that the height of the virtual image presented by the HUD already suits the user's habits, the user can stop adjusting the seat, so that the vehicle stops adjusting the HUD.
  • the process of adjusting the seat is combined with the process of adjusting the HUD.
  • the user can decide the height of the virtual image presented by the HUD, which helps to improve the user experience.
  • the above figure 5 illustrates the real-time adjustment of the HUD according to changes in eye position when the user adjusts the seat.
  • the vehicle can adjust the HUD in real time based on changes in the user's eye position.
  • the vehicle can control the HUD to display countdown information (for example, a 5-second countdown). During the countdown process, the user can adjust his or her sitting posture and/or the sitting posture of the seat.
  • the vehicle can adjust the height of the virtual image presented by the HUD to a height that matches the user's eye position based on the user's eye position at the end of the countdown.
  • the user can also control the vehicle's driving normally by applying the brakes, shifting gears, and other operations. In this way, the adjustment of the HUD is combined with the user's habitual operation, which reduces the user's learning cost in the process of adjusting the HUD, avoids the user's tedious operations in the process of adjusting the HUD, and helps improve the user experience.
  • for example, when it is detected that the user is sitting on the seat, the HUD can be adjusted in real time based on the user's eye position; when the user's preset gesture (for example, an OK gesture) is detected, the adjustment of the HUD is stopped.
  • the vehicle when it is detected that the user stops adjusting the seat, the vehicle can stop adjusting the HUD.
  • the embodiments of the present application are not limited to this. For example, considering that the user's sitting posture when stopping adjusting the seat is not in line with the user's habits, time can also be reserved for the user to adjust the sitting posture. As the user adjusts their sitting posture, the vehicle can continue to adjust the HUD in real time.
  • FIG. 6 shows a set of graphical user interfaces GUI provided by the embodiment of the present application.
  • the vehicle when it is detected that the user stops adjusting the seat, the vehicle can display a prompt box 301, the prompt message "Stop adjusting after the countdown is over" and the countdown information through the HUD.
  • the vehicle may reserve 3 seconds for the user to adjust the sitting posture. During these 3 seconds, the vehicle can also determine the change in the user's eye position based on the images collected by the camera, thereby adjusting the height of the virtual image presented by the HUD in real time based on the change in eye position.
  • the vehicle can stop adjusting the HUD.
  • the vehicle can display the prompt box 301 and the prompt message "height has been matched" through the HUD.
  • a period of time can be reserved for the user to continue adjusting the user's sitting posture.
  • the vehicle can also adjust the HUD in real time according to changes in the user's eye position.
  • the vehicle can stop adjusting the HUD. In this way, when the posture of the seat and body posture are in line with the user's habits, the adjustment of the HUD is completed, which helps to improve the user's experience.
  • the GUI shown in FIG. 6 above uses the countdown end time as the time to stop adjusting the HUD.
  • the embodiment of the present application is not limited to this.
  • the adjustment of the HUD may be stopped.
  • the length of the countdown may be set when the vehicle leaves the factory, or may be set by the user.
  • Figure 7 shows a schematic flow chart of the adjustment method 700 provided by the embodiment of the present application. As shown in Figure 7, the method includes:
  • the computing platform obtains the data collected by the seat sensor in the first area.
  • the first area may be a main driver area or a passenger area.
  • the data collected by the seat sensor is used to indicate whether the user sits on the seat, or the data collected by the seat sensor is used to determine whether there is a user in the first area.
  • the seat sensor may be a seat sensor in the main driving area or a seat sensor in the passenger area.
  • the seat sensor may be a pressure sensor under the seat or a sensor that detects the status of the seat belt.
  • the computing platform can determine whether the user is sitting on the seat in the main driving area based on changes in the pressure value detected by the pressure sensor under the seat in the main driving area.
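  • The occupancy decision from the pressure sensor can be sketched as a threshold crossing: a transition of the seat pressure from below to above a threshold indicates that a user has just sat down. The threshold value is an assumption for illustration.

```python
PRESSURE_THRESHOLD_N = 200.0  # illustrative occupancy threshold (newtons)

def occupancy_events(pressure_readings):
    """Return one True per transition of the seat from empty to occupied."""
    occupied = False
    events = []
    for p in pressure_readings:
        now_occupied = p >= PRESSURE_THRESHOLD_N
        if now_occupied and not occupied:
            events.append(True)  # user just sat down
        occupied = now_occupied
    return events
```

  • A production implementation would also debounce the signal and may combine it with the seat-belt sensor mentioned above.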
  • the computing platform obtains the data collected by the gear sensor.
  • the data collected by the gear position sensor is used to indicate the current gear position of the vehicle, for example, the current vehicle is in P gear.
  • the computing platform obtains information about whether the HUD is turned on.
  • the HUD can send information that the HUD has been started to the computing platform.
  • the computing platform can obtain images collected by the camera in the cockpit.
  • the computing platform can control the DMS in the cockpit to collect images.
  • the computing platform can control the DMS to collect images of the main driving area, or control the DMS to collect images of the user.
  • S705 The computing platform obtains the face information in the image.
  • the computing platform can obtain the face information in the image according to the image segmentation algorithm.
  • S706 The computing platform filters interference data based on the face information.
  • there may be some interference data in the images collected by the camera, for example, images collected when the user lowers his head to look at the brake pedal or change shoes, looks at the left or right rearview mirror, or looks up at the rearview mirror inside the car. For HUD adjustment, such data is interference data and can be filtered out.
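  • One way to realize this filtering step, under assumed head-pose thresholds, is to drop frames whose head pitch or yaw suggests the user is looking down (at the pedals or shoes) or sideways (at a mirror). The thresholds and frame format below are illustrative assumptions, not specified by this application.

```python
MAX_ABS_PITCH_DEG = 20.0  # assumed: looking down/up beyond this -> interference
MAX_ABS_YAW_DEG = 25.0    # assumed: looking left/right beyond this -> interference

def filter_interference(frames):
    """frames: list of dicts with 'pitch', 'yaw' (deg) and 'eye_y' (mm)."""
    return [
        f for f in frames
        if abs(f["pitch"]) <= MAX_ABS_PITCH_DEG
        and abs(f["yaw"]) <= MAX_ABS_YAW_DEG
    ]

frames = [
    {"pitch": -35.0, "yaw": 0.0, "eye_y": 1150.0},  # head lowered -> dropped
    {"pitch": 2.0, "yaw": -40.0, "eye_y": 1200.0},  # looking at side mirror -> dropped
    {"pitch": 1.0, "yaw": 3.0, "eye_y": 1202.0},    # valid
]
valid = filter_interference(frames)
```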
  • the computing platform determines the user's eye position information based on the face information after filtering the interference data.
  • the computing platform can determine the user's eye position information based on valid data (or data other than interference data in the face information) obtained within a preset time period (for example, 3 seconds).
  • the eye position information can be converted into the same coordinate system as the HUD through the transformation matrix, so that the HUD gear information corresponding to the eye position information can be determined.
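  • The coordinate conversion above can be sketched as applying a homogeneous 4x4 transform to the eye position expressed in the camera frame. The extrinsic matrix values below are illustrative assumptions.

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p (plain lists)."""
    x, y, z = p
    v = (x, y, z, 1.0)
    return [sum(T[r][c] * v[c] for c in range(4)) for r in range(3)]

# Assumed camera->HUD extrinsics: identity rotation plus a translation (mm).
T_CAM_TO_HUD = [
    [1.0, 0.0, 0.0, -350.0],
    [0.0, 1.0, 0.0, 120.0],
    [0.0, 0.0, 1.0, -900.0],
    [0.0, 0.0, 0.0, 1.0],
]

eye_cam = [10.0, 1200.0, 600.0]  # eye position in the camera frame (mm)
eye_hud = transform_point(T_CAM_TO_HUD, eye_cam)
```

  • In practice the rotation part would come from a calibration of the camera and HUD mounting positions rather than being the identity.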
  • the eye position information includes the eye position, the position of the area where the eyes are located, or the line of sight direction.
  • the gear position information of the HUD may be the height of the virtual image presented by the HUD.
  • the computing platform determines the HUD gear information based on the eye position information.
  • the computing platform may determine the gear position information of the HUD based on the eye position information and the mapping relationship between the eye position information and the gear position information of the HUD.
  • for example, when the eye position information is the eye position, the computing platform may determine the gear position information of the HUD based on the eye position and the mapping relationship between the eye position and the gear position information of the HUD.
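  • The mapping relationship between eye position and HUD gear information can be sketched as a lookup of eye-height bands; the band boundaries and gear numbers below are assumptions for illustration.

```python
GEAR_BANDS = [  # (min_eye_height_mm, max_eye_height_mm, gear) - assumed values
    (0.0, 1150.0, 1),
    (1150.0, 1250.0, 2),
    (1250.0, 9999.0, 3),
]

def hud_gear(eye_height_mm: float) -> int:
    """Discretize an eye height into a HUD gear via the band table."""
    for lo, hi, gear in GEAR_BANDS:
        if lo <= eye_height_mm < hi:
            return gear
    raise ValueError("eye height out of range")
```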
  • the computing platform obtains the data collected by the brake pedal sensor.
  • the data collected by the brake pedal sensor is used to indicate the opening of the brake pedal.
  • the computing platform can adjust the HUD based on the HUD's gear information.
  • for example, the time period from detecting that the user sits on the seat to detecting that the opening of the brake pedal is greater than or equal to the preset threshold is Δt.
  • within Δt, the computing platform can determine the user's eye position information based on the valid data in the face information. For example, within Δt, N pieces of eye position information are obtained through the valid data in the face information; after performing a weighted average of the N pieces of eye position information, the final eye position information of the user can be obtained. The computing platform can then adjust the HUD based on the finally obtained eye position information.
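  • The weighted average over the N samples can be sketched as below. The text only states that the N pieces of eye position information are weighted-averaged; the choice of weights (here, weighting later samples more heavily) is an assumption.

```python
def weighted_eye_position(samples):
    """samples: list of (weight, eye_y_mm); returns the weighted mean."""
    total_w = sum(w for w, _ in samples)
    return sum(w * y for w, y in samples) / total_w

# E.g. three samples within delta-t, later samples weighted more heavily.
samples = [(1.0, 1190.0), (2.0, 1200.0), (3.0, 1202.0)]
eye_y = weighted_eye_position(samples)
```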
  • the computing platform can obtain the face information within a preset time length (for example, 500 ms) before time t1, and determine the user's final eye position information based on the face information within that preset time period. For example, if the face information within 500 ms before time t1 does not include interference data, the computing platform can determine the user's eye position information based on the face information within those 500 ms.
  • the end time of the preset duration is time t1; or, the end time of the preset duration is time t2, where time t2 is a time before time t1.
  • the computing platform can also control the HUD to display prompt information, and the prompt information is used to prompt the user that the HUD is being adjusted.
  • the prompt information can be automatically hidden.
  • the HUD can be controlled to display the image to be displayed, for example, navigation information or instrument information.
  • the vehicle can automatically adjust the height of the image presented by the HUD to a height that conforms to the user's habits without the user being aware of it. In this way, the user does not need to manually adjust the height of the image presented by the HUD, which helps improve the automation of the vehicle and improves the user experience.
  • FIG 8 shows a schematic flow chart of the adjustment method 800 provided by the embodiment of the present application. As shown in Figure 8, the method includes:
  • the computing platform obtains changes in seat status.
  • the change in the seat state is used to indicate a change in seat height, a change in the seat back angle, or a change in the front and rear position of the seat.
  • the computing platform obtains the data collected by the gear sensor.
  • the computing platform obtains information about whether the HUD is turned on.
  • detecting a change in the seat state includes detecting one or more changes in the height of the seat, the angle of the seat back, or the front-to-back position of the seat.
  • S805 The computing platform obtains the face information in the image.
  • S806 The computing platform filters interference data based on the face information.
  • the computing platform determines the user's eye position information based on the face information after filtering the interference data.
  • the computing platform determines the HUD gear information based on the eye position information.
  • the computing platform adjusts the HUD based on the HUD's gear information.
  • the computing platform can repeatedly execute the above S804-S809, so that the HUD can be adjusted in real time according to changes in eye position information in different seat states.
  • the computing platform can periodically acquire images collected by the camera.
  • the computing platform can determine the user's eye position as position 1 based on the face information in the image acquired during the cycle, so that the computing platform can adjust the height of the virtual image presented by the HUD to Height 1, where there is a corresponding relationship between position 1 and height 1.
  • the computing platform can determine the user's eye position as position 2 based on the face information in the image obtained during the cycle, so that the computing platform can adjust the height of the virtual image presented by the HUD based on position 2. to height 2, where there is a corresponding relationship between position 2 and height 2.
  • the computing platform can adjust the HUD in real time in different cycles.
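  • The per-cycle behavior of S804-S809 can be sketched as a loop that, in each cycle, takes the latest eye position and applies the matching HUD height. The functions and the linear position-to-height mapping are illustrative stand-ins, not a real vehicle API.

```python
def run_cycles(eye_positions, position_to_height):
    """Return the sequence of HUD heights applied, one per cycle."""
    applied = []
    for pos in eye_positions:             # one eye position per cycle
        height = position_to_height(pos)  # mapping from position to height
        applied.append(height)            # stands in for "adjust the HUD"
    return applied

# Assumed linear mapping: HUD height tracks eye height with a fixed offset.
heights = run_cycles([1200.0, 1210.0, 1220.0], lambda p: p - 330.0)
```

  • As the seat moves and the eye position rises across cycles, the applied height rises with it, which is the real-time tracking described above.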
  • the computing platform can also control the HUD to display prompt information, and the prompt information is used to prompt the user that the HUD is being adjusted.
  • the computing platform when detecting that the user stops adjusting the seat, the computing platform can stop adjusting the HUD.
  • the computing platform can continue to adjust the HUD in real time based on changes in eye position information. At the end of the preset time period, the computing platform can stop adjusting the HUD.
  • FIG. 9 shows a schematic flow chart of the adjustment method 900 provided by the embodiment of the present application.
  • the method 900 may be executed by a vehicle, or the method 900 may be executed by the above-mentioned computing platform, or the method 900 may be executed by a system composed of a computing platform and a head-up display device, or the method 900 may be executed by a system-on-a-chip (SoC) in the above computing platform, or the method 900 may be executed by a processor in the computing platform.
  • the method 900 can be applied to the cockpit of a vehicle.
  • the cockpit includes a first area.
  • a head-up display device is provided in the cockpit.
  • the head-up display device is used to display information to users in the first area.
  • the method 900 includes:
  • obtaining the image collected by the camera in the cockpit includes: when a user is detected to be present in the first area, the vehicle is in park gear and the head-up display device is turned on, obtaining The image was captured by a camera in the cockpit.
  • the first area may be a main driving area in the vehicle. This image can be acquired when it is detected that the user is in the main driving area, the vehicle is in park and the head-up display is on.
  • obtaining the image collected by the camera in the cockpit includes: obtaining an image of the user, or obtaining an image including the entire first area, or obtaining an image including part of the first area.
  • detecting that a user exists in the first area includes: detecting a change from no user being present in the first area to a user being present.
  • acquiring images collected by the camera in the cockpit includes: when it is detected that the user in the first area adjusts the seat, the vehicle is in park gear and the head-up display device When turned on, the image collected by the camera in the cockpit is acquired.
  • the first area may be a main driving area in the vehicle. This image can be obtained when the user's operation of adjusting the seat in the main driving area is detected, the vehicle is in park gear and the head-up display device is turned on.
  • the camera is a DMS or CMS camera.
  • S920 Determine the user's eye position information based on the image.
  • the user's eye position information can be obtained from the image.
  • the eye position information can be converted into the same coordinate system as the HUD through the transformation matrix, so that the height of the virtual image presented by the HUD corresponding to the eye position information can be determined.
  • the eye position information may be one or more eye position information.
  • the eye position information can be eye position information determined based on images acquired at a certain time, or it can also be eye position information determined based on images acquired within a certain time period, or it can also be It is multiple eye position information determined based on images acquired within a certain period of time.
  • when the eye position information is one piece of eye position information, the head-up display device can be adjusted according to that eye position information; or, when the eye position information is multiple pieces of eye position information, the head-up display device can be adjusted according to the plurality of pieces of eye position information.
  • the above description uses collecting images through the camera in the cockpit and determining the eye position information based on the images as an example.
  • the embodiments of the present application are not limited to this.
  • the user's eye position information can also be determined from data collected by other sensors in the cockpit.
  • the user's eye position information can be determined through the point cloud data collected by the lidar in the cockpit; and for example, the user's eye position information can be determined through the point cloud data collected by the millimeter wave radar in the cockpit.
  • adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: when detecting the user's operation of stepping on the brake pedal, adjusting the head-up display device according to the eye position information.
  • the vehicle when detecting the user's operation of depressing the brake pedal, the vehicle can adjust the HUD according to the user's eye position.
  • adjusting the head-up display device according to the eye position information includes: when detecting the user's operation of adjusting the gear from the parking gear to another gear, adjusting the head-up display device according to the eye position information. Optionally, when detecting the user's operation of adjusting the gear, the head-up display device is adjusted based on the eye position information.
  • determining the user's eye position information based on the image includes: determining the eye position information based on the image acquired within a first preset time period, wherein the end time of the first preset time period is The moment this first operation is detected.
  • determining the user's eye position information based on the image includes: determining the eye position information based on an image acquired when the user's operation of depressing the brake pedal is detected.
  • the method 900 further includes: when detecting the user's first operation, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is adjusting.
  • for example, when detecting the user's operation of depressing the brake pedal, the head-up display device can be controlled to display the adjustment box 301, where the adjustment box 301 displays the message "Automatically matching height". After seeing the adjustment box 301 and the message in it, the user can learn that the vehicle is currently automatically adjusting the HUD.
  • the head-up display device can be controlled to display the adjustment box 301 , where the navigation information is displayed in the adjustment box 301 .
  • the user can learn that the vehicle is currently automatically adjusting the HUD.
  • the method 900 further includes: stopping acquiring the image when the adjustment of the head-up display device is completed.
  • the above stopping of image acquisition can also be understood as stopping acquiring the user's image, or as stopping acquiring the image of the first area.
  • acquiring the image collected by the camera in the cockpit includes: acquiring the image collected by the camera when the presence of the user in the first area is detected and the user's operation of adjusting the seat in the first area is detected; adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: while the user's adjustment of the seat is detected, adjusting the head-up display device according to changes in the eye position information.
  • when detecting that the user adjusts the position of the seat forward, the vehicle can adjust the HUD in real time based on changes in eye position.
  • the above real-time adjustment of the HUD can also be understood as adjusting the HUD based on the eye position information determined at different moments while the seat is being adjusted, or as adjusting the HUD based on changes in the eye position information during the seat adjustment.
  • the eye position information is, for example, the eye position.
  • the user's eye position is determined to be position 1 based on the face information in the acquired image, so that the height of the virtual image presented by the HUD can be adjusted to height 1 according to position 1, where there is a correspondence between position 1 and height 1.
  • the user's eye position is determined to be position 2 based on the face information in the acquired image, so that the height of the virtual image presented by the HUD can be adjusted to height 2 according to position 2, where there is a correspondence between position 2 and height 2.
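The correspondence between eye positions and virtual-image heights in the two examples above can be sketched as a calibration lookup with linear interpolation. The table values and the interpolation scheme are illustrative assumptions, not from the disclosure.

```python
# Hypothetical calibration table pairing eye heights (metres) with the
# HUD virtual-image height that suits users at that eye height.
# The numeric values are illustrative, not from the patent.
CALIBRATION = [
    (1.10, 0.30),   # (eye height, virtual-image height)
    (1.25, 0.45),
    (1.40, 0.60),
]


def virtual_image_height(eye_height):
    """Map an eye position to a virtual-image height by linear
    interpolation over the calibration table, clamped at the ends."""
    pts = sorted(CALIBRATION)
    if eye_height <= pts[0][0]:
        return pts[0][1]
    if eye_height >= pts[-1][0]:
        return pts[-1][1]
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        if x0 <= eye_height <= x1:
            frac = (eye_height - x0) / (x1 - x0)
            return y0 + frac * (y1 - y0)
```

A denser table, or a mapping derived from the optical geometry of the particular HUD, could replace the interpolation without changing the calling code.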
  • the method 900 further includes: when detecting that the user is adjusting the seat, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is adjusting.
  • the HUD can be controlled to display the adjustment box 301, where the adjustment box 301 includes the message "Automatically matching height". After seeing the adjustment box 301, the user can learn that the vehicle is currently automatically adjusting the HUD.
  • the method 900 further includes: stopping the adjustment of the head-up display device when it is detected that the user stops adjusting the seat.
  • the above stopping of the adjustment of the head-up display device can also be understood as stopping acquiring images, or as stopping acquiring images of the user, or as stopping acquiring images of the first area.
  • the method 900 further includes: adjusting the head-up display device according to changes in the eye position information within a second preset time period after detecting that the user stops adjusting the seat.
  • within a period (for example, 3 s) after the user stops adjusting the seat, the vehicle can still adjust the head-up display device according to changes in the user's eye position. During this 3 s period, the user can also adjust his or her sitting posture so that the final height of the virtual image presented by the HUD matches the user's habits.
  • the method 900 further includes: stopping the adjustment of the head-up display device at the end of the second preset time period.
  • the adjustment of the HUD may be stopped.
  • the vehicle can take the user's eye position at the end of the 3-second countdown as the final eye position and adjust the HUD accordingly. After the adjustment is completed, the vehicle can stop adjusting the HUD, or stop collecting images.
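The second-preset-period behaviour described above (keep following the eyes for, e.g., 3 s after the seat stops moving, then freeze on the eye position at the end of the countdown) can be sketched as follows; the class, the timing values, and the freezing rule are illustrative assumptions.

```python
class PostAdjustmentWindow:
    """After the user stops adjusting the seat, keep tracking the eyes for
    a second preset period (e.g. 3 s); at the end of that period the last
    eye position becomes final and HUD adjustment stops."""

    def __init__(self, grace_s=3.0):
        self.grace_s = grace_s
        self.stop_time = None      # when the seat stopped moving
        self.final_height = None   # set once adjustment is frozen

    def seat_adjustment_stopped(self, now):
        self.stop_time = now

    def on_eye_sample(self, now, eye_height):
        """Returns True while the HUD should still follow the eyes."""
        if self.final_height is not None:
            return False                       # adjustment already frozen
        if self.stop_time is not None and now - self.stop_time >= self.grace_s:
            self.final_height = eye_height     # freeze at end of countdown
            return False
        return True
```

The caller would feed every eye sample through `on_eye_sample` and stop commanding the HUD mechanism once it returns `False`.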
  • determining the user's eye position information based on the image includes: filtering interference data in the image; determining the eye position information based on the image after filtering the interference data.
  • the interference data includes, but is not limited to, images collected when the user looks down at the brake pedal, changes shoes, looks at the left or right exterior rearview mirror, or looks up at the interior rearview mirror.
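A minimal sketch of the interference filtering above, assuming head-pose angles (pitch and yaw) are available for each frame; the thresholds and the frame format are illustrative assumptions, not from the disclosure.

```python
def is_interference(frame):
    """Crude per-frame filter: frames where the head pose suggests the
    user is looking down (at the pedals or shoes) or far to the side
    (exterior mirrors) are treated as interference and excluded from
    eye-position estimation. The angle thresholds are assumptions."""
    PITCH_DOWN_LIMIT = -20.0   # degrees; more negative = looking down
    YAW_SIDE_LIMIT = 35.0      # degrees; larger magnitude = looking sideways
    return (frame["pitch"] < PITCH_DOWN_LIMIT
            or abs(frame["yaw"]) > YAW_SIDE_LIMIT)


def filter_frames(frames):
    """Keep only frames usable for eye-position estimation."""
    return [f for f in frames if not is_interference(f)]
```

In practice the filter could also use gaze direction or face-visibility scores; the principle is the same: estimate eye position only from frames where the user is looking roughly ahead.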
  • adjusting the head-up display device according to the eye position information includes: determining, according to the eye position information, a height at which the head-up display device presents a virtual image; and adjusting the head-up display device according to the height.
  • the vehicle is, for example, an automobile
  • the first area includes a main driving area or a passenger driving area.
  • Embodiments of the present application also provide a device for implementing any of the above methods.
  • a device is provided that includes units (or means) for the steps of any of the above methods, implemented by a vehicle (for example, an automobile), a computing platform in the vehicle, an SoC in the computing platform, or a processor in the computing platform.
  • FIG. 10 shows a schematic block diagram of an adjustment device 1000 provided by an embodiment of the present application.
  • the device 1000 is used to adjust a head-up display device in a vehicle cabin, which includes a first area, and the head-up display device is used to display information to a user in the first area.
  • the device 1000 includes: a detection unit 1010, configured to detect whether a user exists in the first area; an acquisition unit 1020, configured to acquire the image collected by the camera in the cockpit when the detection unit 1010 detects that a user exists in the first area; a determination unit 1030, configured to determine the user's eye position information according to the image; and an adjustment unit 1040, configured to adjust the head-up display device according to the eye position information when the detection unit 1010 detects the user's first operation.
  • the adjustment unit 1040 is configured to adjust the head-up display device according to the eye position information when the detection unit 1010 detects the user's operation of pressing the brake pedal.
  • the adjustment unit 1040 is configured to: when the detection unit 1010 detects the user's operation of shifting from the parking gear to another gear, adjust the head-up display device according to the eye position information.
  • the determining unit 1030 is configured to: determine the eye position information based on the image acquired by the acquisition unit 1020 within a first preset time period, wherein the end moment of the first preset time period is the moment at which the first operation is detected.
  • the device 1000 further includes: a control unit, configured to control the head-up display device to display prompt information when the detection unit 1010 detects the user's first operation, the prompt information being used to prompt the user that the head-up display device is adjusting.
  • the acquisition unit 1020 is also configured to stop acquiring the image when the adjustment of the head-up display device is completed.
  • the acquisition unit 1020 is configured to: acquire the image collected by the camera when the detection unit 1010 detects the presence of the user in the first area and detects the user's operation of adjusting the seat in the first area.
  • the adjustment unit 1040 is configured to adjust the head-up display device according to changes in the eye position information when the detection unit 1010 detects that the user adjusts the seat.
  • the device 1000 further includes: a control unit, configured to control the head-up display device to display prompt information while the detection unit 1010 detects that the user adjusts the seat, the prompt information being used to prompt the user that the head-up display device is adjusting.
  • the adjustment unit 1040 is also configured to stop adjusting the head-up display device when the detection unit 1010 detects that the user stops adjusting the seat.
  • the adjustment unit 1040 is also configured to: within a second preset time period after the detection unit 1010 detects that the user stops adjusting the seat, adjust the head-up display device according to changes in the eye position information.
  • the adjustment unit 1040 is also configured to stop adjusting the head-up display device at the end of the second preset time period.
  • the determining unit 1030 is configured to: filter interference data in the image; and determine the eye position information based on the image after filtering the interference data.
  • the determining unit 1030 is further configured to: determine the height at which the head-up display device presents a virtual image based on the eye position information; and the adjustment unit 1040 is used to adjust the head-up display device based on the height.
  • the vehicle is, for example, an automobile
  • the first area includes a main driving area or a passenger driving area.
  • the detection unit 1010 may be the computing platform in FIG. 1 or a processing circuit, processor, or controller in the computing platform. Taking the detection unit 1010 as the processor 151 in the computing platform as an example, the processor 151 can obtain the pressure value collected by the pressure sensor under the seat in the first area, and thereby determine, based on the change in the pressure value, whether a user is present in the first area.
  • the acquisition unit 1020 may be the computing platform in FIG. 1 or a processing circuit, processor or controller in the computing platform. Taking the acquisition unit 1020 as the processor 152 in the computing platform as an example, the processor 152 can control the startup of the camera in the cockpit and acquire the images collected by the camera when the processor 151 determines that the user exists in the first area.
  • the determining unit 1030 may be the computing platform in FIG. 1 or a processing circuit, processor or controller in the computing platform. Taking the determination unit 1030 as the processor 153 in the computing platform as an example, the processor 153 can obtain the image sent by the processor 152 and determine the user's eye position information according to the image segmentation algorithm.
  • the functions implemented by the obtaining unit 1020 and the determining unit 1030 may be implemented by the same processor.
  • the adjustment unit 1040 may be the computing platform in FIG. 1 or a processing circuit, processor, or controller in the computing platform. Taking the adjustment unit 1040 as the processor 15n in the computing platform as an example, the processor 15n can adjust the first reflective component 230 and/or the second reflective component 240 shown in FIG. 2 based on the eye position information determined by the processor 153, thereby adjusting the height of the virtual image presented by the HUD.
  • the functions implemented by the detection unit 1010, the acquisition unit 1020, the determination unit 1030, and the adjustment unit 1040 may be implemented by different processors, or some of the functions may be implemented by the same processor, or all of the functions may be implemented by the same processor, which is not limited in the embodiments of the present application.
  • each unit in the above device is only a division of logical functions. In actual implementation, it can be fully or partially integrated into a physical entity, or it can also be physically separated.
  • the unit in the device can be implemented in the form of a processor calling software; for example, the device includes a processor, the processor is connected to a memory, instructions are stored in the memory, and the processor calls the instructions stored in the memory to implement any of the above methods. Or realize the functions of each unit of the device, where the processor is, for example, a general-purpose processor, such as a CPU or a microprocessor.
  • the memory is a memory within the device or a memory outside the device.
  • the units in the device can be implemented in the form of hardware circuits, and some or all of the functions of the units can be implemented through the design of the hardware circuits, which can be understood as one or more processors; for example, in one implementation, the hardware circuit is an ASIC, which realizes the functions of some or all of the above units through the design of the logical relationships of the components in the circuit; for another example, in another implementation, the hardware circuit can be implemented through a PLD; taking an FPGA as an example, it can include a large number of logic gate circuits, and the connection relationships between the logic gate circuits can be configured through a configuration file to realize the functions of some or all of the above units. All units of the above device may be fully implemented by the processor calling software, or fully implemented by hardware circuits, or partly implemented by the processor calling software with the remainder implemented by hardware circuits.
  • the processor is a circuit with signal processing capabilities.
  • the processor may be a circuit with instruction reading and execution capabilities, such as a CPU, a microprocessor, a GPU, or DSP, etc.; in another implementation, the processor can realize certain functions through the logical relationship of the hardware circuit. The logical relationship of the hardware circuit is fixed or can be reconstructed.
  • the processor is a hardware circuit implemented by an ASIC or a PLD, for example, an FPGA.
  • the process of the processor loading the configuration file and realizing the hardware circuit configuration can be understood as the process of the processor loading instructions to realize the functions of some or all of the above units.
  • it can also be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as NPU, TPU, DPU, etc.
  • each unit in the above device can be one or more processors (or processing circuits) configured to implement the above methods, such as a CPU, GPU, NPU, TPU, DPU, microprocessor, DSP, ASIC, or FPGA, or a combination of at least two of these processor forms.
  • each unit in the above device may be integrated together in whole or in part, or may be implemented independently. In one implementation, these units are integrated together and implemented as a SOC.
  • the SOC may include at least one processor for implementing any of the above methods or implementing the functions of each unit of the device.
  • the at least one processor may be of different types, such as a CPU and an FPGA, a CPU and an artificial intelligence processor, or a CPU and a GPU.
  • Embodiments of the present application also provide a device, which includes a processing unit and a storage unit, where the storage unit is used to store instructions, and the processing unit executes the instructions stored in the storage unit, so that the device performs the methods or steps performed in the above embodiments.
  • the above-mentioned processing unit may be the processors 151 to 15n shown in FIG. 1.
  • FIG. 11 shows a schematic block diagram of an adjustment system 1100 provided by an embodiment of the present application.
  • the adjustment system 1100 includes a head-up display device and a computing platform, where the computing platform may include the above-mentioned adjustment device 1000.
  • the embodiment of the present application also provides a vehicle, which may include the above-mentioned adjustment device 1000 or adjustment system 1100.
  • the vehicle may be, for example, an automobile.
  • Embodiments of the present application also provide a computer program product.
  • the computer program product includes: computer program code.
  • when the computer program code is run on a computer, it causes the computer to execute the above methods.
  • Embodiments of the present application also provide a computer-readable medium.
  • the computer-readable medium stores program code.
  • when the computer program code is run on a computer, it causes the computer to perform the above methods.
  • each step of the above method can be completed by instructions in the form of hardware integrated logic circuits or software in the processor.
  • the methods disclosed in conjunction with the embodiments of the present application can be directly executed by a hardware processor, or executed by a combination of hardware and software modules in the processor.
  • the software module can be located in a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, registers, or other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory and completes the steps of the above method in combination with its hardware. To avoid repetition, it will not be described in detail here.
  • the memory may include a read-only memory and a random access memory, and provide instructions and data to the processor.
  • the size of the sequence numbers of the above processes does not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present application.
  • the disclosed systems, devices and methods can be used in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • if the functions are implemented in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • the technical solution of the present application in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which can be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of this application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Instrument Panels (AREA)

Abstract

Embodiments of the present application provide an adjustment method, an adjustment apparatus, and a vehicle. The method can be applied to the cockpit of a vehicle, the cockpit including a first area and being provided with a head-up display device for displaying information to a user in the first area. The method includes: when it is detected that a user is present in the first area, acquiring an image collected by a camera in the cockpit; determining the user's eye position information according to the image; and, when a first operation of the user is detected, adjusting the head-up display device according to the eye position information. The embodiments of the present application can be applied to smart vehicles or electric vehicles, and help to improve the intelligence of the vehicle and the user experience.

Description

Adjustment method and apparatus, and vehicle
This application claims priority to Chinese patent application No. 202211078290.2, filed with the China National Intellectual Property Administration on September 5, 2022 and entitled "Adjustment method and apparatus, and vehicle", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of this application relate to the field of smart cockpits, and more specifically, to an adjustment method, an adjustment apparatus, and a vehicle.
Background
As vehicles become more intelligent, more and more vehicles are equipped with a head-up display (HUD). Currently, the user can adjust the height of the virtual image presented by the HUD manually. This makes the adjustment of the HUD cumbersome for the user and results in a poor user experience.
Summary
Embodiments of this application provide an adjustment method, an adjustment apparatus, and a vehicle, which help to improve the intelligence of the vehicle and the user experience.
The vehicle in this application may include a road vehicle, a watercraft, an aircraft, industrial equipment, agricultural equipment, entertainment equipment, and the like. For example, the vehicle may be an automobile in the broad sense, such as a means of transport (for example, a commercial vehicle, a passenger vehicle, a motorcycle, a flying car, or a train), an industrial vehicle (for example, a forklift, a trailer, or a tractor), an engineering vehicle (for example, an excavator, a bulldozer, or a crane), agricultural equipment (for example, a mower or a harvester), amusement equipment, or a toy vehicle; the embodiments of this application do not specifically limit the type of vehicle. As another example, the vehicle may be an aircraft, a ship, or another means of transport.
According to a first aspect, an adjustment method is provided. The method can be applied to a cockpit of a vehicle, the cockpit including a first area and being provided with a head-up display device for displaying information to a user in the first area. The method includes: when it is detected that a user is present in the first area, acquiring an image collected by a camera in the cockpit; determining the user's eye position information according to the image; and, when a first operation of the user is detected, adjusting the head-up display device according to the eye position information.
In the embodiments of this application, when it is detected that a user is present in the first area of the cockpit, an image can be acquired by the camera in the cockpit and the user's eye position information determined from the image. When the user's first operation is detected, the head-up display device can be adjusted according to the eye position information. In this way, the user does not need to adjust the head-up display device manually, which reduces the learning cost of adjusting the head-up display device, avoids cumbersome operations during the adjustment, and helps to improve the intelligence of the vehicle and the user experience.
In some possible implementations, acquiring the image collected by the camera in the cockpit when it is detected that a user is present in the first area includes: acquiring an image of the user collected by the camera in the cockpit when it is detected that a user is present in the first area.
In some possible implementations, the eye position information may be one or more pieces of eye position information.
For example, the eye position information may be a piece of eye position information determined from an image acquired at a certain moment, or a piece of eye position information determined from images acquired within a certain time period, or multiple pieces of eye position information determined from images acquired within a certain time period.
As another example, when the eye position information is a single piece of eye position information, the head-up display device can be adjusted according to that piece of eye position information; when the eye position information is multiple pieces of eye position information, the head-up display device can be adjusted according to the multiple pieces of eye position information.
In some possible implementations, the eye position information may include the user's eye position.
In some possible implementations, taking the vehicle being an automobile as an example, acquiring the image collected by the camera in the cockpit when it is detected that a user is present in the first area includes: acquiring the image collected by the camera in the cockpit when it is detected that a user is present in the first area, the vehicle is in the parking gear, and the head-up display device is on.
In some possible implementations, detecting that a user is present in the first area includes: detecting a transition of the first area from no user being present to a user being present.
With reference to the first aspect, in some implementations of the first aspect, adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: when the user's operation of depressing the brake pedal is detected, adjusting the head-up display device according to the eye position information.
Since depressing the brake pedal before driving is a habitual operation of the user, in the embodiments of this application the adjustment of the head-up display device is combined with the user's habitual operation: when it is detected that the user depresses the brake pedal, the head-up display device is adjusted according to the user's eye position information, so that the adjustment is completed without the user noticing. This reduces the learning cost of adjusting the head-up display device, avoids cumbersome operations during the adjustment, and helps to improve the user experience.
With reference to the first aspect, in some implementations of the first aspect, adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: when the user's operation of shifting from the parking gear to another gear is detected, adjusting the head-up display device according to the eye position information.
Since shifting gears before driving is a habitual operation of the user, in the embodiments of this application the adjustment of the head-up display device is combined with the user's habitual operation: when the user's gear-shifting operation is detected, the head-up display device is adjusted according to the user's eye position information, so that the adjustment is completed without the user noticing. This reduces the learning cost of adjusting the head-up display device, avoids cumbersome operations during the adjustment, and helps to improve the user experience.
In some possible implementations, detecting the user's operation of depressing the brake pedal includes: detecting that the opening of the brake pedal is greater than or equal to a preset threshold.
In some possible implementations, detecting the user's operation of shifting from the parking gear to another gear includes: detecting the user's operation of shifting from the parking gear to the drive gear.
With reference to the first aspect, in some implementations of the first aspect, determining the user's eye position information according to the image includes: determining the eye position information according to the image acquired within a first preset time period, wherein the end moment of the first preset time period is the moment at which the first operation is detected.
Taking the first operation being the user's depressing of the brake pedal as an example, considering that the user's eyes may be looking at the gear lever while depressing the brake pedal, the head-up display device can be adjusted according to the eye position information determined from images captured during a period before the brake-pedal operation is detected. This improves the accuracy of the eye position information and thus of the adjustment of the head-up display device, making the height of the virtual image presented by the head-up display device better match the user's habits.
In some possible implementations, determining the user's eye position information according to the image includes: determining the eye position information according to an image acquired at the moment the first operation is detected.
In some possible implementations, the first operation may be the user making a preset gesture; or the first operation may be the user issuing a voice command instructing that the head-up display device be adjusted; or the first operation may be the user tapping a control on the in-vehicle display screen; or the first operation may be the user pressing a button on the steering wheel; or the first operation may be a stalk operation by the user.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: when the user's first operation is detected, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is being adjusted.
In the embodiments of this application, when the user's first operation is detected, the head-up display device can also be controlled to display prompt information. In this way, the user clearly knows that the head-up display device is currently being adjusted, which helps to improve the user experience.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: stopping acquiring the image when the adjustment of the head-up display device is completed.
In the embodiments of this application, acquisition of the image can be stopped when the adjustment of the head-up display device is completed. This helps to save the computing resources of the vehicle and avoids the waste of resources caused by continuously adjusting the head-up display device.
In some possible implementations, stopping acquiring the image when the adjustment of the head-up display device is completed can also be understood as stopping acquiring images of the user when the adjustment is completed; or as stopping the adjustment of the head-up display device when the adjustment is completed.
With reference to the first aspect, in some implementations of the first aspect, acquiring the image collected by the camera in the cockpit when it is detected that a user is present in the first area includes: acquiring the image when the presence of the user in the first area is detected and the user's operation of adjusting a seat in the first area is detected; and adjusting the head-up display device according to the eye position information when the user's first operation is detected includes: while the user's adjustment of the seat is detected, adjusting the head-up display device according to changes in the eye position information.
In the embodiments of this application, while the user is adjusting the seat, the vehicle can adjust the head-up display device in real time according to changes in the user's eye position information. The process of adjusting the seat is thus combined with the process of adjusting the head-up display device: the user completes the adjustment of the head-up display device in the course of adjusting the seat, which avoids cumbersome operations and helps to improve the intelligence of the vehicle and the user experience.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: while the user's adjustment of the seat is detected, controlling the head-up display device to display prompt information, the prompt information being used to prompt the user that the head-up display device is being adjusted.
In the embodiments of this application, while the user is adjusting the seat, the head-up display device can be controlled to display prompt information. In this way, the user clearly knows that the head-up display device is currently being adjusted and the height of the virtual image corresponding to each seat state, which makes it convenient for the user to complete the adjustment of the head-up display device during seat adjustment and helps to improve the user experience.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: stopping the adjustment of the head-up display device when it is detected that the user stops adjusting the seat.
In the embodiments of this application, the adjustment of the head-up display device can be stopped when the adjustment of the seat is completed. This helps to save the computing resources of the vehicle and avoids the waste of resources caused by continuous adjustment.
In some possible implementations, the method further includes: stopping acquiring the image when it is detected that the user stops adjusting the seat.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: within a second preset time period after it is detected that the user stops adjusting the seat, adjusting the head-up display device according to changes in the eye position information.
In the embodiments of this application, after the seat adjustment is completed, a period of time can be reserved for the user to continue adjusting his or her sitting posture, during which the head-up display device can continue to be adjusted in real time according to changes in the user's eye position information. In this way, the adjustment of the head-up display device is completed when both the seat posture and the body posture match the user's habits, which helps to improve the user experience.
With reference to the first aspect, in some implementations of the first aspect, the method further includes: stopping the adjustment of the head-up display device at the end of the second preset time period.
In the embodiments of this application, the adjustment of the head-up display device can be stopped at the end of the second preset time period. This helps to save the computing resources of the vehicle and avoids the waste of resources caused by continuous adjustment.
With reference to the first aspect, in some implementations of the first aspect, determining the user's eye position information according to the image includes: filtering interference data in the image; and determining the eye position information according to the image after the interference data is filtered out.
In the embodiments of this application, interference data in the image can be filtered out when determining the user's eye position information, which helps to improve the accuracy of the eye position information and thus the accuracy of the adjustment of the head-up display device.
With reference to the first aspect, in some implementations of the first aspect, adjusting the head-up display device according to the eye position information includes: determining, according to the eye position information, the height at which the head-up display device presents a virtual image; and adjusting the head-up display device according to the height.
In some possible implementations, determining the height at which the head-up display device presents a virtual image can also be understood as determining the projection height of the head-up display device.
With reference to the first aspect, in some implementations of the first aspect, the vehicle is an automobile, and the first area includes a driver's area or a front-passenger area.
According to a second aspect, an adjustment apparatus is provided for adjusting a head-up display device in a cockpit of a vehicle, the cockpit including a first area, and the head-up display device being used for displaying information to a user in the first area. The apparatus includes: a detection unit, configured to detect whether a user is present in the first area; an acquisition unit, configured to acquire an image collected by a camera in the cockpit when the detection unit detects that a user is present in the first area; a determination unit, configured to determine the user's eye position information according to the image; and an adjustment unit, configured to adjust the head-up display device according to the eye position information when the detection unit detects the user's first operation.
With reference to the second aspect, in some implementations of the second aspect, the adjustment unit is configured to: adjust the head-up display device according to the eye position information when the detection unit detects the user's operation of depressing the brake pedal.
With reference to the second aspect, in some implementations of the second aspect, the adjustment unit is configured to: adjust the head-up display device according to the eye position information when the detection unit detects the user's operation of shifting from the parking gear to another gear.
With reference to the second aspect, in some implementations of the second aspect, the determination unit is configured to: determine the eye position information according to the image acquired within a first preset time period, wherein the end moment of the first preset time period is the moment at which the first operation is detected.
With reference to the second aspect, in some implementations of the second aspect, the apparatus further includes: a control unit, configured to control the head-up display device to display prompt information when the detection unit detects the user's first operation, the prompt information being used to prompt the user that the head-up display device is being adjusted.
With reference to the second aspect, in some implementations of the second aspect, the acquisition unit is further configured to: stop acquiring the image when the adjustment of the head-up display device is completed.
With reference to the second aspect, in some implementations of the second aspect, the acquisition unit is configured to: acquire the image collected by the camera when the detection unit detects the presence of the user in the first area and detects the user's operation of adjusting a seat in the first area; and the adjustment unit is configured to: while the detection unit detects that the user is adjusting the seat, adjust the head-up display device according to changes in the eye position information.
With reference to the second aspect, in some implementations of the second aspect, the apparatus further includes: a control unit, configured to control the head-up display device to display prompt information while the detection unit detects that the user is adjusting the seat, the prompt information being used to prompt the user that the head-up display device is being adjusted.
With reference to the second aspect, in some implementations of the second aspect, the adjustment unit is further configured to: stop adjusting the head-up display device when the detection unit detects that the user stops adjusting the seat.
With reference to the second aspect, in some implementations of the second aspect, the adjustment unit is further configured to: within a second preset time period after the detection unit detects that the user stops adjusting the seat, adjust the head-up display device according to changes in the eye position information.
With reference to the second aspect, in some implementations of the second aspect, the adjustment unit is further configured to: stop adjusting the head-up display device at the end of the second preset time period.
With reference to the second aspect, in some implementations of the second aspect, the determination unit is configured to: filter interference data in the image; and determine the eye position information according to the image after the interference data is filtered out.
With reference to the second aspect, in some implementations of the second aspect, the determination unit is further configured to: determine, according to the eye position information, the height at which the head-up display device presents a virtual image; and the adjustment unit is configured to: adjust the head-up display device according to the height.
With reference to the second aspect, in some implementations of the second aspect, the vehicle is an automobile, and the first area includes a driver's area or a front-passenger area.
According to a third aspect, an adjustment apparatus is provided, including a processing unit and a storage unit, where the storage unit is configured to store instructions, and the processing unit executes the instructions stored in the storage unit so that the apparatus performs any possible method of the first aspect.
According to a fourth aspect, an adjustment system is provided, including a head-up display device and a computing platform, where the computing platform includes any possible apparatus of the second or third aspect.
According to a fifth aspect, a vehicle is provided, including any possible apparatus of the second or third aspect, or including the adjustment system of the fourth aspect.
In some possible implementations, the vehicle is an automobile.
According to a sixth aspect, a computer program product is provided, the computer program product including computer program code which, when run on a computer, causes the computer to perform any possible method of the first aspect.
It should be noted that all or part of the above computer program code may be stored on a first storage medium, where the first storage medium may be packaged together with a processor or packaged separately from the processor; the embodiments of this application do not specifically limit this.
According to a seventh aspect, a computer-readable medium is provided, the computer-readable medium storing program code which, when run on a computer, causes the computer to perform any possible method of the first aspect.
According to an eighth aspect, an embodiment of this application provides a chip system, the chip system including a processor configured to invoke a computer program or computer instructions stored in a memory so that the processor performs any possible method of the first or second aspect.
With reference to the eighth aspect, in a possible implementation, the processor is coupled to the memory through an interface.
With reference to the eighth aspect, in a possible implementation, the chip system further includes a memory in which the computer program or computer instructions are stored.
Embodiments of this application provide an adjustment method, an adjustment apparatus, and a vehicle. When the user's first operation is detected, the head-up display device can be adjusted according to the eye position information, which reduces the learning cost of adjusting the head-up display device, avoids cumbersome operations during the adjustment, and helps to improve the intelligence of the vehicle and the user experience. By combining the adjustment of the head-up display device with the user's habitual operations, the head-up display device is adjusted according to the user's eye position information when it is detected that the user depresses the brake pedal, so that the adjustment is completed without the user noticing. Considering that the user's eyes may be looking at the gear lever while depressing the brake pedal, the head-up display device can be adjusted according to eye position information determined from images captured during a period before the brake-pedal operation is detected, which improves the accuracy of the eye position information and thus of the adjustment, making the height of the virtual image presented by the head-up display device better match the user's habits. By displaying prompt information, the user clearly knows that the head-up display device is currently being adjusted, which helps to improve the user experience. Image acquisition can be stopped when the adjustment of the head-up display device is completed, which helps to save the computing resources of the vehicle and avoids the waste of resources caused by continuous adjustment.
While the seat is being adjusted, the head-up display device can be adjusted in real time according to changes in the user's eye position information. The process of adjusting the seat is thus combined with the process of adjusting the head-up display device: the user completes the adjustment of the head-up display device in the course of adjusting the seat, which avoids cumbersome operations and helps to improve the intelligence of the vehicle and the user experience. After the seat adjustment is completed, a period of time can be reserved for the user to continue adjusting his or her sitting posture, during which the head-up display device can continue to be adjusted in real time according to changes in the user's eye position information. In this way, the adjustment of the head-up display device is completed when both the seat posture and the body posture match the user's habits, which helps to improve the user experience. The adjustment of the head-up display device can be stopped at the end of the second preset time period, which helps to save the computing resources of the vehicle and avoids the waste of resources caused by continuous adjustment.
Brief Description of the Drawings
FIG. 1 is a schematic functional block diagram of a vehicle according to an embodiment of this application.
FIG. 2 is a schematic structural diagram of a windshield head-up display (W-HUD) system according to an embodiment of this application.
FIG. 3 shows a set of graphical user interfaces (GUIs) according to an embodiment of this application.
FIG. 4 shows another set of GUIs according to an embodiment of this application.
FIG. 5 shows another set of GUIs according to an embodiment of this application.
FIG. 6 shows another set of GUIs according to an embodiment of this application.
FIG. 7 is a schematic flowchart of an adjustment method according to an embodiment of this application.
FIG. 8 is another schematic flowchart of an adjustment method according to an embodiment of this application.
FIG. 9 is another schematic flowchart of an adjustment method according to an embodiment of this application.
FIG. 10 is a schematic block diagram of an adjustment apparatus according to an embodiment of this application.
FIG. 11 is a schematic block diagram of an adjustment system according to an embodiment of this application.
Detailed Description
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. In the description of the embodiments of this application, unless otherwise specified, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone.
Prefixes such as "first" and "second" in the embodiments of this application are used only to distinguish different objects of description and impose no limitation on the position, order, priority, quantity, or content of the described objects. The use of ordinal-style prefixes to distinguish described objects does not limit those objects; for statements about the described objects, see the claims or the context in the embodiments, and such prefixes should not be construed as imposing redundant limitations. In addition, in the description of the embodiments, unless otherwise specified, "multiple" means two or more.
FIG. 1 is a schematic functional block diagram of a vehicle 100 according to an embodiment of this application. The vehicle 100 may include a sensing system 120, a display device 130, and a computing platform 150, where the sensing system 120 may include one or more sensors that sense information about the environment around the vehicle 100. For example, the sensing system 120 may include a positioning system, which may be the global positioning system (GPS), the BeiDou system, or another positioning system. The sensing system 120 may further include one or more of an inertial measurement unit (IMU), a lidar, a millimeter-wave radar, an ultrasonic radar, and a camera device.
Some or all of the functions of the vehicle 100 may be controlled by the computing platform 150. The computing platform 150 may include one or more processors, for example, processors 151 to 15n (n is a positive integer). A processor is a circuit with signal processing capability. In one implementation, the processor may be a circuit with instruction reading and execution capability, such as a central processing unit (CPU), a microprocessor, a graphics processing unit (GPU, which can be understood as a kind of microprocessor), or a digital signal processor (DSP). In another implementation, the processor may implement certain functions through the logical relationships of a hardware circuit, where the logical relationships of the hardware circuit are fixed or reconfigurable; for example, the processor is a hardware circuit implemented by an application-specific integrated circuit (ASIC) or a programmable logic device (PLD), such as a field programmable gate array (FPGA). In a reconfigurable hardware circuit, the process in which the processor loads a configuration file to configure the hardware circuit can be understood as the process in which the processor loads instructions to implement the functions of some or all of the above units. In addition, the processor may be a hardware circuit designed for artificial intelligence, which can be understood as an ASIC, such as a neural network processing unit (NPU), a tensor processing unit (TPU), or a deep learning processing unit (DPU). The computing platform 150 may further include a memory configured to store instructions; some or all of the processors 151 to 15n may invoke and execute the instructions in the memory to implement corresponding functions.
Display devices 130 in the cockpit fall mainly into two categories: in-vehicle display screens and projection displays such as the HUD. An in-vehicle display screen is a physical display screen and an important part of the in-vehicle infotainment system. Multiple display screens may be arranged in the cockpit, such as a digital instrument display, a central control screen, a display in front of the passenger in the front-passenger seat (also called the front-row passenger), a display in front of the left rear passenger, and a display in front of the right rear passenger; even a window can be used as a display. A head-up display is mainly used to display driving information such as speed and navigation on a display device (for example, the windshield) in front of the driver, so as to reduce the time the driver's gaze is diverted, avoid the pupil changes caused by gaze diversion, and improve driving safety and comfort. HUDs include, for example, combiner HUD (C-HUD) systems, windshield HUD (W-HUD) systems, and augmented reality HUD (AR-HUD) systems. It should be understood that other types of HUD systems may emerge as the technology evolves; this application does not limit this.
FIG. 2 is a schematic structural diagram of a W-HUD system according to an embodiment of this application. As shown in FIG. 2, the system includes a HUD body 210, a picture generating unit (PGU) 220, a first reflective component 230 (for example, a plane mirror or a curved mirror), a second reflective component 240 (for example, a plane mirror or a curved mirror), and a windshield 250. The optical imaging principle of the W-HUD system is as follows: the image generated by the PGU is reflected by the first reflective component 230, the second reflective component 240, and the windshield 250 to the human eye; because the optical system increases the optical path and provides magnification, the user can see a virtual image 260 with depth of field outside the cockpit. W-HUD systems are currently widely used for in-vehicle driving assistance, for example, to show vehicle speed, warning signals, navigation, and advanced driving assistant system (ADAS) information.
Because users differ in sitting posture and height, their eye heights may also differ. For example, after user A gets into the vehicle, user A can adjust the height of the virtual image presented by the HUD (for example, to height 1) with buttons or a knob so that height 1 matches the height of user A's eyes. When user B gets into the vehicle, because users A and B differ in sitting posture and height, height 1 does not match the height of user B's eyes, so user B sees a distorted virtual image or cannot see the virtual image at all. User B then also needs to adjust the height of the virtual image presented by the HUD (for example, to height 2) with buttons or a knob so that height 2 matches the height of user B's eyes. For ordinary consumers, especially those using a HUD for the first time, learning how to perform this adjustment takes time. This increases the user's learning cost and makes the adjustment of the HUD cumbersome, resulting in a poor user experience.
Embodiments of this application provide an adjustment method, an adjustment apparatus, and a vehicle, in which the user's eye position information is determined from images collected by a camera in the cockpit, and the HUD is then adjusted automatically according to the eye position information. In this way, the user does not need to adjust the HUD manually, which helps to improve the intelligence of the vehicle and the user experience.
The above eye position information may include one or more kinds of eye position information, such as the eye position, the position of the region where the eyes are located, or the gaze direction. For example, the region where the user's eyes are located may be a rectangular, elliptical, or circular region. Adjusting the projection position of the HUD based on eye position information can improve the efficiency and accuracy of HUD adjustment and improve the user experience.
The above adjustment of the HUD may include adjusting the height of the virtual image presented by the HUD, adjusting the projection height of the HUD, or adjusting the projection position of the HUD.
By way of example, taking the eye position information being the eye position, FIG. 3 shows a set of graphical user interfaces (GUIs) according to an embodiment of this application.
As shown in (a) of FIG. 3, when it is detected that the user sits on the seat in the driver's area, the vehicle can collect images by a camera in the cockpit (for example, a camera of the driver monitor system (DMS) or of the cabin monitor system (CMS)), the images including the user's face information. The vehicle can determine the height of the virtual image presented by the HUD according to the eye position in the face information. When the user's operation of depressing the brake pedal is detected, a GUI as shown in (b) of FIG. 3 can be displayed.
In an embodiment, detecting that the user sits on the seat in the driver's area can also be understood as detecting that a user is present in the driver's area.
In an embodiment, collecting images by the camera in the cockpit includes: collecting images of the user by the camera in the cockpit, or collecting images of the driver's area by the camera in the cockpit.
In an embodiment, collecting images by the camera in the cockpit when it is detected that the user sits on the seat in the driver's area includes: collecting images by the camera in the cockpit when it is detected that the user sits on the seat in the driver's area, the vehicle is currently in the parking gear (P) and the HUD function is enabled.
As shown in (b) of FIG. 3, when the user's operation of depressing the brake pedal is detected, the vehicle can adjust the HUD according to the height of the virtual image presented by the HUD corresponding to the eye position. At the same time, the vehicle can display an adjustment box 301 through the HUD, the adjustment box 301 including the prompt message "Automatically matching height".
By way of example, taking the schematic structural diagram of the W-HUD system shown in FIG. 2 as an example, adjusting the HUD may include adjusting the first reflective component 230 and/or the second reflective component 240. For example, the reflection angle of the first reflective component 230 and/or the second reflective component 240 can be adjusted, thereby adjusting the height of the virtual image presented by the HUD.
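The relationship between a change in eye height and the required change in mirror tilt can be illustrated with a small-angle sketch. This is not the patented mechanism, merely a geometric illustration: it assumes a single mirror and a fixed virtual-image distance, and uses the property that rotating a mirror by an angle θ deflects the reflected ray by 2θ; all numeric values are assumptions.

```python
import math


def mirror_tilt_delta(eye_height_m, ref_eye_height_m=1.25,
                      virtual_image_distance_m=2.5):
    """Illustrative estimate of how much to tilt a HUD mirror when the
    eye height deviates from a reference height. Uses the mirror
    property that rotating a mirror by theta deflects the reflected
    ray by 2*theta; the reference height and image distance are
    assumptions, not values from the patent."""
    dh = eye_height_m - ref_eye_height_m
    # angle by which the line of sight to the virtual image shifts
    ray_angle = math.atan2(dh, virtual_image_distance_m)
    # the mirror rotates by half the required ray deflection
    return ray_angle / 2.0
```

A real W-HUD with two reflective components and a curved windshield would need a calibrated mapping rather than this single-mirror approximation, but the monotonic eye-height-to-tilt relationship is the same idea.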
由于车辆行驶之前踩下刹车踏板和挂档为用户的习惯性操作,本申请实施例中,将HUD的调节和用户的习惯性操作相结合,在检测到用户踩下刹车踏板时根据用户的眼睛位置对HUD进行调节,从而在用户无感状态下完成对HUD的调节。这样,降低了用户在调节HUD过程中的学习成本,也避免了用户在调节HUD过程中繁琐的操作,有助于提升用户的体验。
一个实施例中,车辆可以根据人脸信息中眼睛位置,确定HUD呈现的虚像的高度,包括:根据第一预设时长内采集的图像中眼睛位置,确定HUD呈现的虚像的高度,该第一预设时长为检测到用户踩下刹车踏板的操作之前的预设时长。
一个实施例中,该第一预设时长的结束时刻为检测到用户踩下刹车踏板的操作的时刻。或者,该第一预设时长的结束时刻为检测到刹车踏板的开度大于或者等于预设开度阈值的时刻。
示例性的,该第一预设时长为500毫秒(millisecond,ms)。
本申请实施例中,考虑到用户在踩下刹车踏板时眼睛可能会注视档位,可以根据检测到用户踩下刹车踏板的操作之前的一段时间内的图像确定的眼睛位置,对HUD进行调节。这样,可以提升用户的眼睛位置的准确性,从而有助于提升HUD调节的准确性,使得HUD呈现的虚像的高度更符合用户的习惯。
一个实施例中,车辆还可以是在检测到用户将档位从P档调整至其他档位的操作时,根据眼睛位置对应的HUD呈现的虚像的高度,对HUD进行调节。
一个实施例中,检测到用户将档位从P档调整至其他档位的操作,包括:检测到用户将档位从P档调整至前进档(D档)的操作。
一个实施例中,车辆还可以是在检测到用户的其他操作(例如,拨杆操作,在车载显示屏上点击确认的操作或者语音指令等)时,根据眼睛位置对应的HUD呈现的虚像的高度,对HUD进行调节。
如图3中的(c)所示,在完成对HUD的调节后,调节框301中可以显示提示信息“高度已匹配”。
如图3中的(d)所示,在通过调节框301显示提示信息“高度已匹配”后,车辆可以通过HUD在该高度处显示导航信息,例如,当前的车速为0公里/小时(Km/h),向前行驶800米后右转。
一个实施例中,在检测到用户将档位从P档调整为其他档位的操作时,车辆可以停止通过座舱内的摄像头获取图像,进而停止根据用户的眼睛位置确定HUD呈现的虚像的高度。这样,可以避免一直根据用户眼睛位置进行HUD调节所造成的资源浪费。
一个实施例中,在检测到用户踩下刹车踏板的操作时,车辆可以根据眼睛位置对应的HUD呈现的虚像的高度,对HUD进行调节。在根据眼睛的高度,完成对HUD的调节时,可以停止对HUD的调节。
本申请实施例中,用户在上车后,车辆可以在用户无感的状态下将HUD呈现的虚像的高度自动调节至符合用户习惯的高度(或者,用户的最佳视野范围内)。这样,无需用户手动调节HUD呈现的虚像的高度,有助于提升车辆的自动化程度,也有助于提升用户的体验。
示例性的,图4示出了本申请实施例提供的一组图形用户界面GUI。与图3所示的GUI不同的是,在检测到用户踩下刹车踏板的操作时,车辆可以控制HUD在调节框301中显示待显示图像(例如,导航信息、仪表信息等)。在完成对HUD的调节时,可以自动隐藏该调节框301。
如图4中的(a)所示,在检测到用户踩下刹车踏板的操作时,车辆可以根据HUD呈现的虚像的高度,对HUD进行调节。同时,车辆可以通过HUD显示调节框301,其中,调节框301中包括导航信息。
如图4中的(b)所示,在完成对HUD的调节后,车辆可以控制HUD隐藏该调节框301。在调节框301消失时,用户就可以确定车辆已经完成了对HUD呈现的虚像的高度的调节。
示例性的,图5示出了本申请实施例提供的一组图形用户界面GUI。
如图5中的(a)所示,在检测到用户坐到主驾区域中的座椅且未检测到用户调节座椅的操作时,车辆可以不通过座舱内的摄像头采集图像。
如图5中的(b)所示,在检测到用户调节座椅的操作(例如,将座椅向前调节)时,车辆可以通过座舱内的摄像头采集的图像,确定主驾区域中用户的眼睛位置。在用户调节座椅的过程中,车辆还可以根据眼睛位置的变化,对HUD呈现的虚像的高度进行实时调节。同时,车辆还可以控制HUD显示提示框301,该提示框301中包括提示信息“正在自动匹配高度”。
一个实施例中,在检测到用户调节座椅的操作时,车辆可以通过座舱内的摄像头采集图像,包括:在检测到用户对主驾区域的座椅进行调节,车辆当前处于P档且HUD功能启动时,车辆可以通过座舱内的摄像头采集图像。
例如,在用户调节座椅过程中的T1时刻,通过摄像头采集的图像确定眼睛位置在位置1,与位置1匹配的HUD呈现的虚像的高度为高度1。车辆可以根据高度1,对HUD进行调节。
一个实施例中,用户调节座椅的操作包括调节座椅的前后位置、调节座椅的高度或者调节座椅靠背角度等。
如图5中的(c)所示,在检测到用户持续调节座椅的过程(例如,进一步将座椅向前调节)中,车辆可以继续通过摄像头采集的图像,获取主驾区域中用户的眼睛位置。同时,车辆还可以控制HUD显示提示框301,该提示框301中包括提示信息“正在自动匹配高度”。
例如,在用户调节座椅过程中的T2时刻,通过摄像头采集的图像确定眼睛位置在位置2,与位置2匹配的HUD呈现的虚像的高度为高度2。车辆可以根据高度2,对HUD进行调节。
如图5中的(d)所示,在检测到用户停止调节座椅的操作时,车辆可以停止对HUD进行调节。此时,可以通过HUD显示调节框且在调节框中显示提示信息“高度已匹配”。从而用户可以获知车辆已经完成对HUD的调节。
一个实施例中,在检测到用户停止调节座椅的操作起的预设时长内,车辆可以继续根据用户的眼睛位置的变化,对HUD进行调节。若在检测到用户停止调节座椅的操作起的预设时长内未检测到用户再一次调节座椅的操作,那么在该预设时长结束时,车辆可以停止对HUD进行调节。示例性的,该预设时长为1秒(second,s)。
例如,由于调节座椅的前后位置和高度的按键并不相同,这样会导致用户在调节座椅的前后位置和高度之间可能存在一定的时间间隔。在检测到用户停止调节座椅起的预设时长内还可以继续对HUD进行调节。这样,在调节座椅的前后位置和调节座椅的高度之间存在间隔的情况下,可以不间断地对HUD进行调节,让用户在两次有间隔的座椅调节过程中体验到不间断的HUD调节过程,有助于提升用户的体验。
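上述"座椅停止调节后在第二预设时长内保持HUD调节不中断"的逻辑,类似一个去抖(debounce)判断,可以用如下Python草图示意(类名与接口均为假设):

```python
class SeatAdjustDebounce:
    """座椅两段调节(如先调前后位置、再调高度)之间可能有间隔;
    在最近一次座椅调节事件起的保持时长(默认1000ms)内,
    认为HUD调节仍应持续进行。"""

    def __init__(self, hold_ms=1000):
        self.hold_ms = hold_ms
        self.last_event_ms = None  # 最近一次座椅调节事件的时间戳

    def on_seat_event(self, t_ms):
        # 每检测到一次座椅调节操作,刷新时间戳
        self.last_event_ms = t_ms

    def adjusting(self, now_ms):
        # 未发生过座椅调节时不调节;否则在保持时长内持续调节
        if self.last_event_ms is None:
            return False
        return now_ms - self.last_event_ms <= self.hold_ms
```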
以上车辆停止对HUD进行调节也可以理解为车辆停止通过摄像头获取座舱内的图像,或者车辆停止确定主驾区域中用户的眼睛位置。
如图5中的(e)所示,在通过调节框301显示提示信息“高度已匹配”后,车辆可以通过HUD显示导航信息,例如,当前的车速为0公里/小时(Km/h),向前行驶800米后右转。
本申请实施例中,用户在调节座椅的过程中,车辆可以根据用户的眼睛位置的变化,对HUD呈现的虚像的高度进行实时调节。在用户感觉HUD呈现的虚像的高度已经满足用户的习惯时,可以停止对座椅的调节,从而车辆可以停止对HUD的调节。这样,调节座椅的过程与调节HUD的过程相结合,用户可以自己决定HUD呈现的虚像的高度,有助于提升用户的体验。
以上图5中是以用户调节座椅的过程中,根据眼睛位置的变化,对HUD进行实时调节为例进行说明的,本申请实施例并不限于此。例如,在检测到用户坐上主驾区域的座椅并且踩下刹车踏板起的预设时长内,车辆可以根据用户眼睛位置的变化,对HUD进行实时调节。例如,在检测到用户踩下刹车踏板的操作时,车辆可以控制HUD显示倒计时信息(例如,倒计时5秒)。在倒计时的过程中,用户可以对自己的坐姿和/或座椅的姿态进行调整。在倒计时结束时,车辆可以根据倒计时结束时用户的眼睛位置,将HUD呈现的虚像的高度调整至与该眼睛位置相匹配的高度。在倒计时结束后,用户还可以正常通过踩刹车、挂档等操作控制车辆行驶。这样,将HUD的调节和用户的习惯性操作相结合,降低了用户在调节HUD过程中的学习成本,也避免了用户在调节HUD过程中繁琐的操作,有助于提升用户的体验。
以上图5中通过检测到用户调节座椅的操作,开始对HUD进行实时调节,本申请实施例并不限于此。
例如,也可以在检测到用户打开车门的操作时,开始根据用户的眼睛位置对HUD进行实时调节。在通过摄像头采集的图像检测到用户的预设手势(例如,OK手势)时,停止对HUD进行调节。
又例如,还可以在检测到用户坐上座椅时,开始根据用户的眼睛位置对HUD进行实时调节。在通过摄像头采集的图像检测到用户的预设手势(例如,OK手势)时,停止对HUD进行调节。
以上图5所示的GUI中,在检测到用户停止调节座椅的操作时,车辆可以停止对HUD进行调节。本申请实施例并不限于此。例如,考虑到停止调节座椅时用户的坐姿还不符合用户的习惯,还可以预留时间让用户调整坐姿。在用户调整坐姿的过程中,车辆还可以继续对HUD进行实时调节。
示例性的,图6示出了本申请实施例提供的一组图形用户界面GUI。
如图6中的(a)所示,在检测到用户停止调节座椅的操作时,车辆可以通过HUD显示提示框301、提示信息“倒计时结束后停止调节”以及倒计时的信息。
示例性的,在检测到用户停止调节座椅的操作时,车辆可以预留给用户3s的时间进行坐姿的调整。在这3秒中,车辆还可以根据摄像头采集的图像确定用户的眼睛位置的变化,从而根据该眼睛位置的变化,对HUD呈现虚像的高度进行实时调节。
如图6中的(b)所示,在3s倒计时结束时,车辆可以停止对HUD进行调节。同时,车辆可以通过HUD显示提示框301以及提示信息“高度已匹配”。
本申请实施例中,用户在完成座椅的调节后,车辆可以给用户预留一段时间继续调整坐姿,在用户调整坐姿的过程中车辆还可以根据用户的眼睛位置的变化对HUD进行实时调节。在预留时间结束时,车辆可以停止对HUD进行调节。这样,在座椅的姿态和身体的姿态均符合用户的习惯时,完成对HUD的调节,有助于提升用户的体验。
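"倒计时内继续跟随眼睛位置、倒计时结束即停止调节"的流程,可以用如下Python草图示意(以离散时刻的眼睛高度序列代替真实采样,倒计时节拍数为假设参数):

```python
def adjust_with_countdown(eye_positions, countdown_ticks=3):
    """座椅停止调节后,在倒计时内继续按眼睛位置变化调节HUD;
    倒计时结束时,以最后一次检测到的眼睛位置为准停止调节。

    eye_positions:倒计时各时刻检测到的眼睛高度序列(假设输入)。
    返回倒计时结束时采用的最终眼睛高度;无有效样本时返回 None。
    """
    final = None
    for tick, pos in enumerate(eye_positions):
        if tick >= countdown_ticks:  # 倒计时结束,停止跟随
            break
        final = pos                  # 倒计时内实时跟随眼睛位置
    return final
```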
以上图6所示的GUI是以倒计时结束时刻作为停止调节HUD的时刻,本申请实施例并不限于此。例如,还可以是在检测到用户踩刹车踏板的操作时,停止对HUD进行调节。又例如,还可以在检测到用户将档位从P档调节至其他档位的操作时,停止对HUD进行调节。
一个实施例中,上述倒计时的长度可以是车辆出厂时就设置好的,或者,也可以是用户设置的。
图7示出了本申请实施例提供的调节方法700的示意性流程图。如图7所示,该方法包括:
S701,计算平台获取第一区域的座椅传感器采集的数据。
示例性的,该第一区域可以为主驾区域或者副驾区域。
一个实施例中,座椅传感器采集的数据用于指示用户是否坐上座椅,或者,座椅传感器采集的数据用于判断第一区域内是否存在用户。
一个实施例中,该座椅传感器可以为主驾区域的座椅传感器或者副驾区域的座椅传感器。
示例性的,该座椅传感器可以为座椅下的压力传感器或者对安全带状态进行检测的传感器。例如,在某个时间段内,计算平台可以根据主驾区域中座椅下的压力传感器检测的压力值的变化,确定用户坐上主驾区域的座椅。
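根据压力值变化判断座椅上是否存在用户,可以用如下Python草图示意(压力阈值与连续采样次数均为假设参数,用于过滤放置物品等瞬时扰动):

```python
def occupant_detected(pressure_history, threshold=150.0, hold_samples=5):
    """根据座椅压力传感器的读数序列判断第一区域是否存在用户。

    threshold(压力阈值)与 hold_samples(连续采样次数)为假设参数:
    仅当压力连续多次超过阈值时,才认为用户已坐上座椅。
    """
    if len(pressure_history) < hold_samples:
        return False
    # 最近 hold_samples 次读数均超过阈值才判定有人
    return all(p >= threshold for p in pressure_history[-hold_samples:])
```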
S702,计算平台获取档位传感器采集的数据。
一个实施例中,档位传感器采集的数据用于指示当前车辆的档位,例如,当前车辆处于P档。
S703,计算平台获取HUD是否开启的信息。
示例性的,在用户启动HUD时,HUD可以向计算平台发送HUD已经启动的信息。
以上S701-S703之间并没有实际的先后顺序。
S704,在检测到座椅上有用户,车辆处于P档且HUD启动时,计算平台可以获取座舱内的摄像头采集的图像。
示例性的,在检测到主驾区域的座椅上有用户,车辆处于P档且HUD启动时,计算平台可以控制座舱内的DMS采集图像。例如,计算平台可以控制DMS采集主驾区域的图像,或者,控制DMS采集该用户的图像。
S705,计算平台获取该图像中的人脸信息。
示例性的,计算平台可以根据图像分割算法获取该图像中的人脸信息。
S706,计算平台根据该人脸信息过滤干扰数据。
示例性的,摄像头采集的图像中可能会存在一些干扰数据,例如,在用户低头看刹车、换鞋、看左侧或者右侧的后视镜或者抬头看车内后视镜时采集的图像对于调节HUD来说都是干扰数据,可以进行过滤。
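按头部姿态剔除干扰帧(低头看刹车、换鞋、看左右后视镜等)的过滤思路,可以用如下Python草图示意(以偏航角/俯仰角阈值判定为假设的过滤准则,阈值取值亦为假设):

```python
def filter_interference(frames, yaw_limit=25.0, pitch_limit=20.0):
    """过滤干扰数据:剔除用户大幅转头或低头/抬头时采集的帧。

    frames 为 (yaw, pitch, eye_h) 三元组列表,其中 yaw/pitch 为
    头部偏航角与俯仰角(度),eye_h 为该帧确定的眼睛高度。
    角度阈值为假设值,返回有效帧的眼睛高度列表。
    """
    valid = []
    for yaw, pitch, eye_h in frames:
        # 头部姿态在正视范围内的帧才视为有效数据
        if abs(yaw) <= yaw_limit and abs(pitch) <= pitch_limit:
            valid.append(eye_h)
    return valid
```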
S707,计算平台根据过滤干扰数据后的人脸信息,确定用户的眼部位置信息。
示例性的,计算平台可以根据预设时长(例如,3秒)内获取的有效数据(或者,人脸信息中除干扰数据以外的数据),确定用户的眼部位置信息。通过转化矩阵可以将该眼部位置信息转换到与HUD同一坐标系下,从而可以确定眼部位置信息对应的HUD的档位信息。
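将眼部位置信息通过转化矩阵转换到与HUD同一坐标系,通常可以用4×4齐次变换实现,如下Python草图示意(矩阵取值仅作演示,实际矩阵需通过摄像头与HUD的外参标定得到):

```python
def to_hud_frame(eye_cam, T):
    """用 4x4 齐次转化矩阵 T 将摄像头坐标系下的眼睛位置
    eye_cam = (x, y, z) 转换到 HUD 坐标系,返回 (x', y', z')。

    T 为按行给出的嵌套列表;此处不依赖任何第三方库,
    手工展开矩阵-向量乘法以便说明原理。
    """
    x, y, z = eye_cam
    v = (x, y, z, 1.0)  # 齐次坐标
    out = [sum(a * b for a, b in zip(row, v)) for row in T]
    return tuple(out[:3])
```

例如,当 T 仅包含平移分量时,输出即为输入加上平移量。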
示例性的,该眼部位置信息包括眼睛位置、眼睛所在的区域的位置或者视线方向等。
应理解,该HUD的档位信息可以为HUD呈现的虚像的高度。
S708,计算平台根据该眼部位置信息,确定HUD的档位信息。
一个实施例中,计算平台可以根据该眼部位置信息以及眼部位置信息与HUD的档位信息的映射关系,确定HUD的档位信息。
示例性的,该眼部位置信息可以为眼睛位置,计算平台可以根据该眼睛位置以及眼睛位置与HUD的档位信息的映射关系,确定HUD的档位信息。
S709,计算平台获取刹车踏板传感器采集的数据。
一个实施例中,该刹车踏板传感器采集的数据用于指示刹车踏板的开度。
S710,在刹车踏板的开度大于或者等于预设阈值时,计算平台可以根据该HUD的档位信息对HUD进行调节。
一个实施例中,从检测到用户坐上座椅到检测到刹车踏板的开度大于或者等于预设阈值的时长为△t。在△t内,计算平台可以根据人脸信息中的有效数据,确定用户的眼部位置信息。例如,在△t内,通过人脸信息中的有效数据获取到N个眼部位置信息,对该N个眼部位置信息进行加权平均,即可得到最终的用户眼部位置信息。从而计算平台可以根据该最终得到的眼部位置信息,对HUD进行调节。
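对N个眼部位置信息进行加权平均,可以用如下Python草图示意(此处采用"越新的样本权重越大"的线性权重,仅为一种假设的加权方式;实际权重设计可结合各样本的置信度等因素):

```python
def fuse_eye_positions(samples):
    """对 Δt 内得到的 N 个眼部位置(此处以眼睛高度标量为例)
    做加权平均,得到最终的眼部位置;样本越新权重越大。"""
    n = len(samples)
    if n == 0:
        return None
    weights = [i + 1 for i in range(n)]  # 权重 1, 2, ..., n
    total_w = sum(weights)
    return sum(w * s for w, s in zip(weights, samples)) / total_w
```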
一个实施例中,检测到刹车踏板的开度大于或者等于预设阈值的时刻或者检测到用户踩下刹车踏板的时刻为t1时刻,那么计算平台可以获取t1时刻之前的预设时长(例如,500ms)内的人脸信息,并根据该预设时长内的人脸信息确定最终用户的眼部位置信息。例如,在t1时刻之前的500ms内的人脸信息中未包括干扰数据,那么计算平台可以根据这500ms内的人脸信息,确定用户的眼部位置信息。
一个实施例中,该预设时长的结束时刻为该t1时刻;或者,该预设时长的结束时刻为该t2时刻,其中,t2时刻为t1时刻之前的一个时刻。
一个实施例中,在刹车踏板的开度大于或者等于预设阈值时,计算平台还可以控制HUD显示提示信息,该提示信息用于提示用户正在对HUD进行调节。
一个实施例中,在计算平台完成对该HUD的调节时,可以自动隐藏该提示信息。或者,在计算平台完成对该HUD的调节时,可以控制HUD显示待显示的图像,例如,导航信息或者仪表信息。
本申请实施例中,用户在上车后,车辆可以在用户无感的状态下将HUD呈现的图像的高度自动调节至符合用户的习惯的高度。这样,无需用户手动调节HUD呈现的图像的高度,有助于提升车辆的自动化程度,也有助于提升用户的体验。
图8示出了本申请实施例提供的调节方法800的示意性流程图。如图8所示,该方法包括:
S801,计算平台获取座椅状态的变化。
一个实施例中,该座椅状态的变化用于指示座椅高度的变化、座椅靠背角度的变化或者座椅前后位置的变化。
S802,计算平台获取档位传感器采集的数据。
S803,计算平台获取HUD是否开启的信息。
以上S802-S803可以参考上述S702-S703的描述,此处不再赘述。
S804,在检测到座椅状态发生变化,车辆处于P档且HUD启动时,计算平台可以获取座舱内的摄像头采集的图像。
一个实施例中,检测到座椅状态发生变化包括:检测到座椅高度、座椅靠背的角度或者座椅前后位置发生变化中的一个或者多个。
S805,计算平台获取该图像中的人脸信息。
S806,计算平台根据该人脸信息过滤干扰数据。
S807,计算平台根据过滤干扰数据后的人脸信息,确定用户的眼部位置信息。
S808,计算平台根据该眼部位置信息,确定HUD的档位信息。
以上S805-S808的过程可以参考上述S705-S708的过程,此处不再赘述。
S809,计算平台根据该HUD的档位信息,对HUD进行调节。
在检测到用户调节座椅的过程中,计算平台可以重复执行上述S804-S809,从而可以在不同座椅状态下,根据眼部位置信息的变化,对HUD进行实时调节。
一个实施例中,在检测到座椅状态发生变化,车辆处于P档且HUD启动时,计算平台可以周期性地获取摄像头采集的图像。
例如,在第一个周期中,计算平台可以根据该周期内获取的图像中人脸信息,确定用户的眼睛位置为位置1,从而计算平台可以根据该位置1将HUD呈现的虚像的高度调节至高度1,其中,位置1与高度1之间具有对应关系。
又例如,在第二个周期中,计算平台可以根据该周期内获取的图像中人脸信息,确定用户的眼睛位置为位置2,从而计算平台可以根据该位置2将HUD呈现的虚像的高度调节至高度2,其中,位置2与高度2之间具有对应关系。以此类推,计算平台可以在不同的周期内对HUD进行实时调节。
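逐周期"眼睛位置→虚像高度"的实时调节循环,可以用如下Python草图示意(其中为避免高度指令抖动引入了一个死区阈值,该死区为本示例的假设,并非上述实施例的必备步骤):

```python
def periodic_adjust(eye_positions, pos_to_height, deadband=2):
    """每个采集周期根据当期眼睛位置查映射得到目标虚像高度;
    仅当目标高度相对上次下发值的变化超过死区 deadband(假设
    参数)时才下发新的调节指令。返回实际下发的高度序列。

    pos_to_height:眼睛位置到虚像高度的映射(假设已标定)。
    """
    issued = []
    last = None
    for pos in eye_positions:  # 每个元素对应一个采集周期
        h = pos_to_height(pos)
        if last is None or abs(h - last) > deadband:
            issued.append(h)
            last = h
    return issued
```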
一个实施例中,在对HUD进行实时调节的过程中,计算平台还可以控制HUD显示提示信息,该提示信息用于提示用户正在对HUD进行调节。
S810,在检测到用户停止对座椅进行调节时,计算平台可以停止对HUD进行调节。
一个实施例中,在检测到用户停止对座椅进行调节起的预设时长内,计算平台还可以继续根据眼部位置信息的变化,对HUD进行实时调节。在该预设时长结束时,计算平台可以停止对HUD进行调节。
图9示出了本申请实施例提供的调节方法900的示意性流程图。该方法900可以由运载工具(例如,车辆)执行,或者,该方法900可以由上述计算平台执行,或者,该方法900可以由计算平台和抬头显示装置组成的系统执行,或者,该方法900可以由上述计算平台中的片上系统(system-on-a-chip,SOC)执行,或者,该方法900可以由计算平台中的处理器执行。该方法900可以应用于运载工具的座舱,该座舱内包括第一区域,该座舱内设置有抬头显示装置,该抬头显示装置用于向该第一区域中的用户显示信息,该方法900包括:
S910,在检测到该第一区域存在用户时,获取该座舱内的摄像头采集的图像。
可选地,在检测到该第一区域存在用户时,获取该座舱内的摄像头采集的图像,包括:在检测到该第一区域存在用户,车辆处于驻车档且抬头显示装置开启时,获取该座舱内的摄像头采集的该图像。
示例性的,以该运载工具是车辆为例,该第一区域可以为车辆中的主驾区域。在检测到用户位于主驾区域,车辆处于驻车档且抬头显示装置开启时,可以获取该图像。
一个实施例中,获取该座舱内的摄像头采集的图像,包括:获取该用户的图像,或者,获取包括整个该第一区域在内的图像,或者,获取包括部分该第一区域在内的图像。
在一些可能的实现方式中,检测到该第一区域存在用户,包括:检测到第一区域中从不存在用户到存在用户。
可选地,在检测到该第一区域存在用户时,获取该座舱内的摄像头采集的图像,包括:在检测到位于第一区域中的用户调节座椅,车辆处于驻车档且抬头显示装置开启时,获取该座舱内的摄像头采集的图像。
示例性的,以该运载工具是车辆为例,该第一区域可以为车辆中的主驾区域。在检测到位于主驾区域的用户调节座椅的操作,车辆处于驻车档且抬头显示装置开启时,可以获取该图像。
可选地,该摄像头为DMS或者CMS的摄像头。
S920,根据该图像,确定该用户的眼部位置信息。
示例性的,基于图像分割算法,可以从该图像中获取用户的眼部位置信息。通过转化矩阵可以将该眼部位置信息转换到与HUD同一坐标系下,从而可以确定眼部位置信息对应的HUD呈现虚像的高度。
可选地,该眼部位置信息可以是一个或者多个眼部位置信息。
例如,该眼部位置信息可以为根据某一个时刻获取的图像确定的一个眼部位置信息,或者,也可以是根据某个时间段内获取的图像确定的一个眼部位置信息,或者,还可以是根据某个时间段内获取的图像确定的多个眼部位置信息。
又例如,在该眼部位置信息为一个眼部位置信息时,可以根据该眼部位置信息对该抬头显示装置进行调节;或者,在该眼部位置信息为多个眼部位置信息时,可以根据该多个眼部位置信息对该抬头显示装置进行调节。
以上S910和S920中是以通过座舱内的摄像头采集图像并根据该图像确定眼部位置信息为例进行说明的,本申请实施例并不限于此。例如,还可以通过座舱内的其他传感器采集的数据确定用户的眼部位置信息。如可以通过座舱内的激光雷达采集的点云数据确定用户的眼部位置信息;又如通过座舱内的毫米波雷达采集的点云数据确定用户的眼部位置信息。
S930,在检测到该用户的第一操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,该在检测到该用户的第一操作时,根据该眼部位置信息,对该抬头显示装置进行调节,包括:在检测到该用户踩刹车踏板的操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
示例性的,如图3中的(a)和(b)所示,在检测到用户踩下刹车踏板的操作时,车辆可以根据用户的眼睛位置对HUD进行调节。
可选地,该在检测到该用户的第一操作时,根据该眼部位置信息,对该抬头显示装置进行调节,包括:在检测到该用户将档位从驻车档调整至其他档位的操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,在检测到该用户将档位从驻车档调整至其他档位的操作时,根据该眼部位置信息,对该抬头显示装置进行调节,包括:在检测到该用户将档位从驻车档调整至前进档的操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,该根据该图像,确定用户的眼部位置信息,包括:根据第一预设时长内获取的该图像,确定该眼部位置信息,其中,该第一预设时长的结束时刻为检测到该第一操作的时刻。
可选地,该根据该图像,确定用户的眼部位置信息,包括:根据检测到用户踩下刹车踏板的操作时获取的图像,确定该眼部位置信息。
可选地,该方法900还包括:在检测到该用户的第一操作时,控制该抬头显示装置显示提示信息,该提示信息用于提示该用户该抬头显示装置在进行调节。
示例性的,如图3中的(b)所示,在检测到用户踩下刹车踏板的操作时,可以控制抬头显示装置显示调节框301,其中,调节框301中显示信息“正在自动匹配高度”。用户在看到调节框301以及调节框301中的信息后,可以获知车辆当前正在自动调节HUD。
示例性的,如图4中的(a)所示,在检测到用户踩下刹车踏板的操作时,可以控制抬头显示装置显示调节框301,其中,调节框301中显示导航信息。用户在看到调节框301后,可以获知车辆当前正在自动调节HUD。
可选地,该方法900还包括:在完成对该抬头显示装置的调节时,停止获取该图像。
以上停止获取该图像也可以理解为停止获取该用户的图像,或者,还可以理解为停止获取该第一区域的图像,或者,还可以理解为停止调节HUD。
可选地,该在检测到该第一区域存在用户时,获取该座舱内的摄像头采集的图像,包括:在检测到该第一区域存在该用户且检测到该用户调节该第一区域中座椅的操作时,获取该摄像头采集的该图像;该在检测到该用户的第一操作时,根据该眼部位置信息,对该抬头显示装置进行调节,包括:在检测到该用户调节该座椅的过程中,根据该眼部位置信息的变化,对该抬头显示装置进行调节。
示例性的,如图5中的(b)和(c)所示,在检测到用户将座椅的位置向前调节的过程中,车辆可以根据眼睛位置的变化,对HUD进行实时调节。
以上对HUD进行实时调节也可以理解为在调节座椅的过程中,根据不同时刻确定的眼部位置信息,对HUD进行调节,或者,在调节座椅的过程中,根据眼部位置信息的变化,对HUD进行调节。
例如,以该眼部位置信息是眼睛位置为例,在检测到用户调节座椅的过程中的T1时刻,根据获取的图像中人脸信息,确定用户的眼睛位置为位置1,从而可以根据该位置1将HUD呈现的虚像的高度调节至高度1,其中,位置1与高度1之间具有对应关系。又例如,在检测到用户调节座椅的过程中的T2时刻,根据获取的图像中人脸信息,确定用户的眼睛位置为位置2,从而可以根据该位置2将HUD呈现的虚像的高度调节至高度2,其中,位置2与高度2之间具有对应关系。
可选地,该方法900还包括:在检测到该用户调节该座椅的过程中,控制该抬头显示装置显示提示信息,该提示信息用于提示该用户该抬头显示装置在进行调节。
示例性的,如图5中的(b)和(c)所示,在用户调节该座椅的过程中,可以控制HUD显示调节框301,其中,调节框301中包括信息“正在自动匹配高度”。用户在看到调节框301后,可以获知车辆当前正在自动调节HUD。
可选地,该方法900还包括:在检测到该用户停止调节该座椅的操作时,停止对该抬头显示装置的调节。
以上停止对该抬头显示装置的调节也可以理解为停止获取图像,或者,还可以理解为停止获取该用户的图像,或者,还可以理解为停止获取该第一区域的图像。
可选地,该方法900还包括:在检测到该用户停止调节该座椅的操作起的第二预设时长内,根据该眼部位置信息的变化,对该抬头显示装置进行调节。
示例性的,如图6中的(a)所示,在检测到用户停止调节座椅的操作起的3s内,车辆还可以根据用户的眼睛位置的变化,对该抬头显示装置进行调节。在这3s期间内,用户还可以对自己的坐姿进行调节,从而使得最终HUD呈现虚像的高度满足用户的习惯。
可选地,该方法900还包括:在该第二预设时长结束时,停止对该抬头显示装置的调节。
示例性的,如图6中的(b)所示,在检测到3s倒计时结束时,可以停止对HUD的调节。车辆可以将3s倒计时结束时刻用户的眼睛位置作为最终的眼睛位置,从而对HUD进行调节。调节完成后停止对HUD进行调节,或者,调节完成后停止采集图像。
可选地,该根据该图像,确定用户的眼部位置信息,包括:过滤该图像中的干扰数据;根据过滤该干扰数据后的该图像,确定该眼部位置信息。
可选地,该干扰数据包括但不限于用户低头看刹车、换鞋、看左侧或者右侧的后视镜或者抬头看车内后视镜时采集的图像。
可选地,该根据该眼部位置信息,对该抬头显示装置进行调节,包括:根据该眼部位置信息,确定该抬头显示装置呈现虚像的高度;根据该高度,对该抬头显示装置进行调节。
可选地,该运载工具为车辆,该第一区域包括主驾区域或者副驾区域。
本申请实施例还提供用于实现以上任一种方法的装置,例如,提供一种装置包括用以实现以上任一种方法中运载工具(例如,车辆),或者,车辆中的计算平台,或者,计算平台中的SOC,或者,计算平台中的处理器所执行的各步骤的单元(或手段)。
图10示出了本申请实施例提供的一种调节装置1000的示意性框图。该装置1000用于调节运载工具座舱内的抬头显示装置,该座舱内包括第一区域,该抬头显示装置用于向该第一区域中的用户显示信息。如图10所示,该装置1000包括:检测单元1010,用于检测该第一区域是否存在用户;获取单元1020,用于在该检测单元1010检测到该第一区域存在用户时,获取该座舱内的摄像头采集的图像;确定单元1030,用于根据该图像,确定该用户的眼部位置信息;调节单元1040,用于在该检测单元1010检测到该用户的第一操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,该调节单元1040,用于:在该检测单元1010检测到该用户踩刹车踏板的操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,该调节单元1040,用于:在该检测单元1010检测到该用户将档位从驻车档调整至其他档位的操作时,根据该眼部位置信息,对该抬头显示装置进行调节。
可选地,该确定单元1030,用于:根据该获取单元1020在第一预设时长内获取的该图像,确定该眼部位置信息,其中,该第一预设时长的结束时刻为检测到该第一操作的时刻。
可选地,该装置1000还包括:控制单元,用于在该检测单元1010检测到该用户的第一操作时,控制该抬头显示装置显示提示信息,该提示信息用于提示该用户该抬头显示装置在进行调节。
可选地,该获取单元1020,还用于:在完成对该抬头显示装置的调节时,停止获取该图像。
可选地,该获取单元1020,用于:在该检测单元1010检测到该第一区域存在该用户且检测到该用户调节该第一区域中座椅的操作时,获取该摄像头采集的该图像;该调节单元1040,用于:在该检测单元1010检测到该用户调节该座椅的过程中,根据该眼部位置信息的变化,对该抬头显示装置进行调节。
可选地,该装置1000还包括:控制单元,用于在该检测单元1010检测到该用户调节该座椅的过程中,控制该抬头显示装置显示提示信息,该提示信息用于提示该用户该抬头显示装置在进行调节。
可选地,该调节单元1040,还用于:在该检测单元1010检测到该用户停止调节该座椅的操作时,停止对该抬头显示装置的调节。
可选地,该调节单元1040,还用于:在该检测单元1010检测到该用户停止调节该座椅的操作起的第二预设时长内,根据该眼部位置信息的变化,对该抬头显示装置进行调节。
可选地,该调节单元1040,还用于:在该第二预设时长结束时,停止对该抬头显示装置的调节。
可选地,该确定单元1030,用于:过滤该图像中的干扰数据;根据过滤该干扰数据后的该图像,确定该眼部位置信息。
可选地,该确定单元1030,还用于:根据该眼部位置信息,确定该抬头显示装置呈现虚像的高度;该调节单元1040,用于:根据该高度,对该抬头显示装置进行调节。
可选地,该运载工具为车辆,该第一区域包括主驾区域或者副驾区域。
例如,检测单元1010可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以检测单元1010为计算平台中的处理器151为例,处理器151可以获取第一区域中座椅下的压力传感器采集的压力值,从而根据该压力值的变化确定该第一区域是否存在用户。
又例如,获取单元1020可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以获取单元1020为计算平台中的处理器152为例,处理器152可以在处理器151确定第一区域存在该用户时,控制座舱内的摄像头启动并且获取摄像头采集的图像。
又例如,确定单元1030可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以确定单元1030为计算平台中的处理器153为例,处理器153可以获取处理器152发送的图像,并且根据图像分割算法确定用户的眼部位置信息。
可选地,获取单元1020和确定单元1030所实现的功能可以由同一处理器实现。
又例如,调节单元1040可以是图1中的计算平台或者计算平台中的处理电路、处理器或者控制器。以调节单元1040为计算平台中的处理器15n为例,处理器15n可以根据处理器153确定的眼部位置信息,调节如图2所示的第一反射组件230和/或第二反射组件240,从而实现对HUD呈现虚像的高度的调节。
以上检测单元1010所实现的功能、获取单元1020所实现的功能、确定单元1030所实现的功能和调节单元1040所实现的功能可以分别由不同的处理器实现,或者,也可以是部分功能由相同的处理器实现,或者,还可以所有功能均由相同的处理器实现,本申请实施例对此不作限定。
应理解以上装置中各单元的划分仅是一种逻辑功能的划分,实际实现时可以全部或部分集成到一个物理实体上,也可以物理上分开。此外,装置中的单元可以以处理器调用软件的形式实现;例如装置包括处理器,处理器与存储器连接,存储器中存储有指令,处理器调用存储器中存储的指令,以实现以上任一种方法或实现该装置各单元的功能,其中处理器例如为通用处理器,例如CPU或微处理器,存储器为装置内的存储器或装置外的存储器。或者,装置中的单元可以以硬件电路的形式实现,可以通过对硬件电路的设计实现部分或全部单元的功能,该硬件电路可以理解为一个或多个处理器;例如,在一种实现中,该硬件电路为ASIC,通过对电路内元件逻辑关系的设计,实现以上部分或全部单元的功能;再如,在另一种实现中,该硬件电路可以通过PLD实现,以FPGA为例,其可以包括大量逻辑门电路,通过配置文件来配置逻辑门电路之间的连接关系,从而实现以上部分或全部单元的功能。以上装置的所有单元可以全部通过处理器调用软件的形式实现,或全部通过硬件电路的形式实现,或部分通过处理器调用软件的形式实现,剩余部分通过硬件电路的形式实现。
在本申请实施例中,处理器是一种具有信号的处理能力的电路,在一种实现中,处理器可以是具有指令读取与运行能力的电路,例如CPU、微处理器、GPU、或DSP等;在另一种实现中,处理器可以通过硬件电路的逻辑关系实现一定功能,该硬件电路的逻辑关系是固定的或可以重构的,例如处理器为ASIC或PLD实现的硬件电路,例如FPGA。在可重构的硬件电路中,处理器加载配置文档,实现硬件电路配置的过程,可以理解为处理器加载指令,以实现以上部分或全部单元的功能的过程。此外,还可以是针对人工智能设计的硬件电路,其可以理解为一种ASIC,例如NPU、TPU、DPU等。
可见,以上装置中的各单元可以是被配置成实施以上方法的一个或多个处理器(或处理电路),例如:CPU、GPU、NPU、TPU、DPU、微处理器、DSP、ASIC、FPGA,或这些处理器形式中至少两种的组合。
此外,以上装置中的各单元可以全部或部分可以集成在一起,或者可以独立实现。在一种实现中,这些单元集成在一起,以SOC的形式实现。该SOC中可以包括至少一个处理器,用于实现以上任一种方法或实现该装置各单元的功能,该至少一个处理器的种类可以不同,例如包括CPU和FPGA,CPU和人工智能处理器,CPU和GPU等。
本申请实施例还提供了一种装置,该装置包括处理单元和存储单元,其中存储单元用于存储指令,处理单元执行存储单元所存储的指令,以使该装置执行上述实施例执行的方法或者步骤。
可选地,若该装置位于车辆中,上述处理单元可以是图1所示的处理器151-15n。
图11示出了本申请实施例提供的调节系统1100的示意性框图。如图11所示,该调节系统1100中包括抬头显示装置和计算平台,其中,该计算平台可以包括上述调节装置1000。
本申请实施例还提供了一种运载工具,该运载工具可以包括上述调节装置1000或者调节系统1100。
可选地,该运载工具可以为车辆。
本申请实施例还提供了一种计算机程序产品,所述计算机程序产品包括:计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
本申请实施例还提供了一种计算机可读介质,所述计算机可读介质存储有程序代码,当所述计算机程序代码在计算机上运行时,使得计算机执行上述方法。
在实现过程中,上述方法的各步骤可以通过处理器中的硬件的集成逻辑电路或者软件形式的指令完成。结合本申请实施例所公开的方法可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者上电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器,处理器读取存储器中的信息,结合其硬件完成上述方法的步骤。为避免重复,这里不再详细描述。
应理解,本申请实施例中,该存储器可以包括只读存储器和随机存取存储器,并向处理器提供指令和数据。
还应理解,在本申请的各种实施例中,上述各过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何熟悉本技术领域的技术人员在本申请揭露的技术范围内,可轻易想到变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (34)

  1. 一种调节方法,其特征在于,应用于运载工具的座舱,所述座舱内包括第一区域,所述座舱内设置有抬头显示装置,所述抬头显示装置用于向所述第一区域中的用户显示信息,所述方法包括:
    在检测到所述第一区域存在用户时,获取所述座舱内的摄像头采集的图像;
    根据所述图像,确定所述用户的眼部位置信息;
    在检测到所述用户的第一操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  2. 根据权利要求1所述的方法,其特征在于,所述在检测到所述用户的第一操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节,包括:
    在检测到所述用户踩刹车踏板的操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  3. 根据权利要求1所述的方法,其特征在于,所述在检测到所述用户的第一操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节,包括:
    在检测到所述用户将档位从驻车档调整至其他档位的操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  4. 根据权利要求1至3中任一项所述的方法,其特征在于,所述根据所述图像,确定所述用户的眼部位置信息,包括:
    根据第一预设时长内获取的所述图像,确定所述眼部位置信息,其中,所述第一预设时长的结束时刻为检测到所述第一操作的时刻。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,所述方法还包括:
    在检测到所述用户的第一操作时,控制所述抬头显示装置显示提示信息,所述提示信息用于提示所述用户所述抬头显示装置在进行调节。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述方法还包括:
    在完成对所述抬头显示装置的调节时,停止获取所述图像。
  7. 根据权利要求1所述的方法,其特征在于,所述在检测到所述第一区域存在用户时,获取所述座舱内的摄像头采集的图像,包括:
    在检测到所述第一区域存在所述用户且检测到所述用户调节所述第一区域中座椅的操作时,获取所述图像;
    所述在检测到所述用户的第一操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节,包括:
    在检测到所述用户调节所述座椅的过程中,根据所述眼部位置信息的变化,对所述抬头显示装置进行调节。
  8. 根据权利要求7所述的方法,其特征在于,所述方法还包括:
    在检测到所述用户调节所述座椅的过程中,控制所述抬头显示装置显示提示信息,所述提示信息用于提示所述用户所述抬头显示装置在进行调节。
  9. 根据权利要求7或8所述的方法,其特征在于,所述方法还包括:
    在检测到所述用户停止调节所述座椅的操作时,停止对所述抬头显示装置的调节。
  10. 根据权利要求7或8所述的方法,其特征在于,所述方法还包括:
    在检测到所述用户停止调节所述座椅的操作起的第二预设时长内,根据所述眼部位置信息的变化,对所述抬头显示装置进行调节。
  11. 根据权利要求10所述的方法,其特征在于,所述方法还包括:
    在所述第二预设时长结束时,停止对所述抬头显示装置的调节。
  12. 根据权利要求1至11中任一项所述的方法,其特征在于,所述根据所述图像,确定所述用户的眼部位置信息,包括:
    过滤所述图像中的干扰数据;
    根据过滤所述干扰数据后的所述图像,确定所述眼部位置信息。
  13. 根据权利要求1至12中任一项所述的方法,其特征在于,所述根据所述眼部位置信息,对所述抬头显示装置进行调节,包括:
    根据所述眼部位置信息,确定所述抬头显示装置呈现虚像的高度;
    根据所述高度,对所述抬头显示装置进行调节。
  14. 根据权利要求1至13中任一项所述的方法,其特征在于,所述运载工具为车辆,所述第一区域包括主驾区域或者副驾区域。
  15. 一种调节装置,其特征在于,所述装置用于调节运载工具座舱内的抬头显示装置,所述座舱内包括第一区域,所述抬头显示装置用于向所述第一区域中的用户显示信息,所述装置包括:
    检测单元,用于检测所述第一区域是否存在用户;
    获取单元,用于在所述检测单元检测到所述第一区域存在用户时,获取所述座舱内的摄像头采集的图像;
    确定单元,用于根据所述图像,确定所述用户的眼部位置信息;
    调节单元,用于在所述检测单元检测到所述用户的第一操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  16. 根据权利要求15所述的装置,其特征在于,所述调节单元,用于:
    在所述检测单元检测到所述用户踩刹车踏板的操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  17. 根据权利要求15所述的装置,其特征在于,所述调节单元,用于:
    在所述检测单元检测到所述用户将档位从驻车档调整至其他档位的操作时,根据所述眼部位置信息,对所述抬头显示装置进行调节。
  18. 根据权利要求15至17中任一项所述的装置,其特征在于,所述确定单元,用于:
    根据所述获取单元在第一预设时长内获取的所述图像,确定所述眼部位置信息,其中,所述第一预设时长的结束时刻为检测到所述第一操作的时刻。
  19. 根据权利要求15至18中任一项所述的装置,其特征在于,所述装置还包括:
    控制单元,用于在所述检测单元检测到所述用户的第一操作时,控制所述抬头显示装置显示提示信息,所述提示信息用于提示所述用户所述抬头显示装置在进行调节。
  20. 根据权利要求15至19中任一项所述的装置,其特征在于,所述获取单元,还用于:
    在完成对所述抬头显示装置的调节时,停止获取所述图像。
  21. 根据权利要求15所述的装置,其特征在于,所述获取单元,用于:
    在所述检测单元检测到所述第一区域存在所述用户且检测到所述用户调节所述第一区域中座椅的操作时,获取所述摄像头采集的所述图像;
    所述调节单元,用于:
    在所述检测单元检测到所述用户调节所述座椅的过程中,根据所述眼部位置信息的变化,对所述抬头显示装置进行调节。
  22. 根据权利要求21所述的装置,其特征在于,所述装置还包括:
    控制单元,用于在所述检测单元检测到所述用户调节所述座椅的过程中,控制所述抬头显示装置显示提示信息,所述提示信息用于提示所述用户所述抬头显示装置在进行调节。
  23. 根据权利要求21或22所述的装置,其特征在于,所述调节单元,还用于:
    在所述检测单元检测到所述用户停止调节所述座椅的操作时,停止对所述抬头显示装置的调节。
  24. 根据权利要求21或22所述的装置,其特征在于,所述调节单元,还用于:
    在所述检测单元检测到所述用户停止调节所述座椅的操作起的第二预设时长内,根据所述眼部位置信息的变化,对所述抬头显示装置进行调节。
  25. 根据权利要求24所述的装置,其特征在于,所述调节单元,还用于:
    在所述第二预设时长结束时,停止对所述抬头显示装置的调节。
  26. 根据权利要求15至25中任一项所述的装置,其特征在于,所述确定单元,用于:
    过滤所述图像中的干扰数据;
    根据过滤所述干扰数据后的所述图像,确定所述眼部位置信息。
  27. 根据权利要求15至26中任一项所述的装置,其特征在于,所述确定单元,还用于:
    根据所述眼部位置信息,确定所述抬头显示装置呈现虚像的高度;
    所述调节单元,用于:
    根据所述高度,对所述抬头显示装置进行调节。
  28. 根据权利要求15至27中任一项所述的装置,其特征在于,所述运载工具为车辆,所述第一区域包括主驾区域或者副驾区域。
  29. 一种装置,其特征在于,包括:
    存储器,用于存储计算机程序;
    处理器,用于执行所述存储器中存储的计算机程序,以使得所述装置执行如权利要求1至14中任一项所述的方法。
  30. 一种调节系统,其特征在于,包括抬头显示装置和计算平台,其中,所述计算平台包括如权利要求15至29中任一项所述的装置。
  31. 一种运载工具,其特征在于,包括如权利要求15至29中任一项的装置,或者,包括如权利要求30所述的系统。
  32. 根据权利要求31所述的运载工具,其特征在于,所述运载工具为车辆。
  33. 一种计算机可读存储介质,其特征在于,其上存储有计算机程序,所述计算机程序被计算机执行时,使得实现如权利要求1至14中任一项所述的方法。
  34. 一种芯片,其特征在于,所述芯片包括处理器与数据接口,所述处理器通过所述数据接口读取存储器上存储的指令,以执行如权利要求1至14中任一项所述的方法。
PCT/CN2023/117021 2022-09-05 2023-09-05 一种调节方法、装置和运载工具 WO2024051691A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211078290.2A CN115586648A (zh) 2022-09-05 2022-09-05 一种调节方法、装置和运载工具
CN202211078290.2 2022-09-05

Publications (1)

Publication Number Publication Date
WO2024051691A1 true WO2024051691A1 (zh) 2024-03-14

Family

ID=84771088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/117021 WO2024051691A1 (zh) 2022-09-05 2023-09-05 一种调节方法、装置和运载工具

Country Status (2)

Country Link
CN (1) CN115586648A (zh)
WO (1) WO2024051691A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115586648A (zh) * 2022-09-05 2023-01-10 华为技术有限公司 一种调节方法、装置和运载工具

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090237803A1 (en) * 2008-03-21 2009-09-24 Kabushiki Kaisha Toshiba Display device, display method and head-up display
CN108621947A (zh) * 2018-05-04 2018-10-09 福建省汽车工业集团云度新能源汽车股份有限公司 一种自适应调节的车载抬头显示系统
CN209534920U (zh) * 2019-01-07 2019-10-25 上汽通用汽车有限公司 Hud角度调整系统及车辆
CN114022565A (zh) * 2021-10-28 2022-02-08 虹软科技股份有限公司 用于显示设备的对齐方法及对齐装置、车载显示系统
CN115586648A (zh) * 2022-09-05 2023-01-10 华为技术有限公司 一种调节方法、装置和运载工具


Also Published As

Publication number Publication date
CN115586648A (zh) 2023-01-10

Similar Documents

Publication Publication Date Title
US10317900B2 (en) Controlling autonomous-vehicle functions and output based on occupant position and attention
CN107798895B (zh) 停止的车辆交通恢复警报
CN108202600B (zh) 用于车辆的驾驶辅助装置
US10565460B1 (en) Apparatuses, systems and methods for classifying digital images
JP5088669B2 (ja) 車両周辺監視装置
US9376121B2 (en) Method and display unit for displaying a driving condition of a vehicle and corresponding computer program product
WO2024051691A1 (zh) 一种调节方法、装置和运载工具
CN113228620B (zh) 一种图像的获取方法以及相关设备
WO2018004858A2 (en) Road condition heads up display
US20180253106A1 (en) Periphery monitoring device
JP7075189B2 (ja) 運転席と少なくとも一つの乗員席とを備える車両、及び、交代運転者及び又は少なくとも一人の同乗者に現在経験されている運転状況についての情報を提供する方法
US20150124097A1 (en) Optical reproduction and detection system in a vehicle
CN107010077B (zh) 用于将信息传输给机动车的驾驶员的方法和适应性的驾驶员协助系统
WO2024104045A1 (zh) 基于座舱区域获取操作指示的方法、显示方法及相关设备
WO2023216580A1 (zh) 调节显示设备的方法和装置
US20230001947A1 (en) Information processing apparatus, vehicle, and information processing method
US11828947B2 (en) Vehicle and control method thereof
KR20200082456A (ko) 차량 내에서의 건강정보 관리 방법 및 장치
CN111469849B (zh) 车辆控制装置和车辆
WO2024032149A1 (zh) 一种显示方法、控制装置和车辆
US20230150528A1 (en) Information display device
US20240083246A1 (en) Display control device, display control method, non-transitory computer-readable medium storing a program
US11975609B1 (en) Systems and methods for selective steering wheel transparency and control of the same
US11919532B2 (en) System matching driver intent with forward-reverse gear setting
US11897496B2 (en) Vehicle warning system

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23862385

Country of ref document: EP

Kind code of ref document: A1