WO2023240667A1 - Parking interaction method and apparatus, storage medium, electronic device and vehicle - Google Patents

Parking interaction method and apparatus, storage medium, electronic device and vehicle

Info

Publication number
WO2023240667A1
WO2023240667A1 PCT/CN2022/100663 CN2022100663W WO2023240667A1 WO 2023240667 A1 WO2023240667 A1 WO 2023240667A1 CN 2022100663 W CN2022100663 W CN 2022100663W WO 2023240667 A1 WO2023240667 A1 WO 2023240667A1
Authority
WO
WIPO (PCT)
Prior art keywords
parking
image
target
virtual
vehicle component
Prior art date
Application number
PCT/CN2022/100663
Other languages
English (en)
French (fr)
Inventor
张云洪
陈芳
脱悦
刘璐
Original Assignee
魔门塔(苏州)科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 魔门塔(苏州)科技有限公司 filed Critical 魔门塔(苏州)科技有限公司
Priority to US18/482,086 priority Critical patent/US20240034346A1/en
Publication of WO2023240667A1 publication Critical patent/WO2023240667A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06Automatic manoeuvring for parking
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/18Information management
    • B60K2360/191Highlight information
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K2360/00Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/20Optical features of instruments
    • B60K2360/31Virtual images
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60KARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/29Instruments characterised by the way in which information is handled, e.g. showing information on plural displays or prioritising information according to driving conditions
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08Interaction between the driver and the control system
    • B60W50/14Means for informing the driver, warning the driver or prompting a driver intervention
    • B60W2050/146Display means
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/408Radar; Laser, e.g. lidar
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/88Radar or analogous systems specially adapted for specific applications
    • G01S13/93Radar or analogous systems specially adapted for specific applications for anti-collision purposes
    • G01S13/931Radar or analogous systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G01S2013/9314Parking operations

Definitions

  • the present application relates to the field of automotive technology, specifically, to a parking interaction method, device, storage medium, electronic equipment and vehicle.
  • The automatic parking system is an important part of an advanced driver assistance system or an autonomous driving system. After the automatic parking system is turned on, it can use sensors to identify obstacles and vacant parking spaces (also called available parking spaces) around the vehicle, and generate a virtual parking image containing the vehicle, surrounding vehicles and available parking spaces, that is, a VR (Virtual Reality) image.
  • the driver can click on the VR image to select the target parking space he wants to park into, and park according to the target parking space.
  • However, the driver needs to compare the VR image marking the target parking space with the real physical world to determine where the target parking space shown in the VR image is located in the real physical world; only then can the driver park into the target parking space with confidence and safety.
  • This application provides a parking interaction method, device, storage medium, electronic device and vehicle, which can solve the following technical problem: during parking, the driver needs to compare the VR image marking the target parking space with the real physical world to determine where the target parking space in the VR image lies in the real physical world before parking into it safely and with confidence; because the VR image and the real physical world differ in visual presentation, this comparison consumes extra time and causes comparison anxiety for the driver, which reduces parking efficiency and the reliability of the automatic parking system.
  • The currently generated virtual parking image and its corresponding real parking image are output and displayed.
  • the parking space selection instruction includes the first position information of the target parking space to be parked in the virtual parking image
  • the target parking space is highlighted in the virtual parking image according to the first position information, and the target parking space is highlighted in the real parking image according to the second position information.
  • Before the position information at the target depth information is determined as the second position information, the method further includes:
  • The embodiment of the present application can perform distance (or depth) matching between the target radar point cloud corresponding to the first position information in the virtual parking image and the depth map corresponding to the real-scene parking image, so as to obtain the target depth information corresponding to the target radar point cloud, and only when the target depth information is determined to be the depth information of a candidate parking space in the real-scene parking image is the position information at the target depth information determined as the second position information of the target parking space in the real-scene parking image. This further ensures that an available parking space, rather than some other location such as an occupied parking space, a driveway or other open ground, exists at the determined second position information, thereby improving the accuracy of determining the second position information.
  • When the depth information of the candidate parking spaces does not include the target depth information, first reference object features are determined based on the radar point cloud around the target radar point cloud; second reference object features around each candidate parking space are identified based on a computer vision algorithm; and the location information of the target candidate parking space is determined as the second location information, the target candidate parking space being the candidate parking space corresponding to the second reference object feature that successfully matches the first reference object features.
  • When the second position information cannot be accurately obtained through distance (or depth) matching, it can be determined through reference object feature matching, thereby further improving the accuracy of determining the second position information.
  • In addition, the virtual state of the first vehicle component of the self-vehicle model in the initially generated virtual parking image can be adjusted to the current physical state of the first vehicle component, and the first vehicle component is highlighted when the state-adjusted virtual parking image is output and displayed. This not only keeps the state of the self-vehicle model consistent with the real-world vehicle, giving the driver an immersive experience, but also lets the driver combine the first parking prompt information with the highlighted first vehicle component and see intuitively and quickly which first vehicle component does not meet the automatic parking conditions and how to adjust it, which increases the driver's efficiency in adjusting the first vehicle component and thereby improves parking efficiency.
  • When a second vehicle component exists during the parking-in stage, the virtual state of the second vehicle component of the self-vehicle model in the virtual parking image is updated as the physical state of the second vehicle component changes, and the second vehicle component is highlighted when the updated virtual parking image is displayed; the second vehicle component is a vehicle component whose physical state changes visibly during the parking process; and/or,
  • When a third vehicle component exists during the parking-in stage, the third vehicle component is highlighted when the currently generated virtual parking image is displayed, and second parking prompt information is output; the third vehicle component is the vehicle component where a parking operation error has occurred, and the second parking prompt information is used to prompt the driver about the parking operation error.
  • the output displays the currently generated virtual parking image
  • the real parking image is located on an upper layer of the virtual parking image, and the display positions of the real parking image and the virtual parking image do not overlap.
  • a parking interaction device which includes:
  • a determination unit configured to determine the second position information of the target parking space in the real parking image based on the correspondence between the virtual parking image and the real parking image;
  • a highlighting unit is configured to highlight the target parking space in the virtual parking image according to the first position information, and to highlight the target parking space in the real parking image according to the second position information.
  • The embodiment of the present application can display a virtual parking image and its corresponding real-scene parking image at the same time, and after the driver selects the target parking space in the virtual parking image, the target parking space is highlighted in both the virtual parking image and the real-scene parking image. The driver can therefore intuitively grasp the correspondence of the target parking space between the virtual parking image and the real-scene parking image without spending effort comparing the virtual parking image with the real physical world, which not only improves parking efficiency but also improves the reliability of the automatic parking system.
  • the determination unit includes:
  • the acquisition module is used to obtain the target radar point cloud corresponding to the first position information, and obtain the depth map corresponding to the real parking image;
  • the search module is used to find the target depth information corresponding to the target radar point cloud from the depth map;
  • the first determination module is used to determine the position information at the target depth information as the second position information.
  • An identification module configured to identify available parking spaces in the real-scene parking image as candidate available parking spaces based on a computer vision algorithm before determining the position information at the target depth information as the second position information;
  • the first determination module is configured to determine the position information at the target depth information as the second position information when the depth information of the candidate parking space contains the target depth information.
  • the determining unit further includes:
  • the second determination module is used to determine the first reference object characteristics based on the radar point cloud around the target radar point cloud when the depth information of the candidate parking space does not contain the target depth information;
  • the identification module is also used to identify the second reference object characteristics around each candidate parking space based on a computer vision algorithm
  • The highlighting unit is also used to, after the target parking space is highlighted in the virtual parking image according to the first position information and in the real-scene parking image according to the second position information, highlight in the real-scene parking image, during the parking-in stage, the planned path from the vehicle to the second location information.
  • The output display unit is used to determine, after receiving an instruction to start the automatic parking system, whether a first vehicle component exists, the first vehicle component being a vehicle component whose current physical state does not satisfy the automatic parking conditions; when the first vehicle component exists, adjust the virtual state of the first vehicle component of the self-vehicle model in the initially generated virtual parking image to the current physical state of the first vehicle component; and output and display the state-adjusted virtual parking image with the first vehicle component highlighted;
  • the device also includes:
  • the first output prompt unit is used to output first parking prompt information.
  • the first parking prompt information is used to prompt the driver to adjust the current physical state of the first vehicle component to a state that satisfies automatic parking conditions.
  • The highlighting unit is also used to, after the target parking space is highlighted in the virtual parking image according to the first position information and in the real-scene parking image according to the second position information: when a second vehicle component exists during the parking-in stage, update the virtual state of the second vehicle component of the self-vehicle model in the virtual parking image as the physical state of the second vehicle component changes, and highlight the second vehicle component when the updated virtual parking image is displayed, the second vehicle component being a vehicle component whose physical state changes visibly during the parking process; and/or, when a third vehicle component exists during the parking-in stage, highlight the third vehicle component when the currently generated virtual parking image is displayed, the third vehicle component being the vehicle component where a parking operation error occurs;
  • the device also includes:
  • the second output prompt unit is used to output second parking prompt information when the third vehicle component is highlighted, and the second parking prompt information is used to prompt the driver of parking operation errors.
  • the output display unit includes:
  • The second output display module is used to, when a parking spot search instruction is received or the gear is shifted to R, output and display the currently generated real-scene parking image in addition to the currently generated virtual parking image.
  • the real parking image is located on an upper layer of the virtual parking image, and the display positions of the real parking image and the virtual parking image do not overlap.
  • embodiments of the present application provide a storage medium on which a computer program is stored.
  • When the program is executed by a processor, the method described in any possible implementation of the first aspect is implemented.
  • Embodiments of the present application provide an electronic device.
  • the electronic device includes:
  • one or more processors, and a storage device for storing one or more programs;
  • When the one or more programs are executed by the one or more processors, the electronic device is caused to implement the method described in any possible implementation manner of the first aspect.
  • embodiments of the present application provide a vehicle, which includes the device as described in any possible implementation of the second aspect, or the electronic device as described in the fourth aspect.
  • Figure 1 is a schematic flow chart of a parking interaction method provided by an embodiment of the present application.
  • Figure 2 is an image example of parking interaction provided by an embodiment of the present application.
  • Figure 3 is an image example of another parking interaction provided by an embodiment of the present application.
  • Figure 1 is a schematic flow chart of a parking interaction method. This method can be applied to electronic devices or computer devices. Specifically, it can be applied to vehicles or other electronic devices (such as mobile terminals) that interact with vehicles. The method can include the following steps:
  • The virtual parking image is a VR image generated by using radar to identify obstacles and available parking spaces around the vehicle; it includes the vehicle, occupied parking spaces and available parking spaces. The real parking image is the field-of-view image in front of (or behind) the vehicle captured by a camera.
  • radar includes lidar and millimeter wave radar, and cameras include monocular cameras and binocular cameras.
  • the parking space selection instruction includes the first position information of the target parking space to be parked in the virtual parking image.
  • S130 According to the correspondence between the virtual parking image and the real parking image, determine the second position information of the target parking space in the real parking image.
  • Whether in the virtual parking image or the real parking image, the depth information contained in the image corresponds to the object features. For example, for the same parking space, the distance from the vehicle to that parking space can be represented both by the depth information in the virtual parking image and by the depth information in the real parking image. Therefore, the second position information of the target parking space in the real parking image can be determined based on the correspondence between the virtual parking image and the real parking image.
  • Specifically, the second position information may be determined through distance (or depth) matching: first obtain the target radar point cloud corresponding to the first position information and the depth map corresponding to the real-scene parking image; then find the target depth information corresponding to the target radar point cloud from the depth map; finally, determine the position information at the target depth information as the second position information.
  • the depth map includes the depth information from the camera to the target object.
  • A monocular camera can obtain the depth information contained in the real-scene parking image through relative movement, while a binocular camera can obtain this depth information directly.
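  • To make the distance (or depth) matching concrete, a minimal Python sketch is given below. It only illustrates the idea described above: the radar points of the selected slot, the camera intrinsics `K` and the depth tolerance are assumed inputs, and the projection-based lookup is one possible way to compare the radar distance with the real-scene depth map, not the implementation claimed by the application.

```python
import numpy as np

def match_depth(target_points, depth_map, K, tol=0.3):
    """Look up, in the real-scene depth map, the depth that agrees with the
    radar-measured distance of the selected slot (hypothetical sketch).

    target_points: (N, 3) radar points of the selected slot, assumed to be
                   expressed in the camera frame.
    depth_map:     (H, W) per-pixel depth of the real-scene parking image, in meters.
    K:             3x3 camera intrinsic matrix.
    tol:           accepted depth difference in meters (assumed value).
    """
    # Target depth information: the radar distance of the selected slot.
    target_depth = float(np.median(target_points[:, 2]))

    # Project the radar points into the image to restrict the search region.
    uv = (K @ target_points.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    u = np.clip(uv[:, 0].astype(int), 0, depth_map.shape[1] - 1)
    v = np.clip(uv[:, 1].astype(int), 0, depth_map.shape[0] - 1)

    # Keep projected pixels whose camera depth matches the radar depth.
    mask = np.abs(depth_map[v, u] - target_depth) < tol
    if not mask.any():
        return None, target_depth  # caller may fall back to reference-object matching
    # Second position information: e.g. the centroid of the matched pixels.
    second_position = (int(np.mean(u[mask])), int(np.mean(v[mask])))
    return second_position, target_depth
```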
  • In one embodiment, before the position information at the target depth information is determined as the second position information, the available parking spaces in the real-scene parking image can first be identified as candidate parking spaces based on a computer vision algorithm; only when the depth information of a candidate parking space contains the target depth information is the position information at the target depth information determined as the second position information.
  • Identifying available parking spaces in the real-scene parking image based on a computer vision algorithm includes identifying them based on a parking space line recognition model. The parking space line recognition model can be obtained by training a neural network on a large number of historical real-scene parking images in which the parking space lines of available parking spaces have been annotated.
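  • The parking space line recognition model itself is not disclosed; the tiny fully convolutional network and training step below are only a sketch of the idea of training a neural network on annotated historical real-scene parking images, with the architecture, loss and tensor shapes chosen for illustration.

```python
import torch
import torch.nn as nn

class SlotLineNet(nn.Module):
    """Minimal stand-in for a parking space line recognition model: a fully
    convolutional network predicting a per-pixel slot-line mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # logits for "parking space line" vs background
        )

    def forward(self, x):          # x: (B, 3, H, W) real-scene parking images
        return self.net(x)         # (B, 1, H, W) line logits

def train_step(model, optimizer, images, line_masks):
    """One training step on images annotated with available-slot line masks."""
    criterion = nn.BCEWithLogitsLoss()
    optimizer.zero_grad()
    loss = criterion(model(images), line_masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```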
  • Embodiments of the present application can perform distance (or depth) matching between the target radar point cloud corresponding to the first position information in the virtual parking image and the depth map corresponding to the real parking image to obtain the target depth information corresponding to the target radar point cloud, and only when that target depth information is determined to be the depth information of a candidate parking space in the real-scene parking image is the position information at the target depth information determined as the second position information of the target parking space in the real-scene parking image. This further ensures that an available parking space, rather than some other location such as an occupied parking space, a driveway or other open ground, exists at the determined second position information, thereby improving the accuracy of determining the second position information.
  • When the depth information of the candidate parking spaces does not contain the target depth information, the embodiment of the present application can also determine first reference object features based on the radar point cloud around the target radar point cloud, identify second reference object features around each candidate parking space based on the computer vision algorithm, and determine the location information of the target candidate parking space as the second location information, where the target candidate parking space is the candidate parking space corresponding to the second reference object feature that successfully matches the first reference object features.
  • the "surrounding" may be a circular area with the target radar point cloud or the candidate parking space as the center and R as the radius, or it may be an area of other shapes, which is not limited in the embodiments of the present application.
  • Reference object features include features used to describe the size and shape of the reference object. For example, when the depth information of the candidate parking space does not include the target depth information, reference objects such as surrounding pillars can be used for auxiliary matching to further determine the second location information.
  • When the second position information cannot be accurately obtained through distance (or depth) matching, it can be determined through reference object feature matching, thereby further improving the accuracy of determining the second position information.
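  • A minimal sketch of this fallback is given below, under the assumption that reference object features are reduced to simple size/shape descriptors (for example the width, height and depth of a pillar); the descriptor format, the greedy matching and the acceptance threshold are illustrative choices, not part of the application.

```python
import numpy as np

def match_by_reference_objects(first_features, candidates, max_dist=0.5):
    """Pick the candidate slot whose surrounding reference objects (e.g. pillars)
    best match those observed around the target radar point cloud.

    first_features: list of (width, height, depth) descriptors derived from the
                    radar points around the target slot (assumed representation).
    candidates:     dict mapping a candidate slot position to the list of
                    (width, height, depth) descriptors detected around it in the
                    real-scene parking image.
    """
    def feature_distance(a, b):
        return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

    best_pos, best_score = None, float("inf")
    for pos, second_features in candidates.items():
        if not second_features:
            continue
        # Average distance of each radar-side reference object to its closest
        # image-side reference object around this candidate slot.
        score = np.mean([min(feature_distance(f, g) for g in second_features)
                         for f in first_features])
        if score < best_score:
            best_pos, best_score = pos, score
    # Accept the match only if the reference objects are actually similar.
    return best_pos if best_score < max_dist else None
```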
  • S140 Highlight the target parking space in the virtual parking image based on the first location information, and highlight the target parking space in the real parking image based on the second location information.
  • Highlighting includes brightening, changing line colors, adding markers and other display methods that let the driver see the target parking space intuitively and quickly.
  • As shown in Figure 2, the left side is the virtual parking image and the right side is the real parking image; in both, the target parking space is highlighted with bold lines and a parking sign "P" is added to it, so the driver can quickly see the correspondence between the two.
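  • As a sketch of how such highlighting could be drawn, the OpenCV routine below bolds the slot outline, adds a translucent fill and places a "P" marker; the corner coordinates, colors and opacity are placeholders, and the same routine could be applied to both the virtual and the real parking image.

```python
import cv2
import numpy as np

def highlight_slot(image, corners, color=(0, 255, 0)):
    """Highlight a parking slot given its four corner pixels (clockwise)."""
    pts = np.asarray(corners, dtype=np.int32).reshape(-1, 1, 2)

    # Semi-transparent fill so the slot stands out without hiding the ground.
    overlay = image.copy()
    cv2.fillPoly(overlay, [pts], color)
    image = cv2.addWeighted(overlay, 0.3, image, 0.7, 0)

    # Bold outline plus a "P" marker at the slot centre, as in Figure 2.
    cv2.polylines(image, [pts], isClosed=True, color=color, thickness=4)
    cx, cy = pts.reshape(-1, 2).mean(axis=0).astype(int)
    cv2.putText(image, "P", (int(cx), int(cy)), cv2.FONT_HERSHEY_SIMPLEX,
                1.2, (255, 255, 255), 3, cv2.LINE_AA)
    return image
```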
  • The parking interaction method provided by the embodiment of the present application can display a virtual parking image and its corresponding real-scene parking image at the same time, and after the driver selects the target parking space in the virtual parking image, the target parking space is highlighted in both the virtual parking image and the real-scene parking image. The driver can therefore intuitively grasp the correspondence of the target parking space between the virtual parking image and the real-scene parking image without spending effort comparing the virtual parking image with the real physical world, which not only improves parking efficiency but also improves the reliability of the automatic parking system.
  • After the target parking space is highlighted in the virtual parking image according to the first position information and in the real parking image according to the second position information, embodiments of the present application can, during the parking-in stage, highlight in the real parking image the planned path from the vehicle to the second position information. The display method for highlighting the planned path may be the same as the method for highlighting the target parking space, or may be different. In order not to affect the driver's view, a white, semi-transparent layer can be used to cover the road surface in the real parking image.
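  • The translucent path layer could be rendered with a simple alpha blend, as sketched below; the assumption that the planner supplies the path region as a pixel polygon, and the opacity value, are illustrative.

```python
import cv2
import numpy as np

def overlay_planned_path(image, path_polygon, alpha=0.35):
    """Cover the planned path in the real parking image with a translucent white layer.

    path_polygon: (N, 2) pixel outline of the road area from the vehicle to the
                  second position information (assumed to come from the planner).
    alpha:        opacity of the white layer; kept low so the road stays visible.
    """
    layer = image.copy()
    pts = np.asarray(path_polygon, dtype=np.int32).reshape(-1, 1, 2)
    cv2.fillPoly(layer, [pts], (255, 255, 255))  # white path region
    return cv2.addWeighted(layer, alpha, image, 1 - alpha, 0)
```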
  • When the driver turns on the automatic parking system, the subsequent parking operation may fail or be unsafe because, for example, a door or the trunk is not closed. To address this, embodiments of the present application can, after receiving an instruction to turn on the automatic parking system, determine whether a first vehicle component exists, the first vehicle component being a vehicle component whose current physical state does not meet the automatic parking conditions. If a first vehicle component exists, the virtual state of the first vehicle component of the self-vehicle model in the initially generated virtual parking image is adjusted to the current physical state of the first vehicle component; the state-adjusted virtual parking image is output and displayed with the first vehicle component highlighted; and first parking prompt information is output, which is used to prompt the driver to adjust the current physical state of the first vehicle component to a state that satisfies the automatic parking conditions.
  • The first vehicle component includes a door, the trunk, etc., and the display method for highlighting the first vehicle component may be the same as or different from that used for highlighting the target parking space.
  • the first parking prompt information can be displayed in the blank position of the virtual parking image in the form of text, or can be output in the form of voice, text message, etc.
  • For example, as shown in Figure 3, the front left door of the self-vehicle model in the virtual parking image is open and highlighted, and the virtual parking image also displays the first parking prompt message "Please close the door", from which the driver can intuitively and quickly learn that the front left door must be closed before automatic parking can be performed.
  • In addition to providing the driver with the first parking prompt information when the automatic parking conditions are not met, the parking interaction method provided by the embodiment of the present application can adjust the virtual state of the first vehicle component of the self-vehicle model in the initially generated virtual parking image to the current physical state of the first vehicle component and highlight the first vehicle component when the state-adjusted virtual parking image is output. This not only keeps the state of the self-vehicle model consistent with the real-world vehicle, giving the driver an immersive experience, but also lets the driver combine the first parking prompt information with the highlighted first vehicle component to see intuitively and quickly which first vehicle component does not meet the automatic parking conditions and how to adjust it, thereby improving the driver's efficiency in adjusting the first vehicle component and, in turn, the parking efficiency.
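  • A minimal sketch of this pre-parking check follows. The component list, the closed/open state field and the prompt wording are assumptions for illustration; only the overall flow (find the components that violate the automatic parking conditions, mirror and highlight them on the self-vehicle model, and emit the first parking prompt information) follows the text.

```python
from dataclasses import dataclass

@dataclass
class VehicleComponent:
    name: str          # e.g. "front left door", "trunk"
    is_closed: bool    # current physical state reported by the vehicle

def check_automatic_parking_conditions(components):
    """Return the first vehicle components (open doors/trunk) together with
    first parking prompt information, as in the Figure 3 example."""
    first_components = [c for c in components if not c.is_closed]
    prompts = [f"Please close the {c.name}" for c in first_components]
    return first_components, prompts

# Usage sketch: the returned components would be mirrored onto the self-vehicle
# model (e.g. the door drawn open) and highlighted, while the prompts are shown
# as text in a blank area of the virtual parking image or read out by voice.
components = [VehicleComponent("front left door", is_closed=False),
              VehicleComponent("trunk", is_closed=True)]
open_parts, prompts = check_automatic_parking_conditions(components)
```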
  • After the target parking space is highlighted in the virtual parking image based on the first location information and in the real parking image based on the second location information, the embodiment of the present application also provides the following:
  • When a second vehicle component exists during the parking-in stage, the virtual state of the second vehicle component of the self-vehicle model in the virtual parking image is updated as the physical state of the second vehicle component changes, and the second vehicle component is highlighted when the updated virtual parking image is displayed; the second vehicle component is a vehicle component whose physical state changes visibly during the parking process; and/or,
  • When a third vehicle component exists during the parking-in stage, the third vehicle component is highlighted when the currently generated virtual parking image is displayed, and second parking prompt information is output; the third vehicle component is the vehicle component where a parking operation error has occurred, and the second parking prompt information is used to prompt the driver about the parking operation error.
  • the second vehicle component includes a steering wheel
  • the third vehicle component includes a brake pedal, an accelerator pedal, etc.
  • the second parking prompt information can be displayed in the blank position of the virtual parking image in the form of text, or can be output in the form of voice, text message, etc.
  • Unlike the target parking space, the second vehicle component and the third vehicle component may be components inside the vehicle. Therefore, when the second or third vehicle component is highlighted, the displayed range of the self-vehicle model may not be a panoramic view but a partial view containing the second or third vehicle component, while the display format may be the same as that used for highlighting the target parking space, for example brightening. In other words, when the second or third vehicle component is an exterior component of the vehicle, the self-vehicle model is shown in a panoramic view; when it is an interior component, the self-vehicle model may be shown as a partial view containing the second or third vehicle component.
  • For example, as shown in Figure 4, during the parking-in stage the steering wheel in the self-vehicle model rotates as the steering wheel of the physical vehicle rotates; the steering wheel can also be highlighted, and a partial view is displayed when the self-vehicle model is shown.
  • As another example, when the driver is required to press the brake pedal or accelerator pedal during the parking-in stage and operates it incorrectly, the brake pedal or accelerator pedal can be highlighted in the self-vehicle model of the virtual parking image and a text reminder of the parking operation error can be displayed, so the driver can quickly learn where the operation error is and correct it in time.
  • During the parking-in stage, the parking interaction method provided by the embodiment of the present application keeps the virtual state of the second vehicle component of the self-vehicle model consistent with the physical state of the real-world vehicle and highlights the second vehicle component, so the driver can intuitively observe the state changes of the vehicle whether or not the driver is inside it, providing an immersive parking interaction experience. When the driver makes a parking operation error during the parking-in stage, highlighting the third vehicle component where the error occurred on the self-vehicle model and providing the second parking prompt information lets the driver combine the prompt information with the highlighted third vehicle component, quickly locate the incorrect parking operation and correct it in time, thereby improving parking efficiency.
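  • A sketch of how the second and third vehicle components could be mirrored onto the self-vehicle model during the parking-in stage is given below. The vehicle-state dictionary, the render callback and the prompt string are hypothetical; the sketch only illustrates the update, highlight and prompt flow described above.

```python
def update_self_vehicle_model(model, vehicle_state, render):
    """Mirror visible physical changes onto the self-vehicle model each frame.

    model:         dict of virtual component states shown in the virtual parking image.
    vehicle_state: dict of physical states read from the vehicle, e.g.
                   {"steering_wheel_angle": 35.0, "wrong_pedal_pressed": "accelerator pedal"}.
    render:        callback that redraws the model and highlights the given components.
    """
    highlights = []

    # Second vehicle component: keep the virtual steering wheel in step with the
    # physical one and highlight it while its state is changing.
    angle = vehicle_state.get("steering_wheel_angle", 0.0)
    if model.get("steering_wheel_angle") != angle:
        model["steering_wheel_angle"] = angle
        highlights.append("steering_wheel")

    # Third vehicle component: highlight the pedal involved in a parking
    # operation error and emit the second parking prompt information.
    wrong_pedal = vehicle_state.get("wrong_pedal_pressed")
    prompt = None
    if wrong_pedal:
        highlights.append(wrong_pedal)
        prompt = f"Parking operation error: please release the {wrong_pedal}"

    # Show a partial view of the cabin whenever an interior component is highlighted.
    render(model, highlights, partial_view=bool(highlights), prompt=prompt)
```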
  • the output display unit 20 is used to output and display the currently generated virtual parking image and its corresponding real parking image
  • the receiving unit 22 is configured to receive a parking space selection instruction for the virtual parking image, where the parking space selection instruction includes the first position information of the target parking space to be parked in the virtual parking image;
  • the determination unit 24 is configured to determine the second position information of the target parking space in the real parking image according to the corresponding relationship between the virtual parking image and the real parking image;
  • the determining unit 24 includes:
  • the search module is used to find the target depth information corresponding to the target radar point cloud from the depth map;
  • the first determination module is used to determine the position information at the target depth information as the second position information.
  • the determining unit 24 further includes:
  • the first determination module is configured to determine the position information at the target depth information as the second position information when the depth information of the candidate parking space contains the target depth information.
  • the identification module is also used to identify the second reference object characteristics around each candidate parking space based on a computer vision algorithm
  • the output display unit 20 includes:
  • the device also includes:
  • the device also includes:
  • the second output prompt unit is used to output second parking prompt information when the third vehicle component is highlighted, and the second parking prompt information is used to prompt the driver of parking operation errors.
  • the output display unit 20 includes:
  • the first output display module is used to output and display the currently generated virtual parking image after starting the automatic parking system
  • The second output display module is used to, when a parking spot search instruction is received or the gear is shifted to R, output and display the currently generated real-scene parking image in addition to the currently generated virtual parking image.
  • the real-scene parking image is located on an upper layer of the virtual parking image, and the display positions of the real-scene parking image and the virtual parking image do not overlap.
  • another embodiment of the present application provides a storage medium on which executable instructions are stored. When executed by a processor, the instructions cause the processor to implement the method described in any of the above embodiments.
  • an electronic device or computer device including:
  • a storage device for storing one or more programs
  • When the one or more programs are executed by the one or more processors, the electronic device or computer device is caused to implement the method described in any of the above embodiments.
  • another embodiment of the present application provides a vehicle, which includes the device as described in any of the above embodiments, or includes the electronic device as described above.
  • The vehicle can also include: a GPS (Global Positioning System) positioning device, a T-Box (Telematics Box, telematics processor), and a V2X (Vehicle-to-Everything, Internet of Vehicles) module.
  • the GPS positioning device is used to obtain the current geographical location of the vehicle
  • the T-Box can be used as a gateway to communicate with external devices
  • the V2X module is used to communicate with other vehicles, roadside equipment, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Electromagnetism (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Physics & Mathematics (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Traffic Control Systems (AREA)
  • Navigation (AREA)

Abstract

This application discloses a parking interaction method, apparatus, storage medium, electronic device and vehicle, and relates to the field of automotive technology. The method includes: outputting and displaying the currently generated virtual parking image and its corresponding real-scene parking image; receiving a parking space selection instruction for the virtual parking image, the parking space selection instruction including first position information of the target parking space to be parked into in the virtual parking image; determining, according to the correspondence between the virtual parking image and the real-scene parking image, second position information of the target parking space in the real-scene parking image; and highlighting the target parking space in the virtual parking image according to the first position information and highlighting the target parking space in the real-scene parking image according to the second position information. This application enables the driver to intuitively grasp the correspondence of the target parking space between the virtual parking image and the real-scene parking image without spending effort on comparison, thereby improving parking efficiency and the reliability of the automatic parking system.

Description

泊车交互方法、装置、存储介质、电子设备及车辆 技术领域
本申请涉及汽车技术领域,具体而言,涉及一种泊车交互方法、装置、存储介质、电子设备及车辆。
背景技术
自动泊车系统是高级驾驶辅助系统或者自动驾驶系统的重要组成部分。自动泊车系统开启后,可以利用传感器识别自车周围障碍物及空余车位(也可称为可泊车位),并生成包含自车、周围车辆及空余车位的虚拟泊车图像,即VR(Virtual Reality,虚拟现实)图像。驾驶员可以从VR图像中点击选择想要泊入的目标可泊车位,并根据目标可泊车位进行泊车。然而无论是自动泊入还是手动泊入,驾驶员均需要结合标识目标可泊车位的VR图像和现实物理世界做比对,以确定VR图像中的目标可泊车位在显示物理世界中的位置,才可放心且安全地泊入目标可泊车位。由此,对于手动泊入而言,由于VR图像和现实物理世界在视觉呈现上存在差异,所以将两者做比对时可能会消耗更多时间,从而降低泊入效率;对于自动泊入而言,这种视觉呈现上存在的差异,可能会给驾驶员带来比对焦虑,从而缺乏安全感,迫使驾驶员由自动泊入改为手动泊入,使得自动泊车系统的可靠性降低。
发明内容
本申请提供了一种泊车交互方法、装置、存储介质、电子设备及车辆,能够解决如下技术问题:在泊车过程中驾驶员需要结合标识目标可泊车位的VR图像和现实物理世界做比对,以确定VR图像中的目标可泊车位在显示物理世界中的位置,才可放心且安全地泊入目标可泊车位,但是由于VR图像和现实物理世界在视觉呈现上存在差异,所以将两者做比对会消耗更多时间并给驾驶员带来比对焦虑,从而导致泊入效率降低以及自动泊车系统的可靠性降低等问题。
具体的技术方案如下:
第一方面,本申请实施例提供了一种泊车交互方法,所述方法包括:
输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像;
接收针对虚拟泊车图像的泊车车位选择指令,泊车车位选择指令包括待泊入的目标 可泊车位在虚拟泊车图像中的第一位置信息;
根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息;
根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位。
通过上述方案可知,本申请实施例能够同时显示虚拟泊车图像及其对应的实景泊车图像,并当驾驶员在虚拟泊车图像中选择目标可泊车位后,还可以在虚拟泊车图像和实景泊车图像中均突出显示出该目标可泊车位,从而可以使得驾驶员直观获取到目标可泊车位在虚拟泊车图像和实景泊车图像中对应关系,而无需再耗费精力将虚拟泊车图像与现实物理世界做比对,从而不仅提高了泊入效率,还提高了自动泊车系统的可靠性。
在第一方面的第一种可能的实现方式中,根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息,包括:
获取第一位置信息对应的目标雷达点云,以及获取实景泊车图像对应的深度图;
从深度图中查找目标雷达点云对应的目标深度信息;
将目标深度信息处的位置信息确定为第二位置信息。
在第一方面的第二种可能的实现方式中,在将目标深度信息处的位置信息确定为第二位置信息之前,方法还包括:
基于计算机视觉算法识别实景泊车图像中的可泊车位作为候选可泊车位;
将目标深度信息处的位置信息确定为第二位置信息包括:
在候选可泊车位的深度信息中包含目标深度信息的情况下,将目标深度信息处的位置信息确定为第二位置信息。
通过上述方案可知,本申请实施例可以通过虚拟泊车图像中第一位置信息对应的目标雷达点云和实景泊车图像对应的深度图实现距离(或者深度)匹配,获得目标雷达点云对应的目标深度信息,并在确定该目标深度信息为实景泊车图像中某一候选可泊车位的深度信息时,才将该目标深度信息处的位置信息确定为实景泊车图像中目标可泊车位的第二位置信息,从而可以进一步确保所确定的第二位置信息处存在可泊车位,而非其他位置,例如,已停车的车位、车道、其他空地等,进而提高了确定第二位置信息的准确率。
在第一方面的第三种可能的实现方式中,方法还包括:
在候选可泊车位的深度信息中不包含目标深度信息的情况下,根据目标雷达点云周围的雷达点云确定第一参考物特征;
基于计算机视觉算法识别各个候选可泊车位周围的第二参考物特征;
将目标候选可泊车位的位置信息确定为第二位置信息,目标候选可泊车位为与第一参考物特征匹配成功的第二参考物特征对应的候选可泊车位。
通过上述方案可知,本申请实施例在根据距离(或者深度)匹配无法准确获得第二位置信息的情况下,可以通过参考物特征匹配确定第二位置信息,从而进一步提高了确定第二位置信息的准确率。
在第一方面的第四种可能的实现方式中,在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,方法还包括:
在泊入阶段,在实景泊车图像中突出显示自车到第二位置信息的规划路径。
在第一方面的第五种可能的实现方式中,输出显示当前生成的虚拟泊车图像包括:
在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,第一车辆部件为当前实体状态不满足自动泊车条件的车辆部件;
在存在第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态;
输出显示状态调整后的虚拟泊车图像,并突出显示第一车辆部件;
方法还包括:
输出第一泊车提示信息,第一泊车提示信息用于提示驾驶员将第一车辆部件的当前实体状态调整为满足自动泊车条件时的状态。
通过上述方案可知,当不满足自动泊车条件时,除了向驾驶员提供第一泊车提示信息外,还可以将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态,并在输出显示状态调整后的虚拟泊车图像时,突出显示第一车辆部件,从而不仅可以使得自车模型与现实世界的车辆的状态保持一致,给予驾驶员沉浸式体验,还可以使得驾驶员将第一泊车提示信息和突出显示的第一车辆部件相结合,直观、快速查看到哪个第一车辆部件不满足自动泊车条件,需要如何调整,进而提高了驾驶员调整第一车辆部件的效率,由此提高了泊车效率。
在第一方面的第六种可能的实现方式中,在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,方法还包括:
在泊入阶段存在第二车辆部件的情况下,随着第二车辆部件实体状态的变化更新虚拟泊车图像中自车模型的第二车辆部件的虚拟状态,并在显示更新后的虚拟泊车图像时,突出显示第二车辆部件,第二车辆部件为泊车过程中实体状态发生可视化动态变化的车辆部件;和/或,
在泊入阶段存在第三车辆部件的情况下,在显示当前生成的虚拟泊车图像时,突出显示第三车辆部件,并输出第二泊车提示信息,第三车辆部件为发生泊车操作错误的车辆部件,第二泊车提示信息用于提示驾驶员泊车操作错误。
通过上述方案可知,在泊入阶段,通过使得自车模型第二车辆部件的虚拟状态与现实世界车辆的实体状态保持一致,并突出显示第二车辆部件,可以使得驾驶员无论是否在车内,均可直观查看自车的状态变化,从而向驾驶员提供了一种沉浸式的泊车交互体验。而当驾驶员在泊入阶段发生泊车操作错误时,通过在自车模型上突出显示发生泊车操作错误的第三车辆部件,并提供第二泊车提示信息,可以使得驾驶员结合提示信息和突出显示的第三车辆部件,直观快速定位出错误的泊车操作,并进行及时更正,从而提高了泊入效率。
在第一方面的第七种可能的实现方式中,输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像,包括:
在启动自动泊车系统后,输出显示当前生成的虚拟泊车图像;
当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的虚拟泊车图像的基础上,输出显示当前生成的实景泊车图像。
在第一方面的第八种可能的实现方式中,实景泊车图像位于虚拟泊车图像的上层,且实景泊车图像与虚拟泊车图像的显示位置不重叠。
第二方面,本申请实施例提供了一种泊车交互装置,装置包括:
输出显示单元,用于输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像;
接收单元,用于接收针对虚拟泊车图像的泊车车位选择指令,泊车车位选择指令包括待泊入的目标可泊车位在虚拟泊车图像中的第一位置信息;
确定单元,用于根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息;
突出显示单元,用于根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位。
通过上述方案可知,本申请实施例能够同时显示虚拟泊车图像及其对应的实景泊车图像,并当驾驶员在虚拟泊车图像中选择目标可泊车位后,还可以在虚拟泊车图像和实景泊车图像中均突出显示出该目标可泊车位,从而可以使得驾驶员直观获取到目标可泊车位在虚拟泊车图像和实景泊车图像中对应关系,而无需再耗费精力将虚拟泊车图像与现实物理世界做比对,从而不仅提高了泊入效率,还提高了自动泊车系统的可靠性。
在第二方面的第一种可能的实现方式中,确定单元,包括:
获取模块,用于获取第一位置信息对应的目标雷达点云,以及获取实景泊车图像对 应的深度图;
查找模块,用于从深度图中查找目标雷达点云对应的目标深度信息;
第一确定模块,用于将目标深度信息处的位置信息确定为第二位置信息。
在第二方面的第二种可能的实现方式中,确定单元还包括:
识别模块,用于在将目标深度信息处的位置信息确定为第二位置信息之前,基于计算机视觉算法识别实景泊车图像中的可泊车位作为候选可泊车位;
第一确定模块,用于在候选可泊车位的深度信息中包含目标深度信息的情况下,将目标深度信息处的位置信息确定为第二位置信息。
在第二方面的第三种可能的实现方式中,确定单元还包括:
第二确定模块,用于在候选可泊车位的深度信息中不包含目标深度信息的情况下,根据目标雷达点云周围的雷达点云确定第一参考物特征;
识别模块,还用于基于计算机视觉算法识别各个候选可泊车位周围的第二参考物特征;
第一确定模块,还用于将目标候选可泊车位的位置信息确定为第二位置信息,目标候选可泊车位为与第一参考物特征匹配成功的第二参考物特征对应的候选可泊车位。
在第二方面的第四种可能的实现方式中,突出显示单元,还用于在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,在泊入阶段,在实景泊车图像中突出显示自车到第二位置信息的规划路径。
在第二方面的第五种可能的实现方式中,输出显示单元,用于在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,第一车辆部件为当前实体状态不满足自动泊车条件的车辆部件;在存在第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态;输出显示状态调整后的虚拟泊车图像,并突出显示第一车辆部件;
装置还包括:
第一输出提示单元,用于输出第一泊车提示信息,第一泊车提示信息用于提示驾驶员将第一车辆部件的当前实体状态调整为满足自动泊车条件时的状态。
在第二方面的第六种可能的实现方式中,突出显示单元,还用于在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,在泊入阶段存在第二车辆部件的情况下,随着第二车辆部件实体状态的变化更新虚拟泊车图像中自车模型的第二车辆部件的虚拟状态,并在显示更新后的虚拟泊车图像时,突出显示第二车辆部件,第二车辆部件为泊车过程中实体状 态发生可视化动态变化的车辆部件;和/或,在泊入阶段存在第三车辆部件的情况下,在显示当前生成的虚拟泊车图像时,突出显示第三车辆部件,第三车辆部件为发生泊车操作错误的车辆部件;
装置还包括:
第二输出提示单元,用于在突出显示第三车辆部件时,输出第二泊车提示信息,第二泊车提示信息用于提示驾驶员泊车操作错误。
在第二方面的第七种可能的实现方式中,输出显示单元,包括:
第一输出显示模块,用于在启动自动泊车系统后,输出显示当前生成的虚拟泊车图像;
第二输出显示模块,用于当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的虚拟泊车图像的基础上,输出显示当前生成的实景泊车图像。
在第二方面的第八种可能的实现方式中,实景泊车图像位于虚拟泊车图像的上层,且实景泊车图像与虚拟泊车图像的显示位置不重叠。
第三方面,本申请实施例提供了一种存储介质,其上存储有计算机程序,该程序被处理器执行时实现如第一方面任一可能的实现方式所述的方法。
第四方面,本申请实施例提供了一种电子设备,电子设备包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序,
当一个或多个程序被一个或多个处理器执行,使得电子设备实现如第一方面任一可能的实现方式所述的方法。
第五方面,本申请实施例提供了一种车辆,车辆包含如第二方面任一可能的实现方式所述的装置,或者包含如第四方面所述的电子设备。
附图说明
为了更清楚地说明本申请实施例或现有技术中的技术方案,下面将对实施例或现有技术描述中所需要使用的附图作简单介绍。显而易见地,下面描述中的附图仅仅是本申请的一些实施例。对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1为本申请实施例提供的一种泊车交互方法的流程示意图;
图2为本申请实施例提供的一种泊车交互的图像示例图;
图3为本申请实施例提供的另一种泊车交互的图像示例图;
图4为本申请实施例提供的一种虚拟泊车图像中自车模型的示例图;
图5为本申请实施例提供的一种泊车交互装置的组成框图;
图6为本申请实施例提供的一种车辆的组成框图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整的描述。显然,所描述的实施例仅仅是本申请的一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有付出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
需要说明的是,在不冲突的情况下,本申请中的实施例及实施例中的特征可以相互组合。本申请实施例及附图中的术语“包括”和“具有”以及它们的任何变形,意图在于覆盖不排他的包含。例如包含的一系列步骤或单元的过程、方法、系统、产品或设备没有限定于已列出的步骤或单元,而是可选地还包括没有列出的步骤或单元,或可选地还包括对于这些过程、方法、产品或设备固有的其他步骤或单元。
图1为一种泊车交互方法的流程示意图,该方法可以应用于电子设备或计算机设备,具体可以应用于车辆或者与车辆交互的其他电子设备(如移动终端),该方法可以包括如下步骤:
S110:输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像。
虚拟泊车图像是利用雷达识别自车周围障碍物及可泊车位,并生成包含自车、已占车位及可泊车位的VR图像,实景泊车图像是摄像头拍摄的自车前方(或者后方)视野图像。其中,雷达包括激光雷达、毫米波雷达,摄像头包括单目摄像头、双目摄像头。
当本申请实施例应用于车辆时,可以由车辆生成虚拟泊车图像及其对应的实景泊车图像,并输出显示在车载中控屏,也可以基于HUD(Head Up Display,抬头显示)输出显示在前挡风玻璃上。当本申请实施应用于与车辆交互的其他电子设备(如移动终端),可以由车辆生成虚拟泊车图像及其对应的实景泊车图像之后,将虚拟泊车图像和实景泊车图像发送给该其他电子设备,由该其他电子设备输出显示虚拟泊车图像和实景泊车图像。
虚拟泊车图像和实景泊车图像可以上下排列,也可以左右排列,实景泊车图像可以位于虚拟泊车图像的上层,也可以位于虚拟泊车图像的下层,还可以与虚拟泊车图像同层,主要保证实景泊车图像与虚拟泊车图像的显示位置不重叠即可。其中,上层、下层和同层是指图层。
在一种实施方式中,由于在开启自动泊车系统后,在寻库或者倒车时,驾驶员将虚拟泊车图像与现实世界进行比对的难度和需求会加大,所以为了在提高驾驶员泊车体验、 提高泊入效率的基础上,节省资源,可以启动自动泊车系统后,输出显示当前生成的虚拟泊车图像;当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的虚拟泊车图像的基础上,输出显示当前生成的实景泊车图像。在这种情况下,可以将实景泊车图像直接输出在虚拟泊车图像的上层,提高双图像的输出效率。
S120:接收针对虚拟泊车图像的泊车车位选择指令,泊车车位选择指令包括待泊入的目标可泊车位在虚拟泊车图像中的第一位置信息。
在实际应用中,可以将虚拟泊车图像中每一个可泊车位设置成可点击的控件供用户点击选择,也可以在每个可泊车位的上层设置一个按钮供用户点击选择,还可以为每个可泊车位进行编号,供用户使用显示装置自带的输入按键输入编号进行选择。无论是哪种人机交互方式,控件或者编号都可以与对应可泊车位的位置信息进行绑定。当车辆接收到用户的泊车车位选择指令后,便可以获知待泊入的目标可泊车位在虚拟泊车图像中的第一位置信息。
S130:根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息。
无论是虚拟泊车图像还是实景泊车图像,图像中包含的深度信息和物体特征是相对应的,例如,针对同一个车位,自车到该车位的距离分别可以用虚拟泊车图像中的深度信息和实景泊车图像中的深度信息表示。因此,可以根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息。
具体的,可以通过距离(或者深度)匹配的方法确定第二位置信息。先获取第一位置信息对应的目标雷达点云,以及获取实景泊车图像对应的深度图;再从深度图中查找目标雷达点云对应的目标深度信息;最后将目标深度信息处的位置信息确定为第二位置信息。其中,深度图包括摄像头到目标对象的深度信息,单目摄像头可通过相对距离移动获取到实景泊车图像包含的深度信息,双目摄像头可直接获取到实景泊车图像包含的深度信息。
在一种实施方式中,在将目标深度信息处的位置信息确定为第二位置信息之前,可以先基于计算机视觉算法识别实景泊车图像中的可泊车位作为候选可泊车位;在候选可泊车位的深度信息中包含目标深度信息的情况下,再将目标深度信息处的位置信息确定为第二位置信息。
基于计算机视觉算法识别实景泊车图像中的可泊车位包括:基于车位线识别模型识别实景泊车图像中的可泊车位,车位线识别模型可以基于神经网络对标记出可泊车位车位线的大量历史实景泊车图像进行训练而得。
本申请实施例可以通过虚拟泊车图像中第一位置信息对应的目标雷达点云和实景泊 车图像对应的深度图实现距离(或者深度)匹配,获得目标雷达点云对应的目标深度信息,并在确定该目标深度信息为实景泊车图像中某一候选可泊车位的深度信息时,才将该目标深度信息处的位置信息确定为实景泊车图像中目标可泊车位的第二位置信息,从而可以进一步确保所确定的第二位置信息处存在可泊车位,而非其他位置,例如,已停车的车位、车道、其他空地等,进而提高了确定第二位置信息的准确率。
在一种实施方式中,在候选可泊车位的深度信息中不包含目标深度信息的情况下,本申请实施例还可以根据目标雷达点云周围的雷达点云确定第一参考物特征;基于计算机视觉算法识别各个候选可泊车位周围的第二参考物特征;将目标候选可泊车位的位置信息确定为第二位置信息,目标候选可泊车位为与第一参考物特征匹配成功的第二参考物特征对应的候选可泊车位。其中,“周围”可以为以目标雷达点云或者候选可泊车位为中心、以R为半径的圆形区域,也可以为其他形状的区域,本申请实施例对此不作限定。参考物特征包括用于描述参考物大小、形状的特征。例如,当候选可泊车位的深度信息中不包含目标深度信息时,可以通过周围的柱子等参考物进行辅助匹配,进一步确定第二位置信息。
本申请实施例在根据距离(或者深度)匹配无法准确获得第二位置信息的情况下,可以通过参考物特征匹配确定第二位置信息,从而进一步提高了确定第二位置信息的准确率。
S140:根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位。
其中,突出显示包括高亮、改变线条颜色、增加标记等能够让驾驶员直观、快速看到目标可泊车位的显示方式。如图2所示,左侧为虚拟泊车图像,右侧为实景泊车图像,并分别高亮加粗显示了目标可泊车位,且在目标可泊车位增加了停车标识“P”,从而使得驾驶员可以快速查看到两者的对应关系。
本申请实施例提供的泊车交互方法,能够同时显示虚拟泊车图像及其对应的实景泊车图像,并当驾驶员在虚拟泊车图像中选择目标可泊车位后,还可以在虚拟泊车图像和实景泊车图像中均突出显示出该目标可泊车位,从而可以使得驾驶员直观获取到目标可泊车位在虚拟泊车图像和实景泊车图像中对应关系,而无需再耗费精力将虚拟泊车图像与现实物理世界做比对,从而不仅提高了泊入效率,还提高了自动泊车系统的可靠性。
在一种实施方式中,为了让驾驶员提前并直观获知自车到目标可泊车位的现实行驶路径,从而进一步增强驾驶员泊车的沉浸式体验,本申请实施例可以在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,在泊入阶段,在实景泊车图像中突出显示自车到第二位置 信息的规划路径。其中,突出显示该规划路径的显示方法可以与突出显示目标可泊车位相同,也可以不同。为了不影响驾驶员的观感,可以以一种白色偏透明的图层覆盖在实景泊车图像中的路面上。
在一种实施方式中,当驾驶员开启自动泊车系统时,可能因为车门没关、后备箱没关等原因,导致后续无法成功、安全地进行泊车操作。为了解决该技术问题,本申请实施例可以在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,第一车辆部件为当前实体状态不满足自动泊车条件的车辆部件;在存在第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态;输出显示状态调整后的虚拟泊车图像,并突出显示第一车辆部件;并输出第一泊车提示信息,第一泊车提示信息用于提示驾驶员将第一车辆部件的当前实体状态调整为满足自动泊车条件时的状态。
其中,第一车辆部件包括车门、后备箱等,突出显示第一车辆部件与前述突出显示目标可泊车位的显示方式,可以相同,也可以不同。第一泊车提示信息可以以文字形式显示在虚拟泊车图像的空白位置,也可以以语音、短信等形式输出。
示例性的,如图3所示,虚拟泊车图像中的自车模型前左侧车门处于打开状态,并且进行了高亮显示,且虚拟泊车图像中还显示了“请关闭车门”的第一泊车提示信息,由此驾驶员可以直观、快速获知,需要将左前侧车门关闭才可以进行自动泊车。
本申请实施例提供的泊车交互方法,能够当不满足自动泊车条件时,除了向驾驶员提供第一泊车提示信息外,还可以将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态,并在输出显示状态调整后的虚拟泊车图像时,突出显示第一车辆部件,从而不仅可以使得自车模型与现实世界的车辆的状态保持一致,给予驾驶员沉浸式体验,还可以使得驾驶员将第一泊车提示信息和突出显示的第一车辆部件相结合,直观、快速查看到哪个第一车辆部件不满足自动泊车条件,需要如何调整,进而提高了驾驶员调整第一车辆部件的效率,由此提高了泊车效率。
在一种实施方式中,在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,本申请实施例还提供了如下方法:
在泊入阶段存在第二车辆部件的情况下,随着第二车辆部件实体状态的变化更新虚拟泊车图像中自车模型的第二车辆部件的虚拟状态,并在显示更新后的虚拟泊车图像时,突出显示第二车辆部件,第二车辆部件为泊车过程中实体状态发生可视化动态变化的车辆部件;和/或,
在泊入阶段存在第三车辆部件的情况下,在显示当前生成的虚拟泊车图像时,突出 显示第三车辆部件,并输出第二泊车提示信息,第三车辆部件为发生泊车操作错误的车辆部件,第二泊车提示信息用于提示驾驶员泊车操作错误。
其中,第二车辆部件包括方向盘,第三车辆部件包括制动踏板、加速踏板等。第二泊车提示信息可以以文字形式显示在虚拟泊车图像的空白位置,也可以以语音、短信等形式输出。与突出显示目标可泊车位等有所不同的是,第二车辆部件和第三车辆部件可能是车辆内的部件,所以突出显示第二车辆部件或者第三车辆部件时,所显示的自车模型范围可能不是全景范围,而是包括第二车辆部件或者第三车辆部件的局部视野范围,但是显示格式可以与突出显示目标可泊车位等相同,例如高亮显示。也就是说,当第二车辆部件或者第三车辆部件为车外表部件时,自车模型是全景范围,当第二车辆部件或者第三车辆部件为车内部件时,自车模型可以为包含第二车辆部件或者第三车辆部件的局部视野范围。
例如,如图4所示,在泊入阶段,自车模型中的方向盘会随着实体车辆的方向盘的转动而转动,并且还可以高亮显示方向盘,显示自车模型时显示局部视野范围。又如,在泊入阶段,还需要驾驶员配合踩制动踏板或加速踏板时,驾驶员操作错误,可以在虚拟泊车图像的自车模型中突出显示踩制动踏板或加速踏板,并显示泊车操作错误的文字提醒,从而可以让驾驶员快速获知哪里操作错误,并及时改正。
本申请实施例提供的泊车交互方法,能够在泊入阶段,通过使得自车模型第二车辆部件的虚拟状态与现实世界车辆的实体状态保持一致,并突出显示第二车辆部件,可以使得驾驶员无论是否在车内,均可直观查看自车的状态变化,从而向驾驶员提供了一种沉浸式的泊车交互体验。而当驾驶员在泊入阶段发生泊车操作错误时,通过在自车模型上突出显示发生泊车操作错误的第三车辆部件,并提供第二泊车提示信息,可以使得驾驶员结合提示信息和突出显示的第三车辆部件,直观快速定位出错误的泊车操作,并进行及时更正,从而提高了泊入效率。
相应于上述方法实施例,本申请的另一个实施例提供了一种泊车交互装置,如图5所示,该装置包括:
输出显示单元20,用于输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像;
接收单元22,用于接收针对虚拟泊车图像的泊车车位选择指令,泊车车位选择指令包括待泊入的目标可泊车位在虚拟泊车图像中的第一位置信息;
确定单元24,用于根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息;
突出显示单元26,用于根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位。
在一种实施方式中,确定单元24,包括:
获取模块,用于获取第一位置信息对应的目标雷达点云,以及获取实景泊车图像对应的深度图;
查找模块,用于从深度图中查找目标雷达点云对应的目标深度信息;
第一确定模块,用于将目标深度信息处的位置信息确定为第二位置信息。
在一种实施方式中,确定单元24还包括:
识别模块,用于在将目标深度信息处的位置信息确定为第二位置信息之前,基于计算机视觉算法识别实景泊车图像中的可泊车位作为候选可泊车位;
第一确定模块,用于在候选可泊车位的深度信息中包含目标深度信息的情况下,将目标深度信息处的位置信息确定为第二位置信息。
在一种实施方式中,确定单元24还包括:
第二确定模块,用于在候选可泊车位的深度信息中不包含目标深度信息的情况下,根据目标雷达点云周围的雷达点云确定第一参考物特征;
识别模块,还用于基于计算机视觉算法识别各个候选可泊车位周围的第二参考物特征;
第一确定模块,还用于将目标候选可泊车位的位置信息确定为第二位置信息,目标候选可泊车位为与第一参考物特征匹配成功的第二参考物特征对应的候选可泊车位。
在一种实施方式中,突出显示单元26,还用于在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,在泊入阶段,在实景泊车图像中突出显示自车到第二位置信息的规划路径。
在一种实施方式中,输出显示单元20,包括:
在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,第一车辆部件为当前实体状态不满足自动泊车条件的车辆部件;
在存在第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为第一车辆部件的当前实体状态;
输出显示状态调整后的虚拟泊车图像,并突出显示第一车辆部件;
该装置还包括:
第一输出提示单元,用于输出第一泊车提示信息,第一泊车提示信息用于提示驾驶员将第一车辆部件的当前实体状态调整为满足自动泊车条件时的状态。
在一种实施方式中,突出显示单元26,还用于在根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位之后,在泊入阶段存在第二车辆部件的情况下,随着第二车辆部件实体状态的变化 更新虚拟泊车图像中自车模型的第二车辆部件的虚拟状态,并在显示更新后的虚拟泊车图像时,突出显示第二车辆部件,第二车辆部件为泊车过程中实体状态发生可视化动态变化的车辆部件;和/或,在泊入阶段存在第三车辆部件的情况下,在显示当前生成的虚拟泊车图像时,突出显示第三车辆部件,第三车辆部件为发生泊车操作错误的车辆部件;
该装置还包括:
第二输出提示单元,用于在突出显示第三车辆部件时,输出第二泊车提示信息,第二泊车提示信息用于提示驾驶员泊车操作错误。
在一种实施方式中,输出显示单元20,包括:
第一输出显示模块,用于在启动自动泊车系统后,输出显示当前生成的虚拟泊车图像;
第二输出显示模块,用于当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的虚拟泊车图像的基础上,输出显示当前生成的实景泊车图像。
在一种实施方式中,实景泊车图像位于虚拟泊车图像的上层,且实景泊车图像与虚拟泊车图像的显示位置不重叠。
本申请实施例提供的泊车交互装置,能够同时显示虚拟泊车图像及其对应的实景泊车图像,并当驾驶员在虚拟泊车图像中选择目标可泊车位后,还可以在虚拟泊车图像和实景泊车图像中均突出显示出该目标可泊车位,从而可以使得驾驶员直观获取到目标可泊车位在虚拟泊车图像和实景泊车图像中对应关系,而无需再耗费精力将虚拟泊车图像与现实物理世界做比对,从而不仅提高了泊入效率,还提高了自动泊车系统的可靠性。
基于上述方法实施例,本申请的另一实施例提供了一种存储介质,其上存储有可执行指令,该指令被处理器执行时使处理器实现如上任一实施方式所述的方法。
基于上述方法实施例,本申请的另一实施例提供了一种电子设备或计算机设备,包括:
一个或多个处理器;
存储装置,用于存储一个或多个程序,
其中,当所述一个或多个程序被所述一个或多个处理器执行时,使得电子设备或计算机设备实现如上任一实施方式所述的方法。
基于上述方法实施例,本申请的另一实施例提供了车辆,该车辆包含如上任一实施方式所述的装置,或者包含如上所述的电子设备。
如图6所示,车辆包括显示装置30、ECU(Electronic Control Unit,电子控制单元)32、雷达34和摄像头36。雷达34用于生成雷达点云,摄像头36用于采集实景泊车图像,ECU32用于根据雷达点云生成虚拟泊车图像,并将虚拟泊车图像和实景泊车图像传输给显示装置30进行输出显示。显示装置30用于接收针对虚拟泊车图像的泊车车位选择指 令,泊车车位选择指令包括待泊入的目标可泊车位在虚拟泊车图像中的第一位置信息,并将泊车车位选择指令发送给ECU32,由ECU32根据虚拟泊车图像和实景泊车图像的对应关系,确定目标可泊车位在实景泊车图像中的第二位置信息,并将第一位置信息和第二位置信息发送给显示装置30,由显示装置30根据第一位置信息在虚拟泊车图像中突出显示目标可泊车位,以及根据第二位置信息在实景泊车图像中突出显示目标可泊车位。其中,显示装置30包括车载中控屏、基于HUD的前挡风玻璃等。
车辆还可以包括:GPS(Global Positioning System,全球定位系统)定位装置、T-Box(TelematicsBox,远程信息处理器)、V2X(Vehicle-to-Everything,车联网)模块。其中,GPS定位装置用于获取车辆的当前地理位置;T-Box可以作为网关与外部设备进行通信;V2X模块用于与其他车辆、路侧设备等进行通信。
上述装置实施例与方法实施例相对应,与该方法实施例具有同样的技术效果,具体说明参见方法实施例。装置实施例是基于方法实施例得到的,具体的说明可以参见方法实施例部分,此处不再赘述。本领域普通技术人员可以理解:附图只是一个实施例的示意图,附图中的模块或流程并不一定是实施本申请所必须的。
本领域普通技术人员可以理解:实施例中的装置中的模块可以按照实施例描述分布于实施例的装置中,也可以进行相应变化位于不同于本实施例的一个或多个装置中。上述实施例的模块可以合并为一个模块,也可以进一步拆分成多个子模块。
最后应说明的是:以上实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请实施例技术方案的精神和范围。

Claims (20)

  1. 一种泊车交互方法,其特征在于,所述方法包括:
    输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像;
    接收针对所述虚拟泊车图像的泊车车位选择指令,所述泊车车位选择指令包括待泊入的目标可泊车位在所述虚拟泊车图像中的第一位置信息;
    根据所述虚拟泊车图像和所述实景泊车图像的对应关系,确定所述目标可泊车位在所述实景泊车图像中的第二位置信息;
    根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图像中突出显示所述目标可泊车位。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述虚拟泊车图像和所述实景泊车图像的对应关系,确定所述目标可泊车位在所述实景泊车图像中的第二位置信息,包括:
    获取所述第一位置信息对应的目标雷达点云,以及获取所述实景泊车图像对应的深度图;
    从所述深度图中查找所述目标雷达点云对应的目标深度信息;
    将所述目标深度信息处的位置信息确定为所述第二位置信息。
  3. 根据权利要求2所述的方法,其特征在于,在将所述目标深度信息处的位置信息确定为所述第二位置信息之前,所述方法还包括:
    基于计算机视觉算法识别所述实景泊车图像中的可泊车位作为候选可泊车位;
    所述将所述目标深度信息处的位置信息确定为所述第二位置信息包括:
    在所述候选可泊车位的深度信息中包含所述目标深度信息的情况下,将所述目标深度信息处的位置信息确定为所述第二位置信息。
  4. 根据权利要求3所述的方法,其特征在于,所述方法还包括:
    在所述候选可泊车位的深度信息中不包含所述目标深度信息的情况下,根据所述目标雷达点云周围的雷达点云确定第一参考物特征;
    基于所述计算机视觉算法识别各个所述候选可泊车位周围的第二参考物特征;
    将目标候选可泊车位的位置信息确定为所述第二位置信息,所述目标候选可泊车位为与所述第一参考物特征匹配成功的所述第二参考物特征对应的所述候选可泊车位。
  5. 根据权利要求1所述的方法,其特征在于,在根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图 像中突出显示所述目标可泊车位之后,所述方法还包括:
    在泊入阶段,在所述实景泊车图像中突出显示自车到所述第二位置信息的规划路径。
  6. 根据权利要求1所述的方法,其特征在于,所述输出显示当前生成的虚拟泊车图像包括:
    在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,所述第一车辆部件为当前实体状态不满足自动泊车条件的车辆部件;
    在存在所述第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为所述第一车辆部件的所述当前实体状态;
    输出显示状态调整后的所述虚拟泊车图像,并突出显示所述第一车辆部件;
    所述方法还包括:
    输出第一泊车提示信息,所述第一泊车提示信息用于提示驾驶员将所述第一车辆部件的所述当前实体状态调整为满足所述自动泊车条件时的状态。
  7. 根据权利要求1所述的方法,其特征在于,在根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图像中突出显示所述目标可泊车位之后,所述方法还包括:
    在泊入阶段存在第二车辆部件的情况下,随着所述第二车辆部件实体状态的变化更新所述虚拟泊车图像中自车模型的所述第二车辆部件的虚拟状态,并在显示更新后的所述虚拟泊车图像时,突出显示所述第二车辆部件,所述第二车辆部件为泊车过程中实体状态发生可视化动态变化的车辆部件;和/或,
    在泊入阶段存在第三车辆部件的情况下,在显示当前生成的所述虚拟泊车图像时,突出显示所述第三车辆部件,并输出第二泊车提示信息,所述第三车辆部件为发生泊车操作错误的车辆部件,所述第二泊车提示信息用于提示驾驶员泊车操作错误。
  8. 根据权利要求1所述的方法,其特征在于,所述输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像,包括:
    在启动自动泊车系统后,输出显示当前生成的所述虚拟泊车图像;
    当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的所述虚拟泊车图像的基础上,输出显示当前生成的所述实景泊车图像。
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述实景泊车图像位于所述虚拟泊车图像的上层,且所述实景泊车图像与所述虚拟泊车图像的显示位置不重叠。
  10. 一种泊车交互装置,其特征在于,所述装置包括:
    输出显示单元,用于输出显示当前生成的虚拟泊车图像及其对应的实景泊车图像;
    接收单元,用于接收针对所述虚拟泊车图像的泊车车位选择指令,所述泊车车位选择指令包括待泊入的目标可泊车位在所述虚拟泊车图像中的第一位置信息;
    确定单元,用于根据所述虚拟泊车图像和所述实景泊车图像的对应关系,确定所述目标可泊车位在所述实景泊车图像中的第二位置信息;
    突出显示单元,用于根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图像中突出显示所述目标可泊车位。
  11. 根据权利要求10所述的装置,其特征在于,所述确定单元,包括:
    获取模块,用于获取所述第一位置信息对应的目标雷达点云,以及获取所述实景泊车图像对应的深度图;
    查找模块,用于从所述深度图中查找所述目标雷达点云对应的目标深度信息;
    第一确定模块,用于将所述目标深度信息处的位置信息确定为所述第二位置信息。
  12. 根据权利要求11所述的装置,其特征在于,所述确定单元还包括:
    识别模块,用于在将所述目标深度信息处的位置信息确定为所述第二位置信息之前,基于计算机视觉算法识别所述实景泊车图像中的可泊车位作为候选可泊车位;
    所述第一确定模块,用于在所述候选可泊车位的深度信息中包含所述目标深度信息的情况下,将所述目标深度信息处的位置信息确定为所述第二位置信息。
  13. 根据权利要求12所述的装置,其特征在于,所述确定单元还包括:
    第二确定模块,用于在所述候选可泊车位的深度信息中不包含所述目标深度信息的情况下,根据所述目标雷达点云周围的雷达点云确定第一参考物特征;
    所述识别模块,还用于基于所述计算机视觉算法识别各个所述候选可泊车位周围的第二参考物特征;
    所述第一确定模块,还用于将目标候选可泊车位的位置信息确定为所述第二位置信息,所述目标候选可泊车位为与所述第一参考物特征匹配成功的所述第二参考物特征对应的所述候选可泊车位。
  14. 根据权利要求10所述的装置,其特征在于,所述突出显示单元,还用于在根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图像中突出显示所述目标可泊车位之后,在泊入阶段,在所述实景泊车图像中突出显示自车到所述第二位置信息的规划路径。
  15. 根据权利要求10所述的装置,其特征在于,所述输出显示单元,用于在接收到自动泊车系统开启指令后,判断是否存在第一车辆部件,所述第一车辆部件为当前实体 状态不满足自动泊车条件的车辆部件;在存在所述第一车辆部件的情况下,将初始生成的虚拟泊车图像中自车模型的第一车辆部件的虚拟状态调整为所述第一车辆部件的所述当前实体状态;输出显示状态调整后的所述虚拟泊车图像,并突出显示所述第一车辆部件;
    所述装置还包括:
    第一输出提示单元,用于输出第一泊车提示信息,所述第一泊车提示信息用于提示驾驶员将所述第一车辆部件的所述当前实体状态调整为满足所述自动泊车条件时的状态。
  16. 根据权利要求10所述的装置,其特征在于,所述突出显示单元,还用于在根据所述第一位置信息在所述虚拟泊车图像中突出显示所述目标可泊车位,以及根据所述第二位置信息在所述实景泊车图像中突出显示所述目标可泊车位之后,在泊入阶段存在第二车辆部件的情况下,随着所述第二车辆部件实体状态的变化更新所述虚拟泊车图像中自车模型的所述第二车辆部件的虚拟状态,并在显示更新后的所述虚拟泊车图像时,突出显示所述第二车辆部件,所述第二车辆部件为泊车过程中实体状态发生可视化动态变化的车辆部件;和/或,在泊入阶段存在第三车辆部件的情况下,在显示当前生成的所述虚拟泊车图像时,突出显示所述第三车辆部件,所述第三车辆部件为发生泊车操作错误的车辆部件;
    所述装置还包括:
    第二输出提示单元,用于在突出显示所述第三车辆部件时,输出第二泊车提示信息,所述第二泊车提示信息用于提示驾驶员泊车操作错误。
  17. 根据权利要求10-16中任一项所述的装置,其特征在于,所述输出显示单元,包括:
    第一输出显示模块,用于在启动自动泊车系统后,输出显示当前生成的所述虚拟泊车图像;
    第二输出显示模块,用于当接收到寻库指令或者档位切换至R档时,在输出显示当前生成的所述虚拟泊车图像的基础上,输出显示当前生成的所述实景泊车图像。
  18. 一种存储介质,其上存储有计算机程序,其特征在于,所述程序被处理器执行时实现如权利要求1-9中任一项所述的方法。
  19. 一种电子设备,其特征在于,所述电子设备包括:
    一个或多个处理器;
    存储装置,用于存储一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述电子设备实现如权 利要求1-9中任一项所述的方法。
  20. 一种车辆,其特征在于,所述车辆包含如权利要求10-17中任一项所述的装置,或者包含如权利要求19所述的电子设备。
PCT/CN2022/100663 2022-06-17 2022-06-23 泊车交互方法、装置、存储介质、电子设备及车辆 WO2023240667A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/482,086 US20240034346A1 (en) 2022-06-17 2023-10-06 Parking interaction method and apparatus, storage medium, electronic device and vehicle

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210684593.2 2022-06-17
CN202210684593.2A CN117284278A (zh) 2022-06-17 2022-06-17 泊车交互方法、装置、存储介质、电子设备及车辆

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/482,086 Continuation US20240034346A1 (en) 2022-06-17 2023-10-06 Parking interaction method and apparatus, storage medium, electronic device and vehicle

Publications (1)

Publication Number Publication Date
WO2023240667A1 true WO2023240667A1 (zh) 2023-12-21

Family

ID=89192899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/100663 WO2023240667A1 (zh) 2022-06-17 2022-06-23 泊车交互方法、装置、存储介质、电子设备及车辆

Country Status (3)

Country Link
US (1) US20240034346A1 (zh)
CN (1) CN117284278A (zh)
WO (1) WO2023240667A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103101496A (zh) * 2011-11-14 2013-05-15 现代摩比斯株式会社 利用图像的泊车引导系统及该泊车引导方法
CN104571101A (zh) * 2013-10-17 2015-04-29 厦门英拓通讯科技有限公司 一种可实现车辆任意位置移动的系统
EP3318470A1 (de) * 2016-11-02 2018-05-09 Valeo Schalter und Sensoren GmbH Verfahren zur auswahl einer parkzone aus einer mehrzahl von parkzonen für ein kraftfahrzeug innerhalb eines parkbereichs, parkassistenzsystem für ein kraftfahrzeug und kraftfahrzeug mit einem parkassistenzsystem
CN109693666A (zh) * 2019-02-02 2019-04-30 中国第一汽车股份有限公司 一种用于泊车的人机交互系统及泊车方法
CN111891119A (zh) * 2020-06-28 2020-11-06 东风汽车集团有限公司 一种自动泊车控制方法及系统
CN113715810A (zh) * 2021-10-15 2021-11-30 广州小鹏汽车科技有限公司 泊车方法、泊车装置、车辆及可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YI WEN, DING ZONG-YANG; LI ZE-BIN; SUN GUO-ZHENG; HE BAN-BEN: "Panoramic parking system based on virtual reality technology and its extended application", AUTO SCI-TECH, vol. 2, no. 2, 25 March 2020 (2020-03-25), pages 2 - 9, XP093118288, ISSN: 1005-2550, DOI: 10.3969/j.issn.1005-2550.2020.02.001 *

Also Published As

Publication number Publication date
CN117284278A (zh) 2023-12-26
US20240034346A1 (en) 2024-02-01

Similar Documents

Publication Publication Date Title
US11127373B2 (en) Augmented reality wearable system for vehicle occupants
US11273821B2 (en) Parking assistance method and parking assistance device
US20120087546A1 (en) Method and device for determining processed image data about a surround field of a vehicle
CN113119956B (zh) 一种基于自动驾驶的交互方法和装置
CN112334908A (zh) 用于自主车辆的短语识别模型
US20210356257A1 (en) Using map information to smooth objects generated from sensor data
US11643070B2 (en) Parking assist apparatus displaying perpendicular-parallel parking space
JP2019014300A (ja) 車両制御システム、車両制御方法、およびプログラム
WO2021253955A1 (zh) 一种信息处理方法、装置、车辆以及显示设备
GB2510698A (en) Driver assistance system
WO2023240667A1 (zh) 泊车交互方法、装置、存储介质、电子设备及车辆
WO2024001554A1 (zh) 车辆导航方法、装置、设备、存储介质和计算机程序产品
CN112874511A (zh) 汽车的自动驾驶控制方法、装置及计算机存储介质
EP4358524A1 (en) Mr service platform for providing mixed reality automotive meta service, and control method therefor
CN116136418A (zh) 导航引导信息生成方法、导航引导方法、程序产品和介质
CN107908356B (zh) 基于智能设备的信息常驻的方法和装置
CN115416486A (zh) 车辆变道信息显示方法、装置、电子设备及存储介质
CN117794801A (zh) 具有平滑切换、停车完成或停车校正的停车辅助
CN111063214A (zh) 一种车辆的定位方法、车载设备及存储介质
RU2793737C1 (ru) Способ интеллектуальной парковки и устройства для его реализации
WO2024087959A1 (zh) 车辆驾驶状态的展示方法、装置、电子设备和存储介质
US20240140403A1 (en) Moving body control device, moving body control method, and moving body control program
JP7255429B2 (ja) 表示制御装置および表示制御プログラム
JP7159851B2 (ja) 運転支援装置、車両、情報提供装置、運転支援システム、及び運転支援方法
US20240083459A1 (en) Vehicle control device and vehicle control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22930145

Country of ref document: EP

Kind code of ref document: A1