WO2023072093A1 - Virtual parking space determination method, display method, apparatus, device, medium and program - Google Patents

Virtual parking space determination method, display method, apparatus, device, medium and program

Info

Publication number: WO2023072093A1
Authority: WO (WIPO PCT)
Prior art keywords: vehicle, parking, user interface, parking space, target
Application number: PCT/CN2022/127434
Other languages: English (en), French (fr)
Inventor: 周磊 (Zhou Lei), 蔡佳 (Cai Jia)
Original Assignee: 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP22885970.8A (published as EP4414965A1)
Publication of WO2023072093A1
Priority to US18/645,689 (published as US20240296737A1)

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B62 LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D MOTOR VEHICLES; TRAILERS
    • B62D15/00 Steering not otherwise provided for
    • B62D15/02 Steering position indicators; Steering position determination; Steering aids
    • B62D15/027 Parking aids, e.g. instruction means
    • B62D15/0285 Parking performed automatically
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/04 Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
    • G08G1/14 Traffic control systems for road vehicles indicating individual free spaces in parking areas
    • G08G1/141 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces
    • G08G1/143 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces inside the vehicles
    • G08G1/144 Traffic control systems for road vehicles indicating individual free spaces in parking areas with means giving the indication of available parking spaces on portable or mobile units, e.g. personal digital assistant [PDA]
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06 Automatic manoeuvring for parking
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 Interaction between the driver and the control system

Definitions

  • The present application relates to the technical field of automatic parking, and in particular to a virtual parking space determination method, a display method, an apparatus, a device, a medium, and a program.
  • Automatic parking technology is a technology that automatically parks a vehicle into a parking space by detecting the actual environment around the vehicle in real time. In the process of using the automatic parking technology to park a vehicle into a parking space, it is necessary to determine a virtual parking space, and then realize automatic parking based on the virtual parking space. Therefore, how to determine the virtual parking space has become an urgent problem to be solved.
  • The present application provides a method for determining a virtual parking space, a display method, an apparatus, a device, a medium, and a program, which can determine a virtual parking space and realize automatic parking. The technical solution is as follows:
  • a method for determining a virtual parking space is provided.
  • In the method, environment information around a target vehicle is obtained, where the target vehicle is a vehicle to be parked and the environment information includes parking information of one or more parked vehicles; a reference vehicle is determined based on the parking information of the one or more parked vehicles, the reference vehicle being one of the one or more parked vehicles; and a target virtual parking space is determined based on the parking information of the reference vehicle, where the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
  • In the solution, a vehicle is selected from the one or more parked vehicles around the target vehicle as a reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle. This ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking. Determining the reference vehicle first and then determining the target virtual parking space based on it allows the virtual parking space to be determined more quickly and accurately. The reference vehicle may be selected by the user or determined by the system.
  • The environment information around the target vehicle includes at least one of visual data and radar data, and the radar data includes ultrasonic radar data, laser radar data, and millimeter-wave radar data. That is, the technical solution provided by this application can work with any of these data types, which broadens its scope of application.
  • The parked vehicle can be located in a non-marked parking area, such as a non-marked parking lot, the entrance of a hotel, or either side of a road or aisle. It can also be parked in a marked parking space, in particular when the parked vehicle is not parked within the area indicated by the marked parking space, which affects the normal parking of the target vehicle, according to the markings, into the parking space adjacent to that parked vehicle.
  • the vehicle-mounted surround-view camera is used to collect the actual environment around the target vehicle to obtain visual data around the target vehicle, such as a surround-view image.
  • Sensors such as ultrasonic radar, laser radar, and millimeter-wave radar are used to collect the actual environment around the target vehicle to obtain radar data around the target vehicle.
  • the surround view image includes parking information of the one or more parked vehicles.
  • the multiple parked vehicles include two or more parked vehicles.
  • A first user interface is displayed, where the first user interface includes the parking position and parking direction of the one or more parked vehicles, and the parking position and parking direction of the one or more parked vehicles are determined according to the parking information of the one or more parked vehicles.
  • In response to the user's first operation, a second user interface including the reference vehicle is displayed, where the first operation indicates selection of the reference vehicle from the one or more parked vehicles.
  • the user triggers the first operation based on the first user interface.
  • When the electronic device detects the user's first operation, it displays a second user interface in response. The second user interface includes the reference vehicle, so that the reference vehicle is determined from the one or more parked vehicles.
  • In this way, the user can learn the environment information around the target vehicle through the first user interface and, with reference to it, select a reference vehicle from the one or more parked vehicles, so that the reference vehicle finally selected meets the user's actual needs, thereby satisfying the user's personalized needs.
  • The first user interface can take many forms, and the way the user selects the reference vehicle based on it differs accordingly; the forms are introduced separately below.
  • In one form, a surround-view image around the target vehicle and a vehicle selection area are displayed in the first user interface; the vehicle selection area includes one or more operation identifiers, and the one or more operation identifiers correspond one-to-one with the one or more parked vehicles.
  • a first operation by the user on any one of the one or more operation identifiers is detected, and a second user interface is displayed in response to the first operation by the user.
  • That is, the user triggers the first operation on any one of the one or more operation identifiers included in the vehicle selection area.
  • the parked vehicle corresponding to any operation identifier is determined as the reference vehicle, and the second user interface is displayed.
  • Because the surround-view image around the target vehicle is an image of the real environment, the user can learn the environment information around the target vehicle more intuitively.
  • the target vehicle is also included in the surround view image.
  • the first user interface further includes an icon for indicating the target vehicle.
  • the second user interface may also include the target vehicle, and the second user interface may also include an icon for indicating the target vehicle.
  • The surround-view image is a two-dimensional surround-view image or a three-dimensional surround-view image.
  • In another form, the parking position and parking direction of the one or more parked vehicles are determined, and according to them, one or more virtual vehicle models are displayed in the first user interface, the one or more virtual vehicle models corresponding one-to-one with the one or more parked vehicles.
  • a user's first operation on any one of the one or more virtual vehicle models is detected, and a second user interface is displayed in response to the user's first operation.
  • the user triggers a first operation on any one of the one or more virtual vehicle models.
  • the parked vehicle corresponding to any virtual vehicle model is determined as the reference vehicle, and the second user interface is displayed.
  • In this way, the user can directly operate the virtual vehicle models without a separate vehicle selection area, and the user does not need to confirm which operation identifier in the vehicle selection area corresponds to which parked vehicle, which improves the efficiency of determining the reference vehicle.
  • the first user interface further includes a virtual vehicle model corresponding to the target vehicle, and the first user interface further includes an icon for indicating the target vehicle.
  • the second user interface may also include a virtual vehicle model corresponding to the target vehicle, and the second user interface may also include an icon for indicating the target vehicle.
  • the aforementioned virtual vehicle model may be a two-dimensional virtual vehicle model, or may be a three-dimensional virtual vehicle model.
  • the user's first operation includes any one of the user's touch, tap and slide actions on the first user interface.
  • the user selects a reference vehicle by touching the virtual vehicle model, or taps the virtual vehicle model to select the reference vehicle, or slides the virtual vehicle model to select the reference vehicle.
  • Similarly, the user selects the reference vehicle by touching the operation identifier, tapping the operation identifier, or sliding the operation identifier, which is not limited in this application.
  • the second user interface only includes the reference vehicle, that is, the second user interface does not include other parked vehicles.
  • the second user interface includes not only the reference vehicle, but also other parked vehicles.
  • the second user interface further includes a second vehicle, and the second vehicle is any vehicle in the one or more parked vehicles except the reference vehicle, and the display manner of the reference vehicle is different from the display manner of the second vehicle.
  • For example, the display color of the reference vehicle differs from that of other parked vehicles, or the outline of the reference vehicle has a different thickness than the outlines of other parked vehicles, or the background texture of the reference vehicle differs from that of other parked vehicles, and so on.
  • the user can visually distinguish the reference vehicle included in the second user interface from other parked vehicles.
  • the second user interface further includes an indicator, which is used to indicate the reference vehicle.
  • For example, the electronic device inputs the surround-view image into a vehicle detection model to obtain the parking position of the first vehicle and a partial image, where the partial image is the image area of the surround-view image in which the first vehicle is located. Afterwards, the parking direction of the first vehicle is determined according to the following steps (1)-(2).
  • In this case, the parking information of the first vehicle is the partial image of the first vehicle.
  • (1) The partial image of the first vehicle is input into a key information detection model to obtain the attribute information of multiple key points and the attribute information of multiple key lines of the first vehicle output by the key information detection model.
  • the attribute information of the key point includes at least one of the position of the key point, the category of the key point, and the visibility of the key point, and the visibility of the key point is used to indicate whether the corresponding key point is blocked.
  • The attribute information of a key line includes at least one of the position of the key line's center point, the visibility of the key line, the inclination of the key line, and the length of the key line; the visibility of the key line is used to indicate whether the corresponding key line is blocked.
  • the key points include the center points of the four wheels, the center point of the vehicle body, the center point of the car logo, the center points of the two taillights, and so on.
  • the key line of the first vehicle includes the vertical centerline at the position where the license plate is installed at the front and rear of the vehicle, the vertical centerline between the logo and the roof of the vehicle, and the like.
  • (2) The attribute information of the multiple key points and the attribute information of the multiple key lines of the first vehicle are input into a pose estimation model, so as to obtain the parking direction of the first vehicle in the image coordinate system of the partial image, as output by the pose estimation model.
  • the parking direction of the first vehicle in the image coordinate system of the partial image is converted to the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
  • the parking direction includes the heading of the vehicle and the direction of the vehicle body.
  • In some embodiments, the parking direction output by the pose estimation model includes not only the heading of the vehicle and the direction of the vehicle body, but also the angle of the vehicle body. Since the external parameters of the vehicle-mounted surround-view camera have a certain impact on the body angle, after the pose estimation model outputs the body angle, external parameter compensation needs to be performed on it.
  • First, a compensation angle is determined; the compensation angle is the angle between the imaging plane of the vehicle-mounted surround-view camera and the line connecting the camera's focal point with the center point of the first vehicle.
  • Then, the body angle output by the pose estimation model is added to the compensation angle to obtain the body angle of the first vehicle in the image coordinate system of the partial image.
  • the parking direction of the first vehicle in the image coordinate system of the partial image is transformed into the body coordinate system of the target vehicle, so as to obtain the body direction of the first vehicle.
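  • For illustration, the compensation step can be sketched as follows. This is a minimal example under assumed inputs (3-D points for the camera's focal point and the vehicle's center, and a unit normal for the imaging plane); the function and argument names are not from the patent text.

```python
import numpy as np

# Minimal sketch of the external-parameter compensation described above. Only
# the geometric relation comes from the text: the compensation angle is the
# angle between the camera-to-vehicle line and the imaging plane, and it is
# added to the body angle output by the pose estimation model.

def compensated_body_angle(body_angle_pred, camera_center, vehicle_center, plane_normal):
    """Return the body angle of the first vehicle in the image coordinate
    system of the partial image, after external-parameter compensation."""
    ray = np.asarray(vehicle_center, dtype=float) - np.asarray(camera_center, dtype=float)
    ray /= np.linalg.norm(ray)
    # Angle between the ray and the imaging plane equals 90 degrees minus the
    # angle between the ray and the plane's unit normal.
    cos_to_normal = np.clip(np.dot(ray, np.asarray(plane_normal, dtype=float)), -1.0, 1.0)
    compensation = abs(np.pi / 2 - np.arccos(cos_to_normal))
    return body_angle_pred + compensation
```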
  • The technical solution provided by this application determines the parking direction of a vehicle through the attribute information of key points and key lines. For the same vehicle, the attribute information of different key points and key lines can be obtained relatively easily through simulation data, CAD models, and the like, so a large number of samples can be obtained, and the key information detection model and the pose estimation model can be trained on these samples, which improves the accuracy and robustness of determining the parking direction of the vehicle.
  • In some embodiments, the present application fuses multiple surround-view images to determine the parking direction of the first vehicle.
  • the parking position and the parking direction of the one or more parked vehicles are determined based on the parking information of the one or more parked vehicles. Based on the parking positions and parking directions of the one or more parked vehicles, the reference vehicle is determined using a preset model.
  • the parking position and the parking direction of the one or more parked vehicles are determined.
  • A parking space is determined based on the parking positions of the one or more parked vehicles; the parking space is the area of the parking area excluding the parking positions of the one or more parked vehicles. The distance between the target vehicle and the parking space is determined, the direction of travel of the target vehicle is determined, and the distance, the direction of travel, and the parking directions of the one or more parked vehicles are input into the preset model to determine the reference vehicle.
  • the preset model is obtained by training based on multiple sample vehicles in advance, for example, by means of reinforcement learning.
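  • The text leaves the preset model's interface open; the following is a minimal sketch, assuming the trained model is exposed as a callable that scores a hand-built feature vector for each parked vehicle. The feature layout and field names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of reference-vehicle selection with a preset model. The text
# only states that the distance to the parking space, the target vehicle's
# direction of travel, and the parked vehicles' parking directions are fed
# into a model trained in advance (e.g. by reinforcement learning).

def choose_reference_vehicle(parked_vehicles, dist_to_space, travel_dir, score_model):
    """Score each parked vehicle with the preset model; return the best one."""
    best_vehicle, best_score = None, -np.inf
    for vehicle in parked_vehicles:
        features = np.array([dist_to_space, travel_dir,
                             vehicle["heading"], vehicle["body_dir_angle"]])
        score = score_model(features)  # preset model trained on sample vehicles
        if score > best_score:
            best_vehicle, best_score = vehicle, score
    return best_vehicle
```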
  • The implementation process of determining the parking position and parking direction of the one or more parked vehicles is described in the first implementation above and is not repeated here.
  • An implementation of determining the parking space based on the parking positions of the one or more parked vehicles is described below and is not discussed here.
  • In some embodiments, the present application can determine the reference vehicle not only through a preset model but also according to a parking attitude rule. That is, based on the parking information of the one or more parked vehicles, the parking position and parking direction of the one or more parked vehicles are determined, and based on these, the reference vehicle is determined using the parking attitude rule.
  • the parking attitude rule refers to a rule for determining the reference vehicle according to the priority of the vehicle body direction.
  • For example, the priority of the vehicle body direction from high to low is vertical, horizontal, oblique. That is, if among the one or more parked vehicles there is a parked vehicle whose body direction is vertical, that vehicle is determined as the reference vehicle. If there is no parked vehicle whose body direction is vertical but there is one whose body direction is horizontal, the parked vehicle whose body direction is horizontal is determined as the reference vehicle. Otherwise, a parked vehicle whose body direction is oblique is determined as the reference vehicle.
  • If multiple parked vehicles share the selected body direction, a vehicle may be randomly selected as the reference vehicle, or a vehicle may be selected according to another rule, for example the vehicle closest to the target vehicle; this application is not limited thereto. A sketch of this rule is given below.
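  • A minimal sketch of the parking attitude rule, assuming each parked vehicle carries a classified body direction and a precomputed distance to the target vehicle (the field names are illustrative assumptions):

```python
# Vertical body direction takes priority over horizontal, which takes
# priority over oblique; ties are broken by the distance-based fallback
# mentioned above.

PRIORITY = {"vertical": 0, "horizontal": 1, "oblique": 2}

def reference_by_attitude_rule(parked_vehicles):
    """Return the parked vehicle with the highest-priority body direction,
    breaking ties by distance to the target vehicle."""
    best_rank = min(PRIORITY[v["body_dir"]] for v in parked_vehicles)
    candidates = [v for v in parked_vehicles if PRIORITY[v["body_dir"]] == best_rank]
    return min(candidates, key=lambda v: v["distance_to_target"])
```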
  • In this way, after the parking positions and parking directions of the one or more parked vehicles are determined, the reference vehicle can be determined automatically using the preset model or the parking attitude rule, which avoids requiring the user to manually select a reference vehicle and thereby simplifies the user's operation.
  • the parking direction of the target vehicle includes the head orientation and the direction of the vehicle body of the target vehicle.
  • the body direction of the target vehicle is the direction of the body of the target vehicle relative to a reference object, and the reference object includes a road baseline, a reference vehicle or other reference objects.
  • the direction of the body of the target vehicle is parallel, perpendicular, or inclined to the body of the reference vehicle.
  • the implementation manner of determining the target virtual parking space based on the parking information of the reference vehicle includes: determining the parking direction of the reference vehicle based on the parking information of the reference vehicle. An available parking space is determined based on the parking information of the one or more parked vehicles. Based on the parking direction of the reference vehicle and the parking space, the target virtual parking space is determined.
  • For example, the ground area in the surround-view image is extracted; the feature of each of the multiple pixels included in the ground area is extracted; based on the features of the multiple pixels, the pixels are clustered to obtain multiple regions; a parking area is determined from the multiple regions; and the parking space in the parking area is determined based on the parking information of the one or more parked vehicles.
  • the surround view image is used as the input of the ground segmentation model to obtain the ground area output by the ground segmentation model.
  • the ground area is used as an input of the feature extraction model to obtain features of a plurality of pixel points included in the ground area output by the feature extraction model.
  • Based on the features of the multiple pixels, the pixels are clustered to obtain multiple regions. The region feature corresponding to each of the multiple regions is determined, and the semantic category of each region is determined based on the region features of the multiple regions.
  • If one of the multiple regions has the parking semantic category, that region is determined as the parking area, and the parking space is determined from it based on the parking information of the one or more parked vehicles. If none of the multiple regions has the parking semantic category, the parking area is determined from the multiple regions based on their region features and semantic categories, and the parking space is then determined from the parking area based on the parking information of the one or more parked vehicles.
  • the ground area includes a parking area, a road area, a manhole cover area, a lawn area, and the like.
  • Clustering the multiple pixels based on their features means grouping pixels whose features are close to one another into the same region, thereby obtaining multiple regions.
  • In one manner, the features of all pixels included in a region are averaged to obtain the region feature corresponding to that region. In another manner, the features of all pixels included in the region are fused to obtain the region feature; for example, the features of all the pixels are combined into a matrix, and the matrix is used as the region feature of the region.
  • The process of determining the semantic category of each region based on the region features includes: for each of the multiple regions, determining the distances between the region feature corresponding to that region and the region features in a stored correspondence between region features and semantic categories, and determining the semantic category corresponding to the stored region feature closest to the region's feature as the semantic category of that region.
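  • The following is a minimal sketch of the two steps above: averaging per-pixel features into a region feature, then assigning the semantic category of the nearest stored region feature. Array shapes and names are assumptions made for this example.

```python
import numpy as np

def region_feature(pixel_features):
    """pixel_features: (num_pixels, feature_dim) array for one region.
    Averaging is one of the two aggregation manners described above."""
    return np.mean(pixel_features, axis=0)

def semantic_category(region_feat, stored_features, stored_categories):
    """stored_features: (num_entries, feature_dim) array; stored_categories:
    semantic-category labels in the same order. Returns the category of the
    stored region feature closest to region_feat."""
    distances = np.linalg.norm(stored_features - region_feat, axis=1)
    return stored_categories[int(np.argmin(distances))]
```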
  • In some embodiments, the present application can perform multi-frame fusion over multiple surround-view images. That is, for the multiple surround-view images, the ground area in each surround-view image is determined according to the above method to obtain multiple ground areas. The overlapping areas among the multiple ground areas are then acquired, the features of each pixel in the overlapping areas are extracted according to the above method, and clustering is performed to determine the parking space.
  • a plurality of candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and a parking space, and a target virtual parking space is determined from the plurality of candidate virtual parking spaces in response to a second user operation.
  • a plurality of candidate virtual parking spaces are determined, and a fourth user interface is displayed, where the fourth user interface includes the plurality of candidate virtual parking spaces.
  • a third user interface is displayed, and the third user interface includes the target virtual parking space.
  • the fourth user interface is displayed.
  • the user triggers a second operation on the fourth user interface to determine a target virtual parking space from the plurality of candidate virtual parking spaces.
  • the third user interface further displays a parking space, and the target virtual parking space is located in the parking space.
  • In one implementation, determining the multiple candidate virtual parking spaces includes: taking the parking direction of the reference vehicle as the parking direction of the target vehicle, and determining multiple candidate virtual parking spaces in the parking space, so that the parking direction indicated by the multiple candidate virtual parking spaces is the parking direction of the target vehicle. That is, the parking direction of the reference vehicle is directly used as the parking direction of the target vehicle, and multiple candidate virtual parking spaces are then determined in the parking space.
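  • As a sketch of this step, the candidates can be laid out side by side in the free row, all oriented along the reference vehicle's parking direction. The row parameterization and the slot-size parameters below are illustrative assumptions, not the patented layout procedure.

```python
import numpy as np

def candidate_slots(row_origin, row_length, slot_width, slot_length, heading_rad):
    """Place candidate slot centers side by side along the free row at
    slot-width spacing, each oriented along the reference heading."""
    long_axis = np.array([np.cos(heading_rad), np.sin(heading_rad)])  # slot long axis
    row_axis = np.array([-long_axis[1], long_axis[0]])                # along the row
    count = int(row_length // slot_width)
    return [
        {
            "center": np.asarray(row_origin, dtype=float) + (i + 0.5) * slot_width * row_axis,
            "heading": heading_rad,
            "size": (slot_width, slot_length),
        }
        for i in range(count)
    ]
```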
  • In some cases, the user may be dissatisfied with the parking direction of the reference vehicle. The electronic device therefore displays a second user interface that includes the reference vehicle and may also indicate its parking direction, and the parking direction of the reference vehicle is used as the reference parking direction.
  • In response to the user's third operation, where the third operation adjusts the reference parking direction, the adjusted parking direction is determined as the parking direction of the target vehicle. Based on the parking direction of the target vehicle, multiple candidate virtual parking spaces are determined in the parking space, so that the parking direction indicated by the multiple candidate virtual parking spaces is the parking direction of the target vehicle.
  • the multiple parked vehicles may be distributed on one side of the driving road of the target vehicle, or may be distributed on both sides of the driving road of the target vehicle.
  • In some embodiments, the target virtual parking space is determined based on the parking information of reference vehicles on both sides of the driving road of the target vehicle. That is, according to the above method, one reference vehicle is determined from the parked vehicles on each side of the driving road of the target vehicle. A parking space is likewise determined on each side of the driving road according to the above method, and then, based on the reference vehicles on both sides of the driving road, multiple candidate virtual parking spaces are respectively determined in the parking spaces on both sides of the driving road.
  • multiple virtual vehicle models are used to represent multiple candidate virtual parking spaces, or multiple candidate virtual parking spaces are represented by black rectangular boxes or other display methods.
  • the vehicle head orientations corresponding to the multiple candidate virtual parking spaces may also be displayed.
  • Candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parking space. If there is one candidate virtual parking space, it is directly used as the target virtual parking space. If there are multiple candidate virtual parking spaces, one of them is selected as the target virtual parking space.
  • the manner of determining the candidate virtual parking spaces based on the parking direction and the parking space of the reference vehicle refers to the above-mentioned first implementation manner, which will not be repeated here.
  • a candidate virtual parking space is selected from the plurality of candidate virtual parking spaces as a target virtual parking space and recommended to the user.
  • the multiple candidate virtual parking spaces are recommended to the user, and the user selects one candidate virtual parking space as the target virtual parking space.
  • For example, the distance between the current position of the target vehicle and each candidate virtual parking space can be considered, and the nearest candidate virtual parking space is recommended to the user as the target virtual parking space; of course, a candidate virtual parking space can also be selected and recommended to the user in other ways.
  • A fifth user interface is displayed, and the fifth user interface includes the recommended virtual parking space.
  • In response to the user's fourth operation, a third user interface is displayed, where the fourth operation indicates the user's confirmation that the recommended virtual parking space is used as the target virtual parking space.
  • Alternatively, a fifth user interface is displayed, and the fifth user interface includes the recommended virtual parking space.
  • In response to the user's fifth operation, a fourth user interface is displayed, where the fourth user interface includes the plurality of candidate virtual parking spaces and the fifth operation indicates that the user is not satisfied with the parking position of the recommended virtual parking space.
  • In response to the user's second operation, a third user interface is displayed, where the second operation is used to select the target virtual parking space from the plurality of candidate virtual parking spaces.
  • That is, when a candidate virtual parking space is selected from the multiple candidate virtual parking spaces and recommended to the user as the target virtual parking space, the user may directly accept it, in which case the recommended virtual parking space is used as the target virtual parking space.
  • the user may be dissatisfied with the parking position of the recommended virtual parking space. In this case, all the candidate virtual parking spaces need to be recommended to the user, and the user selects a candidate virtual parking space as the target virtual parking space.
  • In some embodiments, the second user interface further includes the parking space. In response to the user's sixth operation, where the sixth operation selects a location in the parking space as the parking position of the target vehicle, the target virtual parking space is determined based on the parking direction of the reference vehicle and the parking position of the target vehicle.
  • the user selects a location in the parking space as the parking location of the target vehicle, and then determines the target virtual parking space based on the parking direction of the reference vehicle and the parking location of the target vehicle.
  • the parking direction of the reference vehicle can be directly used as the parking direction of the target vehicle, and the target virtual parking space can be determined at the parking position of the target vehicle in the parking space , so that the parking direction indicated by the target virtual parking space is the parking direction of the target vehicle.
  • In some cases, the electronic device displays a second user interface that includes the reference vehicle and may also indicate its parking direction, and the parking direction of the reference vehicle is used as the reference parking direction.
  • In response to the user's third operation, where the third operation adjusts the reference parking direction, the adjusted parking direction is determined as the parking direction of the target vehicle.
  • the target virtual parking space is determined at the parking position of the target vehicle in the parking space, so that the parking direction indicated by the target virtual parking space is the parking direction of the target vehicle.
  • the above content is the implementation process of determining the target virtual parking space when there are parked vehicles around the target vehicle. In some cases, there may be no parked vehicles around the target vehicle.
  • In that case, the electronic device performs three-dimensional measurement of the parking space to determine the depth of the parking space. Then, based on the ratio between the depth of the parking space and the body length of the target vehicle, the parking direction of the target vehicle is determined; the parking position of the target vehicle in the parking space is also determined, and the target virtual parking space is then determined.
  • If the ratio between the depth of the parking space and the body length of the target vehicle is greater than a first ratio threshold, the body direction of the target vehicle is determined to be vertical relative to the road baseline. If the ratio is smaller than a second ratio threshold, the body direction is determined to be horizontal relative to the road baseline. If the ratio is less than the first ratio threshold but greater than the second ratio threshold, the body direction is determined to be oblique relative to the road baseline, and the oblique angle is the arcsine of the ratio of the parking-space depth to the body length of the target vehicle. A sketch of this rule follows the example thresholds below.
  • the first ratio threshold and the second ratio threshold are preset and can be adjusted according to different requirements.
  • the first ratio threshold is 0.9
  • the second ratio threshold is 0.7.
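  • A minimal sketch of the ratio rule above, using the example thresholds of 0.9 and 0.7; the function and variable names are illustrative assumptions:

```python
import math

def body_direction_from_depth(space_depth, body_length, hi=0.9, lo=0.7):
    """Map the depth/body-length ratio to a body direction relative to the
    road baseline, returning (label, angle in radians)."""
    ratio = space_depth / body_length
    if ratio > hi:
        return ("vertical", math.pi / 2)
    if ratio < lo:
        return ("horizontal", 0.0)
    # Between the two thresholds: oblique, at arcsin(depth / body length),
    # per the description above.
    return ("oblique", math.asin(ratio))
```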
  • When determining the parking position of the target vehicle in the parking space, the above method can be followed: the user selects a position in the parking space as the parking position of the target vehicle.
  • the electronic device can also refer to the above method to determine multiple candidate virtual parking spaces in the parking space according to the parking direction of the target vehicle, and the user selects a candidate virtual parking space as the target virtual parking space.
  • The manner in which the user selects a location in the parking space as the parking position of the target vehicle, and the manner in which the user selects one candidate virtual parking space from multiple candidates as the target virtual parking space, are described above and are not repeated here.
  • a display method for assisting parking is provided.
  • In the method, a first user interface is displayed, where the first user interface is used for displaying environment information around a target vehicle, the target vehicle is a vehicle to be parked, and the environment information includes parking information of one or more parked vehicles.
  • a second user interface is displayed, the second user interface including a reference vehicle, the reference vehicle being one of the one or more parked vehicles.
  • a third user interface is displayed, the third user interface includes a target virtual parking space, and the target virtual parking space is used to indicate a parking position and a parking direction of the target vehicle.
  • In some embodiments, the second user interface further includes a second vehicle, where the second vehicle is any vehicle among the one or more parked vehicles other than the reference vehicle, and the display manner of the reference vehicle is different from the display manner of the second vehicle.
  • the second user interface further includes an indicator, where the indicator is used to indicate the reference vehicle.
  • In some embodiments, displaying the third user interface includes: displaying a fourth user interface, the fourth user interface including a plurality of candidate virtual parking spaces; and displaying the third user interface in response to the user's second operation, where the target virtual parking space is one of the plurality of candidate virtual parking spaces.
  • the third user interface further displays a parking space, and the target virtual parking space is located in the parking space.
  • the first user interface includes one or more operation identifiers, and the one or more operation identifiers are in one-to-one correspondence with the one or more parked vehicles.
  • the environment information displayed on the first user interface is image information acquired by a camera or radar.
  • the environment information displayed on the first user interface is virtual environment information generated according to information acquired by sensors.
  • the third user interface further includes an icon for indicating the target vehicle.
  • the user's first operation includes any one of the user's touch, tap and slide actions on the first user interface.
  • a device for determining a virtual parking space has the function of realizing the behavior of the method for determining a virtual parking space in the above first aspect.
  • the device includes at least one module, and the at least one module is used to implement the method for determining a virtual parking space provided in the first aspect above.
  • a display device for assisting parking has the function of realizing the behavior of the display method for assisting parking in the second aspect above.
  • the device includes at least one module, and the at least one module is used to implement the display method for assisting parking provided in the second aspect above.
  • In a fifth aspect, an electronic device is provided, which includes a processor and a memory, where the memory is used to store a computer program for executing the method for determining a virtual parking space provided in the first aspect above.
  • the processor is configured to execute the computer program stored in the memory, so as to realize the virtual parking space determination method described in the first aspect above.
  • the electronic device may further include a communication bus, which is used to establish a connection between the processor and the memory.
  • In another aspect, an electronic device is provided, which includes a processor and a memory, where the memory is used to store a computer program for executing the display method for assisting parking provided in the second aspect above.
  • the processor is configured to execute the computer program stored in the memory, so as to realize the display method for assisting parking described in the second aspect above.
  • the electronic device may further include a communication bus, which is used to establish a connection between the processor and the memory.
  • a computer-readable storage medium is provided. Instructions are stored in the storage medium. When the instructions are run on a computer, the computer is made to execute the steps of the method for determining a virtual parking space in the first aspect above.
  • A computer-readable storage medium is provided, in which instructions are stored. When the instructions are run on a computer, the computer is made to execute the steps of the display method for assisting parking described in the second aspect above.
  • a computer program product containing instructions is provided, and when the instructions are run on a computer, the computer is made to execute the steps of the method for determining a virtual parking space described in the first aspect above.
  • a computer program is provided, and when the computer program is run on a computer, the computer is made to execute the steps of the method for determining a virtual parking space described in the first aspect above.
  • a computer program product containing instructions is provided, and when the instructions are run on a computer, the computer is made to execute the steps of the display method for assisting parking described in the second aspect above.
  • a computer program is provided, and when the computer program is run on a computer, the computer is made to execute the steps of the display method for assisting parking described in the second aspect above.
  • In the solution, a vehicle is selected from one or more parked vehicles around the target vehicle as a reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle. This ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking.
  • FIG. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 2 is a flow chart of a method for determining a virtual parking space provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a first user interface provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another first user interface provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a second user interface provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of key points and key lines provided by an embodiment of the present application.
  • FIG. 7 is a flow chart of determining the parking direction of the first vehicle provided by an embodiment of the present application.
  • FIG. 8 is another flow chart of determining the parking direction of the first vehicle provided by an embodiment of the present application.
  • FIG. 9 is a flow chart of determining the semantic category of each region provided by an embodiment of the present application.
  • FIG. 10 is another flow chart of determining the semantic category of each region provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a user adjusting a parking direction provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of determining multiple candidate virtual parking spaces provided by an embodiment of the present application.
  • FIG. 13 is another schematic diagram of determining multiple candidate virtual parking spaces provided by an embodiment of the present application.
  • FIG. 14 is a schematic diagram of a target virtual parking space determined based on a reference vehicle provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of another target virtual parking space determined based on a reference vehicle provided by an embodiment of the present application.
  • FIG. 16 is another schematic diagram of determining multiple candidate virtual parking spaces provided by an embodiment of the present application.
  • FIG. 17 is another schematic diagram of determining multiple candidate virtual parking spaces provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram showing the head orientation of a virtual parking space provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of a user selecting a target virtual parking space provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of a user selecting a parking position of a target vehicle provided by an embodiment of the present application.
  • FIG. 21 is another schematic diagram of a user selecting a parking position of a target vehicle provided by an embodiment of the present application.
  • FIG. 22 is a flow chart of a display method for assisting parking provided by an embodiment of the present application.
  • FIG. 23 is a block diagram of a device for determining a virtual parking space provided by an embodiment of the present application.
  • FIG. 24 is a block diagram of a display device for assisting parking provided by an embodiment of the present application.
  • FIG. 25 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 26 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • FIG. 27 is a schematic structural diagram of another terminal device provided by an embodiment of the present application.
  • Virtual parking space: a fictitious parking space created by the vehicle during automatic parking. If there is an interactive interface, the virtual parking space is displayed on the interface. When the vehicle finally parks into the virtual parking space, the vehicle's position is consistent with the corresponding position in the real parking area.
  • Parking direction: the parking direction includes the head orientation and the vehicle body direction.
  • Body direction refers to the direction of the body relative to the reference object.
  • the reference objects include road baselines, reference vehicles or other reference objects.
  • The head orientation includes facing toward the target vehicle and facing away from the target vehicle, and the vehicle body direction includes eight directions: due east, due south, due west, due north, southeast, northeast, southwest, and northwest.
  • The vehicle body angle refers to the angle between the vehicle body and the reference object.
  • Marked parking space: a parking space marked with parking lines on the ground, or with obvious hints (such as a distinct block color, bricks with different textures, three-dimensional limiting equipment, etc.).
  • Non-marked parking area: a parking area without parking space lines or parking space warning signs on the ground.
  • Parking space search: the process by which a vehicle searches for a parking space before parking into it.
  • The virtual parking space determination method provided by the embodiments of the present application can be applied to various scenarios, such as parking space recommendation, automatic parking assist (APA), remote parking assist (RPA), automated valet parking (AVP), memory parking (home zone parking, HZP), and other assisted or automated driving systems. Moreover, it is applicable both to marked parking spaces and to non-marked parking areas. For non-marked parking areas, such as parking lots without markings, the entrances and exits of hotels and office buildings, and temporary parking on both sides of roads or aisles, the technical solution provided by this application can automatically generate a virtual parking space based on the parking information of vehicles in the parking area and the spatial information of the parking area, eliminating the need for the user to adjust the position of the virtual parking space multiple times.
  • For marked parking spaces, the embodiments of the present application need not be constrained by the indication of the marked parking space, so that the target vehicle can be parked in the parking area required by the user. For example, when the marked parking space in which the user needs to park is encroached on by a vehicle on the adjacent side, the target vehicle can be parked in parallel with the vehicles parked on one side by borrowing space from the other side.
  • APA is the most common parking assistance system in daily life.
  • The APA system uses ultrasonic radar to obtain the surrounding environment information of the target vehicle, helps the user search the parking space for a virtual parking space large enough to park the target vehicle, and, after the user sends a parking command, realizes automatic parking based on the virtual parking space.
  • RPA is developed on the basis of APA, and it is mainly used in narrow parking spaces to solve the problem that the door is difficult to open after parking.
  • the user first turns on the RPA system in the car, the RPA system searches for and determines a virtual parking space, the user sends a parking command using a remote control device outside the car, and the RPA system realizes automatic parking based on the virtual parking space.
  • AVP is a system that searches and determines a virtual parking space, realizes automatic parking based on the virtual parking space, and then sends the location information of the parking position to the user.
  • HZP means that the target vehicle first drives to a fixed parking space, a virtual parking space is determined in the fixed parking space, and the target vehicle is then parked in it. Before automatic parking, however, the user needs to record a fixed driving path and a fixed parking space so that the target vehicle can "learn" the process. After the learning is completed, the target vehicle can start from a point beside the fixed driving path and automatically park in or out.
  • the executor of the embodiment of the present application is a vehicle-mounted terminal. That is, after the vehicle-mounted terminal determines the virtual parking space according to the method provided in the embodiment of the present application, automatic parking of the target vehicle can be realized.
  • the execution subject of the embodiment of the present application is the vehicle terminal or the parking lot management device.
  • After the parking lot management device determines the virtual parking space according to the method provided in the embodiments of this application, it sends the relevant information of the virtual parking space to the vehicle-mounted terminal, and the vehicle-mounted terminal realizes automatic parking of the target vehicle.
  • FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device includes an environment information acquisition module, a calculation module and a human-computer interaction module.
  • the environment information acquiring module is used to acquire the environment information around the target vehicle, such as the parking information of one or more parked vehicles.
  • The calculation module and the human-computer interaction module cooperate with each other to determine the target virtual parking space, that is, the parking position and parking direction of the target vehicle.
  • the target virtual parking space is displayed through the human-computer interaction module.
  • An electronic device is any electronic product that can interact with a user in one or more ways, such as through a keyboard, a touchpad, a touch screen, a remote control, voice interaction, or a handwriting device, for example a personal computer (PC), a mobile phone, a smartphone, a personal digital assistant (PDA), a wearable device, a handheld computer (pocket PC, PPC), a tablet computer, a smart car, and so on.
  • FIG. 2 is a flow chart of a method for determining a virtual parking space provided by an embodiment of the present application; the method can be applied to the above-mentioned electronic device. Referring to FIG. 2, the method includes the following steps.
  • Step 201: Obtain the environment information around the target vehicle.
  • the target vehicle is a vehicle to be parked, and the environment information around the target vehicle includes parking information of one or more parked vehicles.
  • The environment information around the target vehicle includes at least one of visual data and radar data, and the radar data includes ultrasonic radar data, laser radar data, and millimeter-wave radar data. That is, the technical solution provided by this application can work with any of these data types, which broadens its scope of application.
  • the vehicle-mounted surround-view camera is used to collect the actual environment around the target vehicle to obtain visual data around the target vehicle, such as a surround-view image.
  • Sensors such as ultrasonic radar, laser radar, and millimeter-wave radar are used to collect the actual environment around the target vehicle to obtain radar data around the target vehicle.
  • the surround view image includes parking information of the one or more parked vehicles.
  • the multiple parked vehicles include two or more parked vehicles.
  • Step 202: Determine a reference vehicle based on the parking information of the one or more parked vehicles, where the reference vehicle is one of the one or more parked vehicles.
  • In some embodiments, a first user interface is displayed, where the first user interface includes the parking position and parking direction of the one or more parked vehicles, and the parking position and parking direction of the one or more parked vehicles are determined based on the parking information of the one or more parked vehicles.
  • in response to a first operation of the user, a second user interface is displayed; the second user interface includes the reference vehicle, and the first operation is used to instruct selection of the reference vehicle from the one or more parked vehicles.
  • the user triggers the first operation based on the first user interface.
  • when the electronic device detects the user's first operation, it displays a second user interface in response to that operation.
  • the second user interface includes the reference vehicle, so that the reference vehicle can be determined from the one or more parked vehicles.
  • the user can learn the environment information around the target vehicle through the first user interface and, referring to that information, select a reference vehicle from the one or more parked vehicles, so that the reference vehicle finally selected meets the user's actual needs, thereby satisfying the user's personalized needs.
  • the first user interface can take many forms, and the way the user selects the reference vehicle based on the first user interface differs accordingly; each form is introduced below.
  • in one form, a surround-view image around the target vehicle and a vehicle selection area are displayed in the first user interface; the vehicle selection area includes one or more operation identifiers, and the one or more operation identifiers correspond one-to-one with the one or more parked vehicles.
  • a first operation by the user on any one of the one or more operation identifiers is detected, and a second user interface is displayed in response to the first operation by the user.
  • the user triggers the first operation on any one of the one or more operation identifiers included in the vehicle selection area.
  • the parked vehicle corresponding to that operation identifier is determined as the reference vehicle, and the second user interface is displayed.
  • since the surround-view image around the target vehicle depicts the real environment, the user can learn the environment information around the target vehicle more intuitively.
  • the target vehicle is also included in the surround view image.
  • the first user interface further includes an icon for indicating the target vehicle.
  • the second user interface may also include the target vehicle, and the second user interface may also include an icon for indicating the target vehicle.
  • when a user interface displays both a parked vehicle and the target vehicle, an icon indicating the target vehicle can be displayed in the user interface in the above-mentioned manner, so as to distinguish the parked vehicle from the target vehicle. Of course, the parked vehicle can also be distinguished from the target vehicle in other ways.
  • the aforementioned surround-view image is a two-dimensional surround-view image or a three-dimensional surround-view image.
  • the first user interface is shown in FIG. 3 and includes two areas: an area for displaying the surround-view image and a vehicle selection area.
  • the surround-view image includes the target vehicle and two parked vehicles, and a triangle icon is displayed near the rear of the target vehicle.
  • each operation identifier included in the vehicle selection area is represented by a license plate number.
  • the user selects a reference vehicle by selecting any license plate number, for example, in Figure 3, the user selects the vehicle with the license plate number "Shaan A ⁇ xxx12" as the reference vehicle.
  • in FIG. 3, the tail outline of each vehicle is rectangular and the head outline is trapezoidal; that is, the position of the rectangular outline is the rear of the vehicle, and the position of the trapezoidal outline is the front of the vehicle.
  • in addition, the part of the target vehicle close to the triangle icon is the rear of the vehicle, and the part far from the triangle icon is the front of the vehicle.
  • the rear position and the front position of the vehicles mentioned below are the same as those described here, and will not be repeated hereafter.
  • in another form, the parking positions and parking directions of the one or more parked vehicles are determined, and one or more virtual vehicle models are displayed in the first user interface according to those parking positions and parking directions; the one or more virtual vehicle models correspond one-to-one with the one or more parked vehicles.
  • a user's first operation on any one of the one or more virtual vehicle models is detected, and a second user interface is displayed in response to the user's first operation.
  • the user triggers a first operation on any one of the one or more virtual vehicle models.
  • the parked vehicle corresponding to that virtual vehicle model is determined as the reference vehicle, and the second user interface is displayed.
  • in this form, the user operates the virtual vehicle models directly, so a separate vehicle selection area is unnecessary and the user does not need to work out which operation identifier in a vehicle selection area corresponds to which parked vehicle, which improves the efficiency of determining the reference vehicle.
  • the first user interface further includes a virtual vehicle model corresponding to the target vehicle, and the first user interface further includes an icon for indicating the target vehicle.
  • the second user interface may also include a virtual vehicle model corresponding to the target vehicle, and the second user interface may also include an icon for indicating the target vehicle.
  • for either the first user interface or the second user interface, when the interface includes both a virtual vehicle model corresponding to a parked vehicle and a virtual vehicle model corresponding to the target vehicle, an icon indicating the target vehicle can be displayed in the manner described above to distinguish the parked vehicle from the target vehicle.
  • the virtual vehicle model corresponding to the target vehicle is different from the virtual vehicle model corresponding to the parked vehicle.
  • the aforementioned virtual vehicle model may be a two-dimensional virtual vehicle model, or may be a three-dimensional virtual vehicle model.
  • the first user interface is shown in FIG. 4, which includes three virtual vehicle models: one corresponding to the target vehicle and two corresponding to parked vehicles. A triangle icon is displayed at the rear of the virtual vehicle model corresponding to the target vehicle.
  • the user clicks either of the two virtual vehicle models corresponding to the parked vehicles to determine the corresponding parked vehicle as the reference vehicle.
  • the user's first operation includes any one of the user's touch, tap and slide actions on the first user interface.
  • the user selects a reference vehicle by touching the virtual vehicle model, or taps the virtual vehicle model to select the reference vehicle, or slides the virtual vehicle model to select the reference vehicle.
  • similarly, the user selects the reference vehicle by touching the operation identifier, or taps the operation identifier to select the reference vehicle, or slides the operation identifier to select the reference vehicle, which is not limited in this embodiment of the present application.
  • the second user interface only includes the reference vehicle, that is, the second user interface does not include other parked vehicles.
  • the second user interface includes not only the reference vehicle, but also other parked vehicles.
  • the second user interface further includes a second vehicle, and the second vehicle is any vehicle in the one or more parked vehicles except the reference vehicle, and the display manner of the reference vehicle is different from the display manner of the second vehicle.
  • for example, the display color of the reference vehicle differs from that of the other parked vehicles, or the outline of the reference vehicle has a different thickness than the outlines of the other parked vehicles, or the background texture of the reference vehicle differs from that of the other parked vehicles, and so on.
  • the user can visually distinguish the reference vehicle included in the second user interface from other parked vehicles.
  • the second user interface further includes an indicator, which is used to indicate the reference vehicle.
  • FIG. 5 is a schematic diagram of a second user interface provided by an embodiment of the present application.
  • the second user interface includes the reference vehicle and a second vehicle: the reference vehicle is on the right and the second vehicle is on the left. In FIG. 5, the contour line of the reference vehicle differs in thickness from that of the second vehicle, and an "L"-shaped indicator is additionally displayed around the reference vehicle to indicate it.
  • the electronic device inputs the surround-view image to the vehicle detection model to obtain the parking position of the first vehicle and a partial image of the first vehicle, where the partial image is the image area in which the first vehicle is located within the surround-view image; the partial image serves as the parking information of the first vehicle. Afterwards, the parking direction of the first vehicle is determined according to the following steps (1)-(2).
  • (1) the partial image of the first vehicle is input to the key information detection model to obtain the attribute information of multiple key points and the attribute information of multiple key lines of the first vehicle output by the key information detection model.
  • the attribute information of the key point includes at least one of the position of the key point, the category of the key point, and the visibility of the key point, and the visibility of the key point is used to indicate whether the corresponding key point is blocked.
  • the attribute information of the key line includes at least one of the position of the key line's center point, the visibility of the key line, the inclination of the key line, and the length of the key line; the visibility of the key line is used to indicate whether the corresponding key line is blocked.
  • the key points include the center points of the four wheels, the center point of the vehicle body, the center point of the car logo, the center points of the two taillights, and so on.
  • the key line of the first vehicle includes the vertical centerline at the position where the license plate is installed at the front and rear of the vehicle, the vertical centerline between the logo and the roof of the vehicle, and the like.
  • for example, the four wheel center points and the body center point of the first vehicle are taken as the key points of the first vehicle, and the vertical centerlines at the front and rear license plate positions of the first vehicle are taken as the key lines of the first vehicle.
  • alternatively, the center point of the logo of the first vehicle and the center points of the two taillights are taken as the key points of the first vehicle, and the vertical centerline between the logo and the roof of the first vehicle is taken as the key line of the first vehicle.
  • the key points include the center points of the four wheels of the vehicle and the center point of the vehicle body
  • the key lines include the vertical center lines at the positions where the license plate is installed at the front and rear of the vehicle.
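  • the attribute information above maps naturally onto simple records. The following is a minimal Python sketch of such records; the type and field names are illustrative assumptions, not identifiers from this application.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class KeyPoint:
    position: Tuple[float, float]  # pixel coordinates in the partial image
    category: str                  # e.g. "wheel_center", "body_center", "logo_center"
    visible: bool                  # False when the key point is blocked (occluded)

@dataclass
class KeyLine:
    center: Tuple[float, float]    # position of the key line's center point
    inclination: float             # inclination of the key line, in radians
    length: float                  # length of the key line, in pixels
    visible: bool                  # False when the key line is blocked (occluded)

# Example: one of the four wheel center points used as a key point.
wheel = KeyPoint(position=(412.0, 305.5), category="wheel_center", visible=True)
```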
  • the key information detection model can also output basic attribute information of the first vehicle, such as body size, model style, color, lamp status, and vehicle door status.
  • the vehicle detection model and the key information detection model are obtained through pre-training; the embodiment of the present application does not limit the structure of these two models, which may be a neural network or other structures.
  • (2) the attribute information of the multiple key points and the attribute information of the multiple key lines of the first vehicle are input into the pose estimation model to obtain the parking direction of the first vehicle, output by the pose estimation model, in the image coordinate system of the partial image; this parking direction is then converted to the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
  • the parking direction includes the heading of the vehicle and the direction of the vehicle body.
  • in some cases, the parking direction output by the pose estimation model includes not only the heading of the vehicle and the direction of the vehicle body, but also the angle of the vehicle body. Since the external parameters of the vehicle-mounted surround-view camera have a certain impact on the vehicle body angle, after the pose estimation model outputs the vehicle body angle, external parameter compensation needs to be performed on the basis of the vehicle body angle.
  • specifically, a compensation angle is determined; the compensation angle is the angle between the imaging plane of the vehicle-mounted surround-view camera and the line connecting the focal point of the camera to the center point of the first vehicle.
  • the body angle output by the pose estimation model is added to the compensation angle to obtain the body angle of the first vehicle in the image coordinate system of the partial image.
  • the parking direction of the first vehicle in the image coordinate system of the partial image is transformed into the body coordinate system of the target vehicle, so as to obtain the body direction of the first vehicle.
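  • to make the compensation and coordinate conversion concrete, here is a minimal Python sketch. The pinhole-camera reading of the compensation angle and the names (fx, cx, cam_yaw_in_body) are assumptions for illustration; the actual geometry depends on the camera model and extrinsics used.

```python
import math

def compensation_angle(u_center: float, cx: float, fx: float) -> float:
    # Horizontal angle between the optical axis and the ray from the camera's
    # focal point to the vehicle center imaged at pixel column u_center. The
    # angle between that ray and the imaging plane is this angle's complement,
    # so the sign/offset convention must match the pose model's output.
    return math.atan2(u_center - cx, fx)

def body_angle_in_target_frame(body_angle_img: float, u_center: float,
                               cx: float, fx: float,
                               cam_yaw_in_body: float) -> float:
    # Add the compensation angle to the model's body angle, then rotate the
    # result from the image/camera frame into the target vehicle's body frame.
    compensated = body_angle_img + compensation_angle(u_center, cx, fx)
    return (compensated + cam_yaw_in_body) % (2.0 * math.pi)
```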
  • the pose estimation model is obtained through pre-training, and the embodiment of the present application does not limit the structure of the pose estimation model, and the structure of the pose estimation model may be a neural network or other structures.
  • the embodiment of the present application determines the parking direction of a vehicle from the attribute information of key points and key lines. For the same vehicle, it is relatively easy to obtain attribute information of different key points and key lines through simulation data, CAD models, and the like, so a large number of samples can be obtained; training the key information detection model and the pose estimation model on these samples improves the accuracy and robustness of determining the parking direction of the vehicle.
  • in addition, the embodiment of the present application can determine the parking direction of the first vehicle based on multiple surround-view images; that is, multiple surround-view images are fused to determine the parking direction of the first vehicle.
  • partial images corresponding to the first vehicle are respectively determined from multiple surround-view images to obtain multiple partial images.
  • the multiple partial images are respectively input to the key information detection model to obtain attribute information of multiple key points and attribute information of multiple key lines of the first vehicle in each partial image.
  • the attribute information of the multiple key points and the attribute information of the multiple key lines of the first vehicle in each partial image are respectively input into the pose estimation model to obtain multiple initial parking directions of the first vehicle output by the pose estimation model; the multiple initial parking directions correspond one-to-one with the multiple partial images.
  • the multiple initial parking directions are averaged to obtain the parking direction of the first vehicle.
  • alternatively, the confidence levels corresponding to the multiple initial parking directions are determined, and the multiple initial parking directions are weighted by their confidence levels and summed to obtain the parking direction of the first vehicle (see the sketch below).
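  • a minimal sketch of this fusion step, under the assumption that directions are angles in radians. Using a circular (vector) mean avoids wrap-around errors near ±180°; that choice is ours, not stated in the application, and plain averaging is the equal-confidence special case. The function name is illustrative.

```python
import math
from typing import Sequence

def fuse_parking_directions(angles: Sequence[float],
                            confidences: Sequence[float]) -> float:
    # Confidence-weighted circular mean of the initial parking directions.
    sx = sum(c * math.cos(a) for a, c in zip(angles, confidences))
    sy = sum(c * math.sin(a) for a, c in zip(angles, confidences))
    return math.atan2(sy, sx)

# Plain averaging corresponds to equal confidences:
# fused = fuse_parking_directions(angles, [1.0] * len(angles))
```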
  • partial images corresponding to the first vehicle are respectively determined from multiple surround-view images to obtain multiple partial images.
  • the multiple partial images are input to the key information detection model to obtain attribute information of multiple key points and attribute information of multiple key lines of the first vehicle.
  • the attribute information of the multiple key points and the attribute information of the multiple key lines of the first vehicle are input into the pose estimation model to obtain the parking direction of the first vehicle output by the pose estimation model.
  • FIG. 7 is a flow chart of determining the parking direction of the first vehicle according to an embodiment of the present application.
  • the parking information of the first vehicle is input into the key information detection model to obtain attribute information of multiple key points and attribute information of multiple key lines of the first vehicle.
  • the attribute information of the multiple key points and the attribute information of the multiple key lines are input into the pose estimation model to obtain the parking direction of the first vehicle in the image coordinate system of the partial image, where the parking direction includes the heading of the vehicle head, the body direction, and the body angle.
  • the compensation angle is added on the basis of the body angle, and the head orientation, body direction, and compensated body angle of the first vehicle in the image coordinate system of the partial image are converted to the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
  • the parking position and the parking direction of the first vehicle may also be determined in other ways.
  • alternatively, the surround-view image is used as the input of the vehicle detection and orientation estimation model to obtain the parking position of the first vehicle and the parking direction of the first vehicle in the image coordinate system of the surround-view image, where the parking direction includes the head orientation, the body direction, and the body angle.
  • the compensation angle is added on the basis of the vehicle body angle, and the head orientation, body direction, and compensated body angle of the first vehicle in the image coordinate system of the surround-view image are converted to the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
  • the parking position and the parking direction of the one or more parked vehicles are determined based on the parking information of the one or more parked vehicles. Based on the parking positions and parking directions of the one or more parked vehicles, a preset model is used to determine a reference vehicle.
  • the parking position and the parking direction of the one or more parked vehicles are determined.
  • a parking space is determined based on the parking positions of the one or more parked vehicles; the parking space is the area in the parking area other than the parking positions of the one or more parked vehicles. The distance between the target vehicle and the parking space is determined, the direction of travel of the target vehicle is determined, and then the distance between the target vehicle and the parking space, the direction of travel of the target vehicle, and the parking directions of the one or more parked vehicles are input into the preset model to determine the reference vehicle.
  • the preset model is obtained by training based on multiple sample vehicles in advance, for example, by means of reinforcement learning.
  • for the implementation process of determining the parking positions and parking directions of the one or more parked vehicles, refer to the relevant description in the above-mentioned first implementation, which is not repeated here.
  • an implementation manner of determining a parking space based on the parking positions of the one or more parked vehicles will be described below and is therefore not discussed here.
  • the embodiment of the present application can not only determine the reference vehicle through a preset model, but also determine the reference vehicle according to the parking attitude rule. That is, based on the parking information of the one or more parked vehicles, the parking position and the parking direction of the one or more parked vehicles are determined. Based on the parking positions and parking orientations of the one or more parked vehicles, a reference vehicle is determined using a parking attitude rule.
  • the parking attitude rule refers to a rule for determining the reference vehicle according to the priority of the vehicle body direction.
  • for example, the priority of the vehicle body direction from high to low is: vertical, horizontal, oblique. That is, if among the one or more parked vehicles there is a parked vehicle whose body direction is vertical, that parked vehicle is determined as the reference vehicle; if there is no parked vehicle whose body direction is vertical but there is one whose body direction is horizontal, that parked vehicle is determined as the reference vehicle; otherwise, a parked vehicle whose body direction is oblique is determined as the reference vehicle (a minimal sketch of this rule follows).
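  • a minimal Python sketch of the priority rule in this example; ParkedVehicle, its body_direction values, and the tie-breaking choice are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class ParkedVehicle:
    body_direction: str  # "vertical", "horizontal", or "oblique"

PRIORITY = ("vertical", "horizontal", "oblique")  # high to low, per this example

def select_reference_vehicle(parked: Iterable[ParkedVehicle]) -> Optional[ParkedVehicle]:
    vehicles = list(parked)
    for direction in PRIORITY:
        candidates = [v for v in vehicles if v.body_direction == direction]
        if candidates:
            # Ties could instead be broken by, e.g., distance to the target vehicle.
            return candidates[0]
    return None
```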
  • the priority order of the vehicle body directions is not limited to the above order, and may be other orders in other examples, which is not limited in this embodiment of the present application.
  • alternatively, a vehicle may be selected at random as the reference vehicle, or selected according to other rules, for example the vehicle closest to the target vehicle; this embodiment of the application does not limit this.
  • in this way, after the parking positions and parking directions of the one or more parked vehicles are determined, the reference vehicle can be determined automatically using the preset model or the parking attitude rule, which avoids the user having to select a reference vehicle manually and thus simplifies the user's operation.
  • the parking attitude rule may also refer to a rule for determining the reference vehicle according to the number of occurrences of each vehicle body direction. For example, based on the body directions of the one or more parked vehicles, the number of occurrences of each body direction is counted, and a vehicle is selected as the reference vehicle from the parked vehicles having the most common body direction.
  • Step 203 Determine a target virtual parking space based on the parking information of the reference vehicle, where the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
  • the parking direction of the target vehicle includes the head orientation and the direction of the vehicle body of the target vehicle.
  • the body direction of the target vehicle is the direction of the body of the target vehicle relative to a reference object, and the reference object includes a road baseline, a reference vehicle or other reference objects.
  • the direction of the body of the target vehicle is parallel, perpendicular, or inclined to the body of the reference vehicle.
  • the target virtual parking space is determined according to the following steps (1)-(3).
  • the ground area in the surround-view image is extracted; the feature of each of the multiple pixels included in the ground area is extracted; based on the features of the multiple pixels, the pixels are clustered to obtain multiple regions; a parking area is determined from the multiple regions; and a parking space in the parking area is determined based on the parking information of the one or more parked vehicles.
  • the surround view image is used as the input of the ground segmentation model to obtain the ground area output by the ground segmentation model.
  • the ground area is used as an input of the feature extraction model to obtain features of a plurality of pixel points included in the ground area output by the feature extraction model.
  • based on the features of the multiple pixel points, the pixel points are clustered to obtain multiple regions; the region feature corresponding to each region is determined, and the semantic category of each region is determined based on the region features of the multiple regions.
  • if one of the multiple regions has the parking semantic category, that region is determined as the parking area, and the parking space is determined from the parking area based on the parking information of the one or more parked vehicles. If none of the multiple regions has the parking semantic category, the parking area is determined from the multiple regions based on their region features and semantic categories, and the parking space is then determined from the parking area based on the parking information of the one or more parked vehicles.
  • the ground segmentation model and the feature extraction model are trained in advance; the embodiment of the present application does not limit the structure of these two models, which may be a neural network or other structures.
  • the ground area includes a parking area, a road area, a manhole cover area, a lawn area, and so on. Clustering the multiple pixel points based on their features refers to grouping pixel points whose features are close to one another into one region, thereby obtaining multiple regions.
  • for example, the features of all pixels included in a region are averaged to obtain the region feature corresponding to the region.
  • alternatively, the features of all pixels included in a region are fused to obtain the region feature corresponding to the region; for example, the features of all pixels in the region are combined into a matrix, and the matrix is used as the region feature.
  • the implementation of determining the semantic category of each region based on the region features includes: for each region, computing the distance between the region's feature and each region feature in the stored correspondence between region features and semantic categories, and determining the semantic category whose region feature is closest to the region's feature as the semantic category of the region (see the sketch below).
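  • a minimal Python sketch of the region-feature matching just described, assuming each region's feature is the mean of its pixel features and that the stored correspondence is a feature matrix plus a category list; the clustering step that produces the regions is omitted.

```python
import numpy as np

def region_feature(pixel_features: np.ndarray) -> np.ndarray:
    # pixel_features: (num_pixels, feature_dim) array for one clustered region.
    return pixel_features.mean(axis=0)

def semantic_category(region_feat: np.ndarray,
                      stored_feats: np.ndarray,   # (num_entries, feature_dim)
                      stored_categories: list) -> str:
    # The nearest stored region feature decides the region's semantic category.
    dists = np.linalg.norm(stored_feats - region_feat, axis=1)
    return stored_categories[int(np.argmin(dists))]
```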
  • the embodiment of the present application can perform multi-frame fusion through multiple surround-view images. That is, for the plurality of surround-view images, the ground area in each surround-view image is determined according to the above method, so as to obtain multiple ground areas. Then, the overlapping areas among the plurality of ground areas are acquired, and then, the features of each pixel in the overlapping areas are extracted according to the above method, and clustering is performed to determine the parking space.
  • FIG. 9 is a flow chart of determining the semantic category of each region provided by the embodiment of the present application.
  • the surround view image is used as the input of the ground segmentation model to obtain the ground area output by the ground segmentation model.
  • the ground area is used as the input of the feature extraction model to obtain the features of multiple pixel points output by the feature extraction model.
  • multiple regions are obtained by feature clustering of the multiple pixel points. The region feature of each region is obtained and matched against the region features in the stored correspondence between region features and semantic categories to determine the semantic category of each region.
  • the above determination of the semantic category of each region is only an example, and in practical applications, the semantic category of each region may also be determined in other ways.
  • the surround-view image is used as the input of a semantic segmentation model to obtain the semantic category of each region in the surround-view image output by the semantic segmentation model.
  • the regions included in the surround-view image include the multiple regions into which the above-mentioned ground area is divided.
  • the implementation of determining the parking area from the multiple regions based on their region features and semantic categories, and of determining the parking space from the parking area based on the parking information of the one or more parked vehicles, includes: based on the semantic categories of the multiple regions, selecting from them the region whose semantic category is the road category but whose region feature is farthest from the road feature; determining the parking space in the selected region based on the parking information of the one or more parked vehicles; if the space in the selected region is enough to park the target vehicle, determining the selected region as the parking area; if it is not enough, selecting, from the remaining regions whose semantic category is the road category, the region whose region feature is farthest from the road feature, and returning to the step of determining the parking space in the selected region, until a parking area sufficient to park the target vehicle is determined. If there is no parking area
  • the implementation of determining the parking space from the parking area includes: masking the positions of the one or more parked vehicles in the parking area to obtain a first berth area; detecting obstacles in the first berth area and masking the areas occupied by the obstacles in the first berth area to obtain a second berth area; performing regular-quadrilateral processing on the second berth area to obtain a third berth area; and determining the space where the third berth area is located as the parking space.
  • specifically, the one or more parked vehicles are projected into the parking area, and their projection areas in the parking area are masked to obtain the first berth area; obstacles in the first berth area are detected and projected into the first berth area, and the projection areas of the obstacles are masked to obtain the second berth area (see the sketch below).
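  • a minimal Python sketch of these masking steps on a bird's-eye-view boolean occupancy grid; the grid representation and names are assumptions, and the regular-quadrilateral fitting that yields the third berth area is left as a placeholder comment.

```python
import numpy as np

def second_berth_area(parking_area: np.ndarray,   # bool grid, True = free ground
                      vehicle_masks: list,        # bool grids of parked-vehicle projections
                      obstacle_masks: list) -> np.ndarray:
    first = parking_area.copy()
    for m in vehicle_masks:      # mask the projection areas of parked vehicles
        first &= ~m
    second = first.copy()
    for m in obstacle_masks:     # mask the projection areas of detected obstacles
        second &= ~m
    # Regular-quadrilateral processing of `second` would produce the third
    # berth area, whose space is then taken as the parking space.
    return second
```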
  • a plurality of candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and a parking space, and a target virtual parking space is determined from the plurality of candidate virtual parking spaces in response to a second user operation.
  • a plurality of candidate virtual parking spaces are determined, and a fourth user interface is displayed, where the fourth user interface includes the plurality of candidate virtual parking spaces.
  • a third user interface is displayed, and the third user interface includes the target virtual parking space.
  • the fourth user interface is displayed.
  • the user triggers a second operation on the fourth user interface to determine a target virtual parking space from the plurality of candidate virtual parking spaces.
  • the third user interface further displays a parking space, and the target virtual parking space is located in the parking space.
  • the implementation process of determining multiple candidate virtual parking spaces includes: taking the parking direction of the reference vehicle as the parking direction of the target vehicle, and determining multiple candidate virtual parking spaces in the parking space, so that the parking direction indicated by the multiple candidate virtual parking spaces is the parking direction of the target vehicle. That is, the parking direction of the reference vehicle is directly used as the parking direction of the target vehicle, and then multiple candidate virtual parking spaces are determined in the parking space.
  • the user may, however, be dissatisfied with the parking direction of the reference vehicle. In that case the electronic device displays a second user interface that includes the reference vehicle and can also indicate its parking direction, and the parking direction of the reference vehicle is used as the reference parking direction.
  • in response to a third operation of the user, which is used to adjust the reference parking direction, the adjusted parking direction is determined as the parking direction of the target vehicle. Based on the parking direction of the target vehicle, multiple candidate virtual parking spaces are determined in the parking space, so that the parking direction indicated by the multiple candidate virtual parking spaces is the parking direction of the target vehicle.
  • there are many ways of determining the multiple candidate virtual parking spaces in the parking space; for example, the candidate virtual parking spaces are arranged side by side starting from the side of the parking space close to the reference vehicle (a sketch follows the examples below).
  • multiple candidate virtual parking spaces are arranged side by side from right to left in the parking space.
  • multiple candidate virtual parking spaces are arranged side by side from left to right in the parking space.
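  • a minimal Python sketch of the side-by-side layout, using a 1-D parameterization along the parking space starting from the side nearest the reference vehicle; the Slot record, the gap value, and the metre units are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Slot:
    start: float      # offset along the parking space from the reference side, metres
    width: float      # slot width, metres
    direction: float  # parking direction of the target vehicle, radians

def candidate_slots(space_length: float, slot_width: float,
                    direction: float, gap: float = 0.3) -> List[Slot]:
    slots: List[Slot] = []
    offset = 0.0
    while offset + slot_width <= space_length:
        slots.append(Slot(offset, slot_width, direction))
        offset += slot_width + gap   # the next slot sits beside the previous one
    return slots
```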
  • the user's second operation includes any one of the user's touch, tap and slide actions on the fourth user interface.
  • the user determines the target virtual parking space by touching the candidate virtual parking space, or taps the candidate virtual parking space to determine the target virtual parking space, or slides the candidate virtual parking space to determine the target virtual parking space, which is not limited in this embodiment of the present application.
  • the third operation of the user includes any one of clicking and dragging actions of the user on the second user interface.
  • the user adjusts the parking direction by clicking on the candidate virtual parking space, or dragging the candidate virtual parking space to adjust the parking direction, which is not limited in this embodiment of the present application.
  • the second user interface displayed by the electronic device is shown in the left figure in Figure 11.
  • the second user interface includes three vehicles: the target vehicle, the reference vehicle, and another parked vehicle. A triangle icon is displayed near the rear of the target vehicle, an "L"-shaped indicator is displayed around the reference vehicle, and the second user interface also indicates the parking direction of the reference vehicle. If the user is not satisfied with the parking direction of the reference vehicle, that direction is used as the reference parking direction; the user can adjust it along the direction of the arrow in the left figure of FIG. 11, and the adjusted direction is determined as the parking direction of the target vehicle. Afterwards, multiple candidate virtual parking spaces are determined according to the above method, and the parking direction of the target virtual parking space selected from them (the virtual vehicle model marked 1) is shown in the right figure of FIG. 11.
  • the electronic device can determine multiple candidate virtual parking spaces in various ways.
  • the left figure of FIG. 12 includes three vehicles: the target vehicle, the reference vehicle, and another parked vehicle; a triangle icon is displayed near the rear of the target vehicle, and an "L"-shaped indicator is displayed around the reference vehicle. In the parking space, five virtual vehicle models are arranged side by side, corresponding one-to-one with five candidate virtual parking spaces (black rectangular frames), as shown in the right figure of FIG. 12.
  • when a different reference vehicle is determined, the finally determined target virtual parking space (black rectangular frame) also differs. For example, when the determined reference vehicle is the vehicle in the lower right corner (the "L"-shaped indicator is displayed around it), the finally determined target virtual parking space (black rectangular frame) is as shown in FIG. 15.
  • the multiple parked vehicles may be distributed on one side of the driving road of the target vehicle, or may be distributed on both sides of the driving road of the target vehicle.
  • in the latter case, the target virtual parking space is determined based on the parking information of reference vehicles on both sides of the driving road of the target vehicle. That is, according to the above method, one reference vehicle is determined from the parked vehicles on each side of the driving road of the target vehicle.
  • a parking space is then determined on each side of the driving road according to the above method, and, based on the reference vehicles on the two sides of the driving road of the target vehicle, multiple candidate virtual parking spaces are determined in the parking spaces on both sides of the driving road.
  • the vehicle in the middle of the left figure of FIG. 16 is the target vehicle, and the other two vehicles are reference vehicles ("L"-shaped indicators are displayed around them); one is the reference vehicle on the left side of the target vehicle's driving road, and the other is the reference vehicle on the right side of the driving road.
  • in the parking space on the right side of the driving road, virtual vehicle models are arranged side by side starting from the side of the reference vehicle on the right, and in the parking space on the left side of the driving road, virtual vehicle models are arranged side by side near the reference vehicle on the left; the virtual vehicle models correspond one-to-one with the candidate virtual parking spaces (black rectangles), as shown in the right figure of FIG. 16.
  • the corresponding vehicle head orientations of the plurality of candidate virtual parking spaces can also be displayed. For example, as shown in FIG. 18 , an arrow is displayed in each candidate virtual parking space, and the arrow is used to indicate the direction of the vehicle head.
  • in another possible implementation, candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parking space. If there is exactly one candidate virtual parking space, it is used directly as the target virtual parking space. If there are multiple candidate virtual parking spaces, one of them is selected as the target virtual parking space.
  • the manner of determining the candidate virtual parking spaces based on the parking direction and the parking space of the reference vehicle refers to the above-mentioned first implementation manner, which will not be repeated here.
  • a candidate virtual parking space is selected from the plurality of candidate virtual parking spaces as a target virtual parking space and recommended to the user.
  • the multiple candidate virtual parking spaces are recommended to the user, and the user selects one candidate virtual parking space as the target virtual parking space.
  • for example, based on the distance between the current position of the target vehicle and each candidate virtual parking space, the nearest candidate virtual parking space may be selected from the multiple candidates and recommended to the user as the target virtual parking space (see the sketch below); of course, a candidate virtual parking space may also be selected and recommended to the user in other ways.
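  • a minimal Python sketch of the distance-based recommendation; the 2-D point representation of the slot centers and the vehicle position is an assumption.

```python
import math
from typing import List, Tuple

def recommend_slot(target_pos: Tuple[float, float],
                   slot_centers: List[Tuple[float, float]]) -> int:
    # Return the index of the candidate virtual parking space whose center
    # is closest to the target vehicle's current position.
    return min(range(len(slot_centers)),
               key=lambda i: math.dist(target_pos, slot_centers[i]))
```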
  • in one case, a fifth user interface is displayed, and the fifth user interface includes the recommended virtual parking space.
  • in response to a fourth operation of the user, a third user interface is displayed; the fourth operation is used to indicate that the user confirms the recommended virtual parking space as the target virtual parking space.
  • in another case, a fifth user interface is displayed, and the fifth user interface includes the recommended virtual parking space.
  • in response to a fifth operation of the user, a fourth user interface is displayed; the fourth user interface includes the multiple candidate virtual parking spaces, and the fifth operation is used to indicate that the user is not satisfied with the parking position of the recommended virtual parking space.
  • then, in response to a second operation of the user, a third user interface is displayed; the second operation is used to select a target virtual parking space from the multiple candidate virtual parking spaces.
  • that is, when a candidate virtual parking space is selected from the multiple candidates and recommended to the user as the target virtual parking space, the user may directly accept it, in which case the recommended virtual parking space is used as the target virtual parking space.
  • the user may be dissatisfied with the parking position of the recommended virtual parking space. In this case, all the candidate virtual parking spaces need to be recommended to the user, and the user selects a candidate virtual parking space as the target virtual parking space.
  • the user's fourth operation includes any one of the user's touch and tap actions on the fifth user interface.
  • the fifth user interface includes a "confirm” button, and the user determines the recommended virtual parking space as the target virtual parking space by touching the "confirm” button, which is not limited in this embodiment of the present application.
  • the fifth operation of the user includes any one of the user's touch and tap actions on the fifth user interface.
  • the fifth user interface includes a "cancel” button, and the user indicates that he is not satisfied with the currently recommended virtual parking space by touching the "cancel” button, which is not limited in this embodiment of the present application.
  • the fourth user interface displayed by the electronic device further includes an icon for indicating the recommended virtual parking space.
  • for example, the electronic device may display the fourth user interface shown in the left figure of FIG. 19, in which the recommended virtual parking space is marked by a five-pointed star icon.
  • the user may reselect a candidate virtual parking space as the target virtual parking space from the fourth user interface.
  • in another possible implementation, the second user interface further includes a parking space. In response to a sixth operation of the user, which is used to select a location in the parking space as the parking position of the target vehicle, the target virtual parking space is determined based on the parking direction of the reference vehicle and the parking position of the target vehicle.
  • the user selects a location in the parking space as the parking location of the target vehicle, and then determines the target virtual parking space based on the parking direction of the reference vehicle and the parking location of the target vehicle.
  • the sixth operation of the user includes any one of the user's touch, tap, and drag actions on the second user interface.
  • for example, the user selects the parking position of the target vehicle by touching in the parking space of the second user interface, or by tapping in the parking space of the second user interface, or by dragging another marker, such as the reference vehicle or another vehicle, into the parking space.
  • the parking direction of the reference vehicle can then be used directly as the parking direction of the target vehicle, and the target virtual parking space is determined at the parking position of the target vehicle in the parking space, so that the parking direction indicated by the target virtual parking space is the parking direction of the target vehicle.
  • alternatively, the electronic device displays a second user interface; the second user interface includes the reference vehicle and can also indicate the parking direction of the reference vehicle, and the parking direction of the reference vehicle is used as the reference parking direction.
  • in response to a third operation of the user, which is used to adjust the reference parking direction, the adjusted parking direction is determined as the parking direction of the target vehicle.
  • the target virtual parking space is determined at the parking position of the target vehicle in the parking space, so that the parking direction indicated by the target virtual parking space is the parking direction of the target vehicle.
  • in the second user interface shown in FIG. 20, the shaded area represents the parking space. In the left figure of FIG. 20, the user first taps the reference vehicle in the second user interface and then taps the parking space, so that the electronic device displays the parking position of the target vehicle (the black rectangular frame in FIG. 20) in the right figure of FIG. 20.
  • the shaded area in FIG. 21 represents a parking space.
  • the above content is the implementation process of determining the target virtual parking space when there are parked vehicles around the target vehicle. In some cases, there may be no parked vehicles around the target vehicle.
  • in that case, the electronic device performs three-dimensional measurement of the parking space to determine its depth, determines the parking direction of the target vehicle based on the ratio between the depth of the parking space and the body length of the target vehicle, determines the parking position of the target vehicle in the parking space, and then determines the target virtual parking space.
  • if the ratio between the depth of the parking space and the body length of the target vehicle is greater than a first ratio threshold, the body direction of the target vehicle is determined to be vertical relative to the road baseline. If the ratio is smaller than a second ratio threshold, the body direction of the target vehicle is determined to be horizontal relative to the road baseline. If the ratio is less than the first ratio threshold but greater than the second ratio threshold, the body direction of the target vehicle is determined to be oblique relative to the road baseline, and the oblique angle is the arcsine of the ratio of the parking space depth to the body length of the target vehicle (a sketch of this rule follows the example thresholds below).
  • the first ratio threshold and the second ratio threshold are preset and can be adjusted according to different requirements.
  • for example, the first ratio threshold is 0.9 and the second ratio threshold is 0.7.
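  • a minimal Python sketch of the depth-to-length rule, using the example thresholds 0.9 and 0.7 from the text; the return convention (a label plus an angle relative to the road baseline) is an assumption.

```python
import math

def body_direction(space_depth: float, body_length: float,
                   r1: float = 0.9, r2: float = 0.7):
    ratio = space_depth / body_length
    if ratio > r1:
        return "vertical", math.pi / 2   # body perpendicular to the road baseline
    if ratio < r2:
        return "horizontal", 0.0         # body parallel to the road baseline
    # Oblique: the inclination is the arcsine of the depth/length ratio.
    return "oblique", math.asin(ratio)
```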
  • when determining the parking position of the target vehicle in the parking space, the above method can be referred to: the user selects a position in the parking space as the parking position of the target vehicle.
  • the electronic device can also refer to the above method to determine multiple candidate virtual parking spaces in the parking space according to the parking direction of the target vehicle, and the user selects a candidate virtual parking space as the target virtual parking space.
  • for the method by which the user selects a location in the parking space as the parking position of the target vehicle, and the method by which the user selects a candidate virtual parking space as the target virtual parking space from multiple candidates, refer to the previous description; they are not repeated here.
  • in the embodiment of the present application, a vehicle is selected from the one or more parked vehicles around the target vehicle as the reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle; this ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking.
  • the parking direction of the first vehicle can be accurately obtained.
  • in addition, multiple regions can be determined through feature clustering of the pixels included in the ground area, and uncommon parking areas can be identified according to the semantic categories of the multiple regions, thereby improving the effect of parking area recognition.
  • FIG. 22 is a flow chart of a display method for assisting parking provided by an embodiment of the present application, and the method can be applied to the above-mentioned electronic device. Please refer to FIG. 22 , the method includes the following steps.
  • Step 2201 Display the first user interface, the first user interface is used to display the environment information around the target vehicle, the target vehicle is a vehicle to be parked, and the environment information around the target vehicle includes parking information of one or more parked vehicles.
  • the first user interface includes one or more operation identifiers, and the one or more operation identifiers are in one-to-one correspondence with the one or more parked vehicles.
  • the environment information displayed on the first user interface is image information acquired by a camera or radar.
  • the environment information displayed on the first user interface is virtual environment information generated according to information acquired by the sensor.
  • for the relevant content of step 2201, refer to the relevant description in step 202, which is not repeated here.
  • Step 2202 In response to the user's first operation, display a second user interface, the second user interface includes a reference vehicle, and the reference vehicle is one of the one or more parked vehicles.
  • the user's first operation includes any one of the user's touch, tap and slide actions on the first user interface.
  • the second user interface further includes a second vehicle, the second vehicle being any of the one or more parked vehicles other than the reference vehicle, the reference vehicle being displayed differently from the second vehicle.
  • the display color of the reference vehicle is different from that of the second vehicle, or the outline of the reference vehicle is different from that of the second vehicle, or the background texture of the reference vehicle is different from that of the second vehicle.
  • the second user interface further includes an indication mark, and the indication mark is used to indicate the reference vehicle.
  • for the relevant content of step 2202, refer to the relevant description in step 202, which is not repeated here.
  • Step 2203 Display a third user interface, the third user interface includes a target virtual parking space, and the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
  • a fourth user interface is displayed, and the fourth user interface includes a plurality of candidate virtual parking spaces.
  • a third user interface is displayed, and the target virtual parking space is one of the plurality of candidate virtual parking spaces.
  • the third user interface also includes an icon indicating the target vehicle.
  • the third user interface further displays a parking space, and the target virtual parking space is located in the parking space.
  • for the relevant content of step 2203, refer to the relevant description in step 203, which is not repeated here.
  • in the embodiment of the present application, the environment information around the target vehicle is displayed so that the user can determine the reference vehicle by operating a virtual vehicle model or an operation identifier, and the target virtual parking space is determined with reference to the parking information of that vehicle; this ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent alignment with the reference vehicle.
  • multiple candidate virtual parking spaces can be displayed so that the user can select a satisfactory target virtual parking space, thereby satisfying the individual needs of the user.
  • Fig. 23 is a schematic structural diagram of a device for determining a virtual parking space provided by an embodiment of the present application.
  • the device can be implemented as part or all of an electronic device by software, hardware or a combination of the two.
  • the electronic device may be the electronic device shown in FIG. 1 above.
  • the device includes: an environment information acquisition module 2301 , a reference vehicle determination module 2302 and a virtual parking space determination module 2303 .
  • the environment information acquisition module 2301 is used to acquire the environment information around the target vehicle, the target vehicle is a vehicle to be parked, and the environment information includes the parking information of one or more parked vehicles;
  • a reference vehicle determining module 2302 configured to determine a reference vehicle based on the parking information of one or more parked vehicles, where the reference vehicle is one of the one or more parked vehicles;
  • a virtual parking space determination module 2303 configured to determine a target virtual parking space based on the parking information of the reference vehicle, where the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
  • the reference vehicle determination module 2302 includes:
  • the first interface display submodule is used to display the first user interface; the first user interface includes the parking positions and parking directions of the one or more parked vehicles, and the parking positions and parking directions of the one or more parked vehicles are determined based on the parking information of the one or more parked vehicles;
  • the second interface display sub-module is configured to display a second user interface in response to a first user operation, the second user interface includes a reference vehicle, and the first operation is used to instruct to select a reference vehicle from one or more parked vehicles.
  • the reference vehicle determination module 2302 includes:
  • a parking information determination submodule configured to determine the parking position and parking direction of one or more parked vehicles based on the parking information of one or more parked vehicles;
  • the reference vehicle determination sub-module is configured to determine the reference vehicle by using a preset model based on the parking positions and parking directions of one or more parked vehicles.
  • the virtual parking space determination module 2303 includes:
  • the parking direction determination submodule is used to determine the parking direction of the reference vehicle based on the parking information of the reference vehicle;
  • a parking space determination submodule configured to determine a parkable space based on the parking information of the one or more parked vehicles;
  • the virtual parking space determination submodule is used to determine the target virtual parking space based on the parking direction of the reference vehicle and the parkable space.
  • the virtual parking space determination submodule is specifically used for:
  • a target virtual parking space is determined from a plurality of candidate virtual parking spaces.
  • the first vehicle is any vehicle in the one or more parked vehicles, and the parking information determination submodule is specifically used for:
  • the attribute information of the multiple key points and the attribute information of the multiple key lines of the first vehicle are input into the pose estimation model to determine the parking direction of the first vehicle.
  • the attribute information of a key point includes at least one of the key point position, key point category and key point visibility, where key point visibility is used to indicate whether the corresponding key point is occluded;
  • the attribute information of a key line includes at least one of the key line center position, key line visibility, key line inclination and key line length, where key line visibility is used to indicate whether the corresponding key line is occluded.
  • the environment information around the target vehicle includes at least one of visual data and radar data.
  • the parking direction of the target vehicle includes a head orientation of the target vehicle and a body direction of the target vehicle, and the body direction of the target vehicle is a direction of the body of the target vehicle relative to the body of the reference vehicle.
  • the body direction of the target vehicle includes being parallel to, perpendicular to, or inclined to the body of the reference vehicle.
  • the environment information includes parking information of a plurality of parked vehicles, the plurality of parked vehicles are distributed on both sides of the driving road of the target vehicle, and the target virtual parking space is determined based on the parking information of reference vehicles on both sides of the driving road.
  • a vehicle is selected from the one or more parked vehicles around the target vehicle as the reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle, which ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking.
  • the parking direction of the first vehicle can be accurately obtained.
  • multiple regions can be determined through feature clustering of the pixels included in the ground region, and uncommon parking areas can be identified according to the semantic categories of the multiple regions, thereby improving the effect of parking area recognition.
  • when the virtual parking space determination device determines a virtual parking space, the division into the above functional modules is only an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the device is divided into different functional modules to complete all or part of the functions described above.
  • the device for determining a virtual parking space provided by the above embodiment and the embodiment of the method for determining a virtual parking space belong to the same concept; for the specific implementation process, refer to the method embodiment, which is not repeated here.
  • Fig. 24 is a schematic structural diagram of a display device for assisting parking provided by an embodiment of the present application.
  • the device can be implemented as part or all of an electronic device by software, hardware or a combination of the two.
  • the electronic device can be the electronic device shown in Fig. 1 above.
  • the device includes: a first interface display module 2401 , a second interface display module 2402 and a third interface display module 2403 .
  • the first interface display module 2401 is used to display the first user interface, where the first user interface is used to display the environment information around the target vehicle, the target vehicle is a vehicle to be parked, and the environment information includes parking information of one or more parked vehicles;
  • the second interface display module 2402 is configured to display a second user interface in response to the user's first operation, the second user interface includes a reference vehicle, and the reference vehicle is one of the one or more parked vehicles;
  • the third interface display module 2403 is configured to display a third user interface, the third user interface includes a target virtual parking space, and the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
  • the second user interface further includes a second vehicle, and the second vehicle is any vehicle in the one or more parked vehicles except the reference vehicle;
  • the reference vehicle is displayed differently than the second vehicle.
  • the second user interface further includes an indication mark, which is used to indicate the reference vehicle.
  • the third interface display module is specifically used for:
  • displaying a fourth user interface, where the fourth user interface includes a plurality of candidate virtual parking spaces;
  • displaying a third user interface in response to a second operation of the user, where the target virtual parking space is one of the plurality of candidate virtual parking spaces;
  • the third user interface further displays a parkable space, and the target virtual parking space is located in the parkable space.
  • the first user interface includes one or more operation identifiers, and the one or more operation identifiers are in one-to-one correspondence with the one or more parked vehicles.
  • the environmental information displayed on the first user interface is image information acquired by a camera or radar.
  • the environment information displayed on the first user interface is virtual environment information generated according to information acquired by the sensor.
  • the third user interface further includes an icon for indicating the target vehicle.
  • the user's first operation includes any one of the user's touch, tap and slide actions on the first user interface.
  • the environment information around the target vehicle is displayed, so that the user can determine the reference vehicle by operating a virtual vehicle model or an operation identifier, and the target virtual parking space is determined with reference to the parking information of the reference vehicle, which ensures that the target vehicle, after being automatically parked based on the virtual parking space, forms a consistent arrangement with the reference vehicle.
  • the multiple candidate virtual parking spaces may be displayed, so that the user can select a satisfactory target virtual parking space, thereby satisfying the user's individual needs.
  • FIG. 25 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the electronic device may be the electronic device 101 shown in FIG. 1 .
  • the electronic device includes at least one processor 2501 , a communication bus 2502 , a memory 2503 and at least one communication interface 2504 .
  • the processor 2501 may be a general-purpose central processing unit (CPU), a network processor (NP), a microprocessor, or one or more integrated circuits for implementing the solution of this application, for example an application-specific integrated circuit (ASIC), a programmable logic device (PLD) or a combination thereof.
  • the aforementioned PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL) or any combination thereof.
  • the communication bus 2502 is used to transfer information between the above-mentioned components.
  • the communication bus 2502 can be divided into an address bus, a data bus, a control bus and so on. For ease of representation, only one thick line is shown in the figure, but this does not mean that there is only one bus or one type of bus.
  • Memory 2503 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), an optical disc (including a compact disc read-only memory (CD-ROM), a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory 2503 may exist independently, and is connected to the processor 2501 through the communication bus 2502 .
  • the memory 2503 can also be integrated with the processor 2501.
  • the communication interface 2504 uses any transceiver-like device to communicate with other devices or a communication network.
  • the communication interface 2504 includes a wired communication interface, and may also include a wireless communication interface.
  • the wired communication interface may be an Ethernet interface, for example.
  • the Ethernet interface may be an optical interface, an electrical interface, or a combination thereof.
  • the wireless communication interface may be a wireless local area network (wireless local area networks, WLAN) interface, a cellular network communication interface, or a combination thereof.
  • the processor 2501 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 25 .
  • an electronic device may include multiple processors, such as processor 2501 and processor 2505 as shown in FIG. 25 .
  • processors can be a single-core processor or a multi-core processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data such as computer program instructions.
  • the electronic device may further include an output device 2506 and an input device 2507 .
  • Output device 2506 is in communication with processor 2501 and can display information in a variety of ways.
  • the output device 2506 may be a liquid crystal display (liquid crystal display, LCD), a light emitting diode (light emitting diode, LED) display device, a cathode ray tube (cathode ray tube, CRT) display device, or a projector (projector), etc.
  • the input device 2507 communicates with the processor 2501 and can receive user input in various ways.
  • the input device 2507 may be a mouse, a keyboard, a touch screen device, or a sensing device, among others.
  • the memory 2503 is used to store the program code 2510 for implementing the solution of the present application, and the processor 2501 can execute the program code 2510 stored in the memory 2503 .
  • the program code 2510 may include one or more software modules, and the electronic device may implement the methods provided in the above embodiments through the processor 2501 and the program code 2510 in the memory 2503 .
  • FIG. 26 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device may be the above-mentioned electronic device.
  • the terminal device includes a sensor unit 1110 , a calculation unit 1120 , a storage unit 1140 and an interaction unit 1130 .
  • the sensor unit 1110 usually includes visual sensors (such as cameras), depth sensors, IMUs, laser sensors, etc.;
  • the computing unit 1120 usually includes a CPU, GPU, cache, registers, etc., and is mainly used to run the operating system;
  • the storage unit 1140 mainly includes internal memory and external storage, and is mainly used for reading and writing the user's local and temporary data;
  • the interaction unit 1130 mainly includes a display screen, a touch panel, a speaker, a microphone, etc., and is mainly used for interacting with the user, obtaining user input, and presenting algorithm effects.
  • FIG. 27 is a schematic structural diagram of a terminal device provided by an embodiment of the present application.
  • the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, and a battery 142 , antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193 , a display screen 194, and a subscriber identification module (subscriber identification module, SIM) card interface 195, etc.
  • the sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, etc.
  • the structure shown in the embodiment of the present application does not constitute a specific limitation on the terminal device 100 .
  • the terminal device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units; for example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor 110 may execute a computer program to implement any of the methods in the embodiments of the present application.
  • the controller may be the nerve center and command center of the terminal device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory, which avoids repeated access, reduces the waiting time of the processor 110 and thus improves system efficiency.
  • processor 110 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship between the modules shown in the embodiment of the present application is only a schematic illustration, and does not constitute a structural limitation of the terminal device 100 .
  • the terminal device 100 may also adopt different interface connection modes in the foregoing embodiments, or a combination of multiple interface connection modes.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 194 , the camera 193 , and the wireless communication module 160 .
  • the wireless communication function of the terminal device 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
  • the terminal device 100 may use a wireless communication function to communicate with other devices.
  • the terminal device 100 may communicate with the second electronic device, the terminal device 100 establishes a screen projection connection with the second electronic device, the terminal device 100 outputs screen projection data to the second electronic device, and so on.
  • the screen projection data output by the terminal device 100 may be audio and video data.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the terminal device 100 can be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the terminal device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modem processor, and convert it into electromagnetic waves through the antenna 1 to radiate out.
  • at least part of the functional modules of the mobile communication module 150 may be set in the processor 110 .
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be set in the same device.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs sound signals through audio equipment (not limited to speaker 170A, receiver 170B, etc.), or displays images or videos through display screen 194 .
  • the modem processor may be a stand-alone device.
  • the modem processor may be independent from the processor 110, and be set in the same device as the mobile communication module 150 or other functional modules.
  • the wireless communication module 160 can provide wireless communication solutions applied on the terminal device 100, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 110.
  • the wireless communication module 160 can also receive the signal to be sent from the processor 110, frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 for radiation.
  • the antenna 1 of the terminal device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the terminal device 100 can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR techniques, etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the terminal device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the terminal device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the display screen 194 may be used to display various interfaces output by the system of the terminal device 100 .
  • the terminal device 100 can realize the shooting function through the ISP, the camera 193 , the video codec, the GPU, the display screen 194 and the application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • light is transmitted through the lens to the photosensitive element of the camera, where the optical signal is converted into an electrical signal; the photosensitive element transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the terminal device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals.
  • Video codecs are used to compress or decompress digital video.
  • the terminal device 100 may support one or more video codecs.
  • the terminal device 100 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
  • the NPU is a neural-network (NN) computing processor.
  • the NPU can quickly process input information and continuously learn by itself.
  • Applications such as intelligent cognition of the terminal device 100 can be implemented through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, for example saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the terminal device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the storage program area may store an operating system, at least one application program required by a function (such as the methods in the embodiments of the present application), and the like.
  • the storage data area can store data created during the use of the terminal device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the terminal device 100 may implement an audio function through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, and an application processor.
  • an audio module 170 can be used to play the sound corresponding to the video. For example, when the display screen 194 displays a video playing picture, the audio module 170 outputs the sound of the video playing.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
  • Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • Receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
  • the microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals.
  • the earphone interface 170D is used for connecting wired earphones.
  • the earphone interface 170D can be a USB interface 130, or a 3.5mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and convert the pressure signal into an electrical signal.
  • pressure sensor 180A may be disposed on display screen 194 .
  • the gyroscope sensor 180B can be used to determine the motion posture of the terminal device 100 .
  • the air pressure sensor 180C is used to measure air pressure.
  • the acceleration sensor 180E can detect the acceleration of the terminal device 100 in various directions (including three axes or six axes). When the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of the terminal device, and can be applied in scenarios such as landscape/portrait switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the ambient light sensor 180L is used for sensing ambient light brightness.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the temperature sensor 180J is used to detect temperature.
  • Touch sensor 180K is also known as a "touch panel".
  • the touch sensor 180K can be disposed on the display screen 194, and the touch sensor 180K and the display screen 194 form a touch screen, also called a “touch screen”.
  • the touch sensor 180K is used to detect a touch operation on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • Visual output related to the touch operation can be provided through the display screen 194 .
  • the touch sensor 180K may also be disposed on the surface of the terminal device 100 , which is different from the position of the display screen 194 .
  • the keys 190 include a power key, a volume key and the like.
  • the key 190 may be a mechanical key or a touch key.
  • the terminal device 100 may receive key input and generate key signal input related to user settings and function control of the terminal device 100 .
  • the motor 191 can generate a vibrating reminder.
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the SIM card interface 195 is used for connecting a SIM card.
  • all or part of the above embodiments may be implemented by software, hardware, firmware or any combination thereof.
  • when implemented using software, they may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, a special purpose computer, a computer network or other programmable devices.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center in a wired (e.g. coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g. infrared, radio, microwave) manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer, or may be a data storage device such as a server or a data center integrated with one or more available media.
  • the available medium may be a magnetic medium (for example, a floppy disk, hard disk or magnetic tape), an optical medium (for example, a digital versatile disc (DVD)) or a semiconductor medium (for example, a solid state disk (SSD)), etc.
  • the information (including but not limited to user equipment information, user personal information, etc.), data (including but not limited to data used for analysis, stored data, displayed data, etc.) and signals involved in this application are all authorized by the user or fully authorized by all parties, and the collection, use and processing of the relevant data must comply with the relevant laws, regulations and standards of the relevant countries and regions.
  • the environment information around the target vehicle involved in the embodiments of the present application is obtained with sufficient authorization.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Traffic Control Systems (AREA)

Abstract

A virtual parking space determination method, a display method, an apparatus, a device, a medium and a program. The method comprises: acquiring environment information around a target vehicle, the target vehicle being a vehicle to be parked, and the environment information comprising parking information of one or more parked vehicles (201); determining a reference vehicle based on the parking information of the one or more parked vehicles, the reference vehicle being one of the one or more parked vehicles (202); and determining a target virtual parking space based on the parking information of the reference vehicle, the target virtual parking space being used to indicate the parking position and parking direction of the target vehicle (203). The method ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking.

Description

Virtual parking space determination method, display method, apparatus, device, medium and program
This application claims priority to Chinese Patent Application No. 202111266615.5, filed on October 28, 2021 and entitled "Virtual parking space determination method, display method, apparatus, device, medium and program", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of automatic parking technologies, and in particular to a virtual parking space determination method, a display method, an apparatus, a device, a medium and a program.
Background
Automatic parking technology automatically parks a vehicle into a parking space by detecting the actual environment around the vehicle in real time. When a vehicle is parked into a parking space by means of automatic parking, a virtual parking space needs to be determined first, and automatic parking is then performed based on the virtual parking space. How to determine the virtual parking space has therefore become an urgent problem to be solved.
Summary
This application provides a virtual parking space determination method, a display method, an apparatus, a device, a medium and a program, which can determine a virtual parking space and thereby enable automatic parking. The technical solutions are as follows:
According to a first aspect, a virtual parking space determination method is provided. In the method, environment information around a target vehicle is acquired, where the target vehicle is a vehicle to be parked and the environment information includes parking information of one or more parked vehicles; a reference vehicle is determined based on the parking information of the one or more parked vehicles, where the reference vehicle is one of the one or more parked vehicles; and a target virtual parking space is determined based on the parking information of the reference vehicle, where the target virtual parking space is used to indicate the parking position and parking direction of the target vehicle.
In the technical solution provided by this application, one vehicle is selected from the one or more parked vehicles around the target vehicle as the reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle. This ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking. Determining the reference vehicle first and then determining the target virtual parking space based on it allows the virtual parking space to be determined more quickly and accurately. Whether the reference vehicle is selected by the user or determined by the system, its parking position and parking direction are relatively accurate or preferable, which improves the coordination between the parked target vehicle and its surroundings, prevents the target vehicle from unreasonably obstructing other vehicles that are driving or parking, and also makes it easier for the user to drive the target vehicle out when it is needed. The environment information around the target vehicle includes at least one of visual data and radar data, where the radar data includes ultrasonic radar data, lidar data and millimeter-wave radar data. In other words, the technical solution provided by this application is applicable to at least one kind of data, which broadens its scope of application.
In the technical solution provided by this application, the parked vehicles may be located in an unmarked parking area, for example an unmarked parking lot, a hotel entrance, or either side of a road or passage; they may also be parked on marked parking spaces, in particular in cases where a parked vehicle is not parked within the area indicated by the marked lines and thus prevents the target vehicle from being normally parked into the adjacent marked space.
Depending on the data included in the environment information, there are multiple ways to acquire the environment information around the target vehicle. For example, an on-board surround-view camera captures the actual environment around the target vehicle to obtain visual data, such as a surround-view image. Sensors such as ultrasonic radar, lidar and millimeter-wave radar capture the actual environment around the target vehicle to obtain radar data. In the following, the virtual parking space determination method provided by this application is explained in detail by taking the case where the environment information around the target vehicle is a surround-view image as an example.
Since the one or more parked vehicles are vehicles already parked around the target vehicle, when the environment information around the target vehicle is a surround-view image, the surround-view image includes the parking information of the one or more parked vehicles. Here, "multiple parked vehicles" means two or more parked vehicles.
There are multiple ways to determine the reference vehicle based on the parking information of the one or more parked vehicles; two of these implementations are introduced below.
In a first implementation, a first user interface is displayed, where the first user interface includes the parking positions and parking directions of the one or more parked vehicles, and the parking positions and parking directions of the one or more parked vehicles are determined according to the parking information of the one or more parked vehicles. In response to a first operation of the user, a second user interface is displayed, where the second user interface includes the reference vehicle, and the first operation is used to instruct selection of the reference vehicle from the one or more parked vehicles.
That is, after the first user interface is displayed, the user triggers the first operation based on the first user interface. When the electronic device detects the first operation of the user, it displays the second user interface in response, and the second user interface includes the reference vehicle; in this way, the reference vehicle is determined from the one or more parked vehicles.
Since the first user interface displays the parking positions and parking directions of the one or more parked vehicles around the target vehicle, the user can learn the environment information around the target vehicle through the first user interface and, with reference to that information, select the reference vehicle from the one or more parked vehicles, so that the finally selected reference vehicle meets the user's actual requirements, thereby satisfying the user's individual needs.
The first user interface may take multiple forms, and the way the user selects the reference vehicle differs accordingly; these forms are introduced separately below.
In some embodiments, the first user interface displays a surround-view image around the target vehicle and a vehicle selection area, where the vehicle selection area includes one or more operation identifiers in one-to-one correspondence with the one or more parked vehicles. Upon detecting the user's first operation on any one of the one or more operation identifiers, the second user interface is displayed in response to the user's first operation.
That is, after the first user interface displays the surround-view image and the vehicle selection area, the user triggers the first operation on any operation identifier included in the vehicle selection area. The parked vehicle corresponding to that operation identifier is then determined as the reference vehicle, and the second user interface is displayed.
Since the surround-view image around the target vehicle is a real image of the environment, displaying it on the first user interface allows the user to perceive the environment information around the target vehicle more intuitively.
As an example, the surround-view image also includes the target vehicle. Optionally, the first user interface further includes an icon for indicating the target vehicle. In this way, when the user selects the reference vehicle from the one or more parked vehicles, the icon provides a reference and makes it easy for the user to distinguish the target vehicle from the parked vehicles. Similarly, the second user interface may also include the target vehicle, and may further include an icon for indicating the target vehicle.
It should be noted that the above surround-view image is a two-dimensional or three-dimensional surround-view image.
In other embodiments, the parking positions and parking directions of the one or more parked vehicles are determined based on their parking information, and one or more virtual vehicle models are displayed in the first user interface according to those parking positions and parking directions, the one or more virtual vehicle models being in one-to-one correspondence with the one or more parked vehicles. Upon detecting the user's first operation on any one of the one or more virtual vehicle models, the second user interface is displayed in response to the user's first operation.
That is, after the first user interface displays the one or more virtual vehicle models, the user triggers the first operation on any one of them. The parked vehicle corresponding to that virtual vehicle model is then determined as the reference vehicle, and the second user interface is displayed.
When the first user interface displays one or more virtual vehicle models, the user can operate on the models directly; there is no need to provide a separate vehicle selection area, and hence no need for the user to work out which operation identifier corresponds to which parked vehicle, which improves the efficiency of determining the reference vehicle.
As an example, the first user interface further includes a virtual vehicle model corresponding to the target vehicle, as well as an icon for indicating the target vehicle. In this way, when the user selects the reference vehicle from the one or more parked vehicles, the icon provides a reference and makes it easy for the user to distinguish the target vehicle from the parked vehicles. Similarly, the second user interface may also include the virtual vehicle model corresponding to the target vehicle, and may further include an icon for indicating the target vehicle.
It should be noted that the above virtual vehicle models may be two-dimensional or three-dimensional.
The user's first operation includes any one of a touch, tap and slide action on the first user interface. For example, taking the virtual vehicle models, the user selects the reference vehicle by touching, tapping or sliding a virtual vehicle model. Likewise, taking the operation identifiers, the user selects the reference vehicle by touching, tapping or sliding an operation identifier; this application does not limit this.
In one possible implementation, the second user interface includes only the reference vehicle, i.e. it does not include other parked vehicles. Alternatively, in another possible implementation, the second user interface includes not only the reference vehicle but also other parked vehicles. For example, the second user interface further includes a second vehicle, the second vehicle being any vehicle among the one or more parked vehicles other than the reference vehicle, and the reference vehicle is displayed differently from the second vehicle. For instance, the reference vehicle is displayed in a different color from the other parked vehicles, or its outline has a different line weight, or its background texture differs, and so on; in short, the user can visually distinguish the reference vehicle from the other parked vehicles included in the second user interface.
In some embodiments, the second user interface further includes an indication mark used to indicate the reference vehicle.
Since the parking position and parking direction of each of the one or more parked vehicles are determined in the same way, an arbitrary vehicle among the one or more parked vehicles is taken as an example below. For ease of description, this arbitrary vehicle is called the first vehicle. That is, for the first vehicle among the one or more parked vehicles, the electronic device inputs the surround-view image into a vehicle detection model to obtain the parking position of the first vehicle and a partial image, the partial image being the image region occupied by the first vehicle in the surround-view image. Afterwards, the parking direction of the first vehicle is determined according to steps (1)-(2) below.
(1) Input the parking information of the first vehicle into a key information detection model to determine attribute information of multiple key points and attribute information of multiple key lines of the first vehicle.
In some embodiments, the parking information of the first vehicle is the partial image of the first vehicle; the partial image is input into the key information detection model to obtain the attribute information of the multiple key points and the multiple key lines of the first vehicle output by the model.
The attribute information of a key point includes at least one of the key point position, key point category and key point visibility, where key point visibility indicates whether the corresponding key point is occluded. The attribute information of a key line includes at least one of the key line center position, key line visibility, key line inclination and key line length, where key line visibility indicates whether the corresponding key line is occluded.
The key points include the four wheel center points, the body center point, the logo center point, the two tail-light center points, and so on. The key lines of the first vehicle include the vertical center line at the positions where the front and rear license plates are mounted, the vertical center line through the logo and the roof, and so on. These key points and key lines can be combined in multiple ways to determine the parking direction of the first vehicle.
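As a minimal illustrative sketch, the attributes listed above could be carried in simple records like the following; the field names are assumptions for illustration, since the application specifies only which attributes exist, not how they are represented:

```python
from dataclasses import dataclass

@dataclass
class KeyPoint:
    position: tuple[float, float]  # pixel coordinates in the partial image
    category: str                  # e.g. wheel center, body center, logo center
    visible: bool                  # False if the key point is occluded

@dataclass
class KeyLine:
    center: tuple[float, float]    # position of the key line's center point
    inclination: float             # tilt of the line, in radians
    length: float
    visible: bool                  # False if the key line is occluded
```

A pose estimation model as described below would then take a collection of such records (or an equivalent tensor encoding) as its input.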
(2) Input the attribute information of the multiple key points and the multiple key lines of the first vehicle into a pose estimation model to determine the parking direction of the first vehicle.
In some embodiments, the attribute information of the multiple key points and key lines of the first vehicle is input into the pose estimation model to obtain the parking direction of the first vehicle in the image coordinate system of the partial image output by the model, and that parking direction is then transformed into the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
As described above, the parking direction includes the heading and the body direction. To describe the body pose more precisely, the body angle also needs to be determined in addition to the body direction. In this case, the parking direction output by the pose estimation model includes not only the heading and the body direction but also the body angle. Since the extrinsic parameters of the on-board surround-view camera affect the body angle to some extent, extrinsic-parameter compensation needs to be performed after the pose estimation model outputs the body angle. That is, a compensation angle is determined, the compensation angle being the angle between the line connecting the focal point of the on-board surround-view camera to the center point of the first vehicle and the imaging plane of the camera. The body angle output by the pose estimation model is added to the compensation angle to obtain the body angle of the first vehicle in the image coordinate system of the partial image. Afterwards, the parking direction of the first vehicle in the image coordinate system of the partial image is transformed into the body coordinate system of the target vehicle to obtain the body direction of the first vehicle.
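The following is a minimal sketch of the compensation step just described, reduced to planar geometry. The function names, the 2D ground-plane simplification, and the single-rotation frame transform are assumptions for illustration, not the application's implementation:

```python
import math

def compensation_angle(camera_focal_xy, vehicle_center_xy, imaging_plane_normal_xy):
    """Angle between the focal-point-to-vehicle-center line and the imaging plane
    (all inputs are 2D ground-plane coordinates, an assumed simplification)."""
    dx = vehicle_center_xy[0] - camera_focal_xy[0]
    dy = vehicle_center_xy[1] - camera_focal_xy[1]
    line_angle = math.atan2(dy, dx)
    # The imaging plane is perpendicular to its normal, so its in-plane
    # direction is the normal rotated by 90 degrees.
    plane_angle = math.atan2(imaging_plane_normal_xy[1],
                             imaging_plane_normal_xy[0]) + math.pi / 2
    return (line_angle - plane_angle + math.pi) % (2 * math.pi) - math.pi

def body_angle_in_target_frame(raw_body_angle, comp_angle, image_to_body_rotation):
    """Compensate the model's raw body angle, then rotate into the body frame."""
    compensated = raw_body_angle + comp_angle      # extrinsic compensation
    return compensated + image_to_body_rotation    # image frame -> target body frame
```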
The technical solution provided by this application determines a vehicle's parking direction from the attribute information of key points and key lines. For the same vehicle, the attribute information of different key points and key lines can be obtained fairly easily through simulation data, CAD and other means, so a large number of samples can be collected. Training the key information detection model and the pose estimation model on these samples improves the accuracy and robustness of determining the parking direction.
When occlusion of the first vehicle reduces the stability of the determined key points and key lines, this application can determine the parking direction of the first vehicle based on multiple surround-view images in order to improve accuracy. That is, multiple surround-view images are fused to determine the parking direction of the first vehicle.
In a second implementation, the parking positions and parking directions of the one or more parked vehicles are determined based on their parking information, and the reference vehicle is determined from those positions and directions using a preset model.
As an example, the parking positions and parking directions of the one or more parked vehicles are determined based on their parking information. A parkable space is determined based on the parking positions, the parkable space being the region of the parking area other than the parking positions of the one or more parked vehicles. The distance between the target vehicle and the parkable space and the travel direction of the target vehicle are determined, and the distance, the travel direction and the parking directions of the one or more parked vehicles are input into the preset model to determine the reference vehicle.
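A minimal sketch of assembling these inputs is shown below. The feature layout and the `score_fn` callable are assumptions for illustration: the application specifies only the inputs and that the model is trained in advance (for example by reinforcement learning), not its form.

```python
import numpy as np

def pick_reference_vehicle(score_fn, dist_to_space, travel_direction, parked_directions):
    """Score each parked vehicle with the preset model and return the best index.

    score_fn stands in for the pretrained model: it maps a feature vector
    [distance to parkable space, travel direction, parked direction] to a score.
    """
    scores = [score_fn(np.array([dist_to_space, travel_direction, d]))
              for d in parked_directions]
    return int(np.argmax(scores))
```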
It should be noted that the preset model is trained in advance based on multiple sample vehicles, for example by means of reinforcement learning. In addition, for the process of determining the parking positions and parking directions of the one or more parked vehicles based on their parking information, refer to the description in the first implementation above, which is not repeated here. The way the parkable space is determined based on the parking positions of the one or more parked vehicles is described later and is not expanded on here.
This application can determine the reference vehicle not only through a preset model but also according to a parking posture rule. That is, the parking positions and parking directions of the one or more parked vehicles are determined based on their parking information, and the reference vehicle is then determined from those positions and directions using the parking posture rule.
The parking posture rule is a rule that determines the reference vehicle according to a priority over body directions. For example, the priority of body directions from high to low is: vertical, horizontal, oblique. That is, if among the one or more parked vehicles there is a vehicle whose body direction is vertical, that vehicle is determined as the reference vehicle. If there is no vertically parked vehicle but there is a horizontally parked vehicle, the horizontally parked vehicle is determined as the reference vehicle. If there is neither a vertically nor a horizontally parked vehicle but there is an obliquely parked vehicle, the obliquely parked vehicle is determined as the reference vehicle.
It should be noted that, when multiple parked vehicles satisfy the condition at the same time, one of them is selected at random as the reference vehicle, or one is selected according to some other rule, for example the vehicle closest to the target vehicle; this application does not limit this.
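A sketch of one way this priority rule could be written, with the nearest-vehicle tie-breaker mentioned above, follows; the dictionary layout of the parked-vehicle records is an assumption for illustration:

```python
PRIORITY = ["vertical", "horizontal", "oblique"]  # high to low, as in the rule above

def reference_by_posture_rule(parked, target_xy):
    """parked: list of dicts like {"body_direction": str, "position": (x, y)}."""
    for direction in PRIORITY:
        candidates = [v for v in parked if v["body_direction"] == direction]
        if candidates:
            # Tie-breaker: choose the candidate closest to the target vehicle.
            return min(candidates,
                       key=lambda v: (v["position"][0] - target_xy[0]) ** 2
                                   + (v["position"][1] - target_xy[1]) ** 2)
    return None  # no parked vehicle matched any direction class
```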
In the second implementation above, once the parking positions and parking directions of the one or more parked vehicles have been determined, the reference vehicle can be determined automatically using the preset model or the parking posture rule, which avoids manual selection by the user and thus simplifies the user's operations.
The parking direction of the target vehicle includes the heading and the body direction of the target vehicle. The body direction of the target vehicle is the direction of its body relative to a reference object, where the reference object includes a road baseline, the reference vehicle, or another reference object. For example, taking the body direction of the target vehicle relative to the reference vehicle, the target vehicle's body may be parallel, perpendicular or oblique to the body of the reference vehicle.
Determining the target virtual parking space based on the parking information of the reference vehicle includes: determining the parking direction of the reference vehicle based on its parking information; determining a parkable space based on the parking information of the one or more parked vehicles; and determining the target virtual parking space based on the parking direction of the reference vehicle and the parkable space.
In some embodiments, the ground region in the surround-view image is extracted, the feature of each of the multiple pixels included in the ground region is extracted, the multiple pixels are clustered based on their features to obtain multiple regions, a parking region is determined from the multiple regions, and the parkable space within the parking region is determined based on the parking information of the one or more parked vehicles.
As an example, the surround-view image is used as the input of a ground segmentation model to obtain the ground region output by the model. The ground region is used as the input of a feature extraction model to obtain the features of the multiple pixels included in the ground region. Based on these features, the pixels are clustered to obtain multiple regions. The region feature corresponding to each region is determined, and the semantic category of each region is determined based on the region features. If among the multiple regions there is a region whose semantic category is the parking category, that region is determined as the parking region, and the parkable space is determined from the parking region based on the parking information of the one or more parked vehicles. If no region has the parking category, the parking region is determined from the multiple regions based on their region features and semantic categories, and the parkable space is then determined from the parking region based on the parking information of the one or more parked vehicles.
It should be noted that the ground region includes parking regions, road regions, manhole-cover regions, lawn regions, and so on. Clustering the pixels based on their features means grouping pixels whose features are close to each other into one region, thereby obtaining multiple regions.
There are multiple ways to determine the region feature corresponding to each region. For example, for one of the regions, the features of all pixels in the region are averaged to obtain the region feature. Alternatively, the features of all pixels in the region are fused to obtain the region feature; for example, the features of all pixels in the region are assembled into a matrix, which is used as the region feature.
Determining the semantic category of each region based on the region features includes: for each region, computing the distances between its region feature and each region feature in a stored correspondence between region features and semantic categories, and determining the semantic category corresponding to the stored region feature closest to that region's feature as the semantic category of the region.
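As a minimal sketch of the clustering and nearest-feature lookup described above, assuming the per-pixel features have already been extracted; the choice of k-means and Euclidean distance here is illustrative, since the application does not fix a particular clustering algorithm or metric:

```python
import numpy as np
from sklearn.cluster import KMeans

def label_ground_regions(pixel_features, stored_features, stored_categories, n_regions=4):
    """pixel_features: (N, D) array; stored_features: (K, D); stored_categories: len-K list."""
    # Group pixels whose features are close to each other into regions.
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(pixel_features)

    region_categories = {}
    for region_id in range(n_regions):
        # Region feature: mean of the pixel features in the region (one of the
        # fusion options mentioned above).
        region_feature = pixel_features[labels == region_id].mean(axis=0)
        # The nearest stored region feature decides the semantic category.
        dists = np.linalg.norm(stored_features - region_feature, axis=1)
        region_categories[region_id] = stored_categories[int(np.argmin(dists))]
    return labels, region_categories
```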
It should be noted that, to improve the clustering effect and the accuracy of the semantic category of each region, this application can perform multi-frame fusion over multiple surround-view images. That is, for each of the multiple surround-view images, the ground region is determined as described above, yielding multiple ground regions. The mutually overlapping part of these ground regions is then obtained, the feature of each pixel in the overlapping region is extracted as described above, and clustering is then performed to determine the parkable space.
There are multiple ways to determine the target virtual parking space based on the parking direction of the reference vehicle and the parkable space; they are described separately below.
In a first implementation, multiple candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parkable space, and the target virtual parking space is determined from the multiple candidates in response to a second operation of the user.
As an example, multiple candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parkable space, and a fourth user interface including the multiple candidate virtual parking spaces is displayed. In response to the user's second operation, a third user interface including the target virtual parking space is displayed.
That is, after the multiple candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parkable space, the fourth user interface is displayed. The user triggers the second operation in the fourth user interface to determine the target virtual parking space from the multiple candidates.
In some embodiments, the third user interface also displays the parkable space, and the target virtual parking space is located within the parkable space.
Determining the multiple candidate virtual parking spaces based on the parking direction of the reference vehicle and the parkable space includes: using the parking direction of the reference vehicle as the parking direction of the target vehicle, and determining multiple candidate virtual parking spaces in the parkable space such that the parking direction indicated by the candidates is the parking direction of the target vehicle. That is, the parking direction of the reference vehicle is used directly as the parking direction of the target vehicle, and multiple candidate virtual parking spaces are then determined in the parkable space.
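A minimal sketch of laying out candidate slots in the parkable space at the reference direction is given below. The straight-line free-space model and the fixed slot pitch are assumptions for illustration; the application does not prescribe a layout scheme:

```python
import math

def candidate_slots(space_start, space_axis_angle, space_length,
                    slot_pitch, body_angle):
    """Place candidate slot centers at a fixed pitch along the parkable space.

    space_start: (x, y) where the usable space begins; space_axis_angle: direction
    along which free space extends; body_angle: parking direction taken from the
    reference vehicle; slot_pitch: slot width plus clearance (an assumed layout).
    Returns a list of ((center_x, center_y), body_angle) candidates.
    """
    ux, uy = math.cos(space_axis_angle), math.sin(space_axis_angle)
    count = int(space_length // slot_pitch)
    return [((space_start[0] + ux * (i + 0.5) * slot_pitch,
              space_start[1] + uy * (i + 0.5) * slot_pitch),
             body_angle)
            for i in range(count)]
```

Each returned tuple pairs a slot center with the orientation inherited from the reference vehicle, which is exactly the property that makes the parked result align with it.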
Of course, the user may be dissatisfied with the parking direction of the reference vehicle, so the electronic device displays the second user interface, which includes the reference vehicle and can also indicate its parking direction, using the parking direction of the reference vehicle as a reference parking direction. In this case, in response to a third operation of the user, where the third operation is used to adjust the reference parking direction, the adjusted parking direction is determined as the parking direction of the target vehicle. Based on the parking direction of the target vehicle, multiple candidate virtual parking spaces are determined in the parkable space such that the parking direction indicated by the candidates is the parking direction of the target vehicle.
Optionally, when there are multiple parked vehicles around the target vehicle, they may be distributed on one side of the target vehicle's driving road or on both sides. When the multiple parked vehicles are distributed on both sides of the driving road, the target virtual parking space is determined based on the parking information of reference vehicles on both sides of the driving road. That is, following the method above, one reference vehicle may be determined from the parked vehicles on each side of the driving road. In this way, one parkable space is determined on each side of the road as described above, multiple candidate virtual parking spaces are determined in the parkable spaces on both sides based on the respective reference vehicles, and the target virtual parking space is then determined from them as described above.
Optionally, whether the multiple candidate virtual parking spaces are represented by multiple virtual vehicle models, by black rectangular boxes, or in some other display manner, the headings corresponding to the multiple candidates may also be displayed after the candidates are determined.
In a second implementation, candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parkable space. If there is one candidate, it is directly used as the target virtual parking space. If there are multiple candidates, one of them is selected as the target virtual parking space.
For the way the candidate virtual parking spaces are determined based on the parking direction of the reference vehicle and the parkable space, refer to the first implementation above, which is not repeated here.
In addition, there are multiple ways to select one candidate virtual parking space from the multiple candidates as the target virtual parking space. For example, one candidate is selected and recommended to the user as the target virtual parking space. Alternatively, the multiple candidates are recommended to the user, and the user selects one as the target virtual parking space. When one candidate is selected and recommended to the user, the distance between the target vehicle's current position and each candidate may be taken into account, and the nearest candidate is selected and recommended as the target virtual parking space; of course, a candidate may also be selected for recommendation in other ways.
As an example, after one candidate virtual parking space is selected from the multiple candidates, a fifth user interface including the recommended virtual parking space is displayed. In response to a fourth operation of the user, the third user interface is displayed, the fourth operation being used to indicate that the user confirms the recommended virtual parking space as the target virtual parking space.
As another example, after one candidate virtual parking space is selected from the multiple candidates, a fifth user interface including the recommended virtual parking space is displayed. In response to a fifth operation of the user, a fourth user interface including the multiple candidates is displayed, the fifth operation being used to indicate that the user is dissatisfied with the parking position of the recommended virtual parking space. In response to a second operation of the user, the third user interface is displayed, the second operation being used to select the target virtual parking space from the multiple candidates.
That is, when one candidate is selected and recommended to the user as the target virtual parking space, the user may directly accept the recommendation, i.e. the recommended virtual parking space becomes the target virtual parking space. Of course, the user may be dissatisfied with its parking position, in which case all the candidates need to be recommended to the user, who selects one as the target virtual parking space.
In a third implementation, the second user interface also includes the parkable space. In response to a sixth operation of the user, where the sixth operation is used to select a position in the parkable space as the parking position of the target vehicle, the target virtual parking space is determined based on the parking direction of the reference vehicle and the parking position of the target vehicle.
That is, the user selects a position in the parkable space as the parking position of the target vehicle, and the target virtual parking space is then determined based on the parking direction of the reference vehicle and that parking position.
When determining the target virtual parking space based on the parking direction of the reference vehicle and the parking position of the target vehicle, the parking direction of the reference vehicle may be used directly as the parking direction of the target vehicle, and the target virtual parking space is determined at the target vehicle's parking position in the parkable space such that the direction indicated by the target virtual parking space is the parking direction of the target vehicle. Of course, the user may be dissatisfied with the parking direction of the reference vehicle, so the electronic device displays the second user interface, which includes the reference vehicle and can also indicate its parking direction as a reference parking direction. In this case, in response to a third operation of the user, which adjusts the reference parking direction, the adjusted direction is determined as the parking direction of the target vehicle. Based on that direction, the target virtual parking space is determined at the target vehicle's parking position in the parkable space so that the direction indicated by the target virtual parking space is the parking direction of the target vehicle.
The above describes how the target virtual parking space is determined when there are parked vehicles around the target vehicle. In some cases there may be no parked vehicles around the target vehicle. In that case, the electronic device performs three-dimensional measurement of the parkable space to determine its depth. Afterwards, the parking direction of the target vehicle is determined based on the ratio between the depth of the parkable space and the body length of the target vehicle, the parking position of the target vehicle is determined in the parkable space, and the target virtual parking space is thereby determined.
As an example, if the ratio between the depth of the parkable space and the body length of the target vehicle is greater than a first ratio threshold, the body direction of the target vehicle is determined to be vertical relative to the road baseline. If the ratio is smaller than a second ratio threshold, the body direction is determined to be horizontal relative to the road baseline. If the ratio is smaller than the first ratio threshold but greater than the second ratio threshold, the body direction is determined to be oblique relative to the road baseline, with an oblique angle equal to the arcsine of the ratio of the depth of the parkable space to the body length of the target vehicle.
It should be noted that the first and second ratio thresholds are set in advance and can be adjusted according to different needs. For example, the first ratio threshold is 0.9 and the second ratio threshold is 0.7.
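A direct sketch of this depth-ratio rule, using the example thresholds of 0.9 and 0.7 given above (the returned angle convention relative to the road baseline is an assumption for illustration):

```python
import math

FIRST_RATIO = 0.9   # above this: park vertical to the road baseline
SECOND_RATIO = 0.7  # below this: park horizontal to the road baseline

def body_direction_from_depth(space_depth, body_length):
    """Return (direction label, angle in radians relative to the road baseline)."""
    ratio = space_depth / body_length
    if ratio > FIRST_RATIO:
        return "vertical", math.pi / 2
    if ratio < SECOND_RATIO:
        return "horizontal", 0.0
    # In between: oblique, at the arcsine of the depth-to-length ratio.
    return "oblique", math.asin(ratio)
```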
In addition, when determining the parking position of the target vehicle in the parkable space, the method above may be followed: the user selects a position in the parkable space as the parking position of the target vehicle. Of course, the electronic device may also, following the method above, determine multiple candidate virtual parking spaces in the parkable space according to the parking direction of the target vehicle, and the user selects one candidate as the target virtual parking space. For the way the user selects a position in the parkable space as the parking position, and the way the user selects one candidate as the target virtual parking space, refer to the preceding description, which is not repeated here.
According to a second aspect, a display method for assisting parking is provided. In the method, a first user interface is displayed, the first user interface being used to display environment information around a target vehicle, where the target vehicle is a vehicle to be parked and the environment information includes parking information of one or more parked vehicles. In response to a first operation of a user, a second user interface is displayed, the second user interface including a reference vehicle, the reference vehicle being one of the one or more parked vehicles. A third user interface is displayed, the third user interface including a target virtual parking space, the target virtual parking space being used to indicate the parking position and parking direction of the target vehicle.
Optionally, the second user interface further includes a second vehicle, the second vehicle being any vehicle among the one or more parked vehicles other than the reference vehicle; the reference vehicle is displayed differently from the second vehicle.
Optionally, the second user interface further includes an indication mark used to indicate the reference vehicle.
Optionally, displaying the third user interface includes: displaying a fourth user interface, the fourth user interface including multiple candidate virtual parking spaces; and displaying the third user interface in response to a second operation of the user, the target virtual parking space being one of the multiple candidates.
Optionally, the third user interface also displays a parkable space, and the target virtual parking space is located within the parkable space.
Optionally, the first user interface includes one or more operation identifiers in one-to-one correspondence with the one or more parked vehicles.
Optionally, the environment information displayed on the first user interface is image information acquired by a camera or radar.
Optionally, the environment information displayed on the first user interface is virtual environment information generated from information acquired by a sensor.
Optionally, the third user interface further includes an icon for indicating the target vehicle.
Optionally, the user's first operation includes any one of a touch, tap and slide action by the user on the first user interface.
According to a third aspect, a virtual parking space determination apparatus is provided. The apparatus has the function of implementing the behavior of the virtual parking space determination method of the first aspect, and includes at least one module for implementing the virtual parking space determination method provided by the first aspect.
According to a fourth aspect, a display apparatus for assisting parking is provided. The apparatus has the function of implementing the behavior of the display method for assisting parking of the second aspect, and includes at least one module for implementing the display method for assisting parking provided by the second aspect.
According to a fifth aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory being used to store a computer program for executing the virtual parking space determination method provided by the first aspect. The processor is configured to execute the computer program stored in the memory to implement the virtual parking space determination method of the first aspect.
Optionally, the electronic device may further include a communication bus used to establish a connection between the processor and the memory.
According to a sixth aspect, an electronic device is provided. The electronic device includes a processor and a memory, the memory being used to store a computer program for executing the display method for assisting parking provided by the second aspect. The processor is configured to execute the computer program stored in the memory to implement the display method for assisting parking of the second aspect.
Optionally, the electronic device may further include a communication bus used to establish a connection between the processor and the memory.
According to a seventh aspect, a computer-readable storage medium is provided, the storage medium storing instructions that, when run on a computer, cause the computer to perform the steps of the virtual parking space determination method of the first aspect.
According to an eighth aspect, a computer-readable storage medium is provided, the storage medium storing instructions that, when run on a computer, cause the computer to perform the steps of the display method for assisting parking of the second aspect.
According to a ninth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the steps of the virtual parking space determination method of the first aspect. In other words, a computer program is provided which, when run on a computer, causes the computer to perform the steps of the virtual parking space determination method of the first aspect.
According to a tenth aspect, a computer program product containing instructions is provided which, when run on a computer, causes the computer to perform the steps of the display method for assisting parking of the second aspect. In other words, a computer program is provided which, when run on a computer, causes the computer to perform the steps of the display method for assisting parking of the second aspect.
The technical effects obtained by the third to tenth aspects are similar to those obtained by the corresponding technical means in the first and second aspects, and are not repeated here.
The technical solution provided by this application includes at least the following beneficial effects:
In the technical solution provided by this application, one vehicle is selected from the one or more parked vehicles around the target vehicle as the reference vehicle, and the target virtual parking space is determined based on the parking direction of the reference vehicle. This ensures that, after the target vehicle is automatically parked based on the virtual parking space, it forms a consistent arrangement with the selected reference vehicle, thereby improving the orderliness and convenience of parking.
Brief Description of the Drawings
Fig. 1 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
Fig. 2 is a flowchart of a method for determining a virtual parking space provided by an embodiment of this application;
Fig. 3 is a schematic diagram of a first user interface provided by an embodiment of this application;
Fig. 4 is a schematic diagram of another first user interface provided by an embodiment of this application;
Fig. 5 is a schematic diagram of a second user interface provided by an embodiment of this application;
Fig. 6 is a schematic diagram of key points and key lines provided by an embodiment of this application;
Fig. 7 is a flowchart of determining the parking direction of a first vehicle provided by an embodiment of this application;
Fig. 8 is a flowchart of another way of determining the parking direction of a first vehicle provided by an embodiment of this application;
Fig. 9 is a flowchart of determining the semantic category of each region provided by an embodiment of this application;
Fig. 10 is a flowchart of determining the semantic category of each region provided by an embodiment of this application;
Fig. 11 is a schematic diagram of a user adjusting the parking direction provided by an embodiment of this application;
Fig. 12 is a schematic diagram of determining multiple candidate virtual parking spaces provided by an embodiment of this application;
Fig. 13 is a schematic diagram of another way of determining multiple candidate virtual parking spaces provided by an embodiment of this application;
Fig. 14 is a schematic diagram of a target virtual parking space determined based on a reference vehicle provided by an embodiment of this application;
Fig. 15 is a schematic diagram of another target virtual parking space determined based on a reference vehicle provided by an embodiment of this application;
Fig. 16 is a schematic diagram of yet another way of determining multiple candidate virtual parking spaces provided by an embodiment of this application;
Fig. 17 is a schematic diagram of still another way of determining multiple candidate virtual parking spaces provided by an embodiment of this application;
Fig. 18 is a schematic diagram of displaying the heading of a virtual parking space provided by an embodiment of this application;
Fig. 19 is a schematic diagram of a user selecting a target virtual parking space provided by an embodiment of this application;
Fig. 20 is a schematic diagram of a user selecting the parking position of a target vehicle provided by an embodiment of this application;
Fig. 21 is a schematic diagram of another way of a user selecting the parking position of a target vehicle provided by an embodiment of this application;
Fig. 22 is a flowchart of a display method for assisting parking provided by an embodiment of this application;
Fig. 23 is a block diagram of a virtual parking space determination apparatus provided by an embodiment of this application;
Fig. 24 is a block diagram of a display apparatus for assisting parking provided by an embodiment of this application;
Fig. 25 is a schematic structural diagram of an electronic device provided by an embodiment of this application;
Fig. 26 is a schematic structural diagram of a terminal device provided by an embodiment of this application;
Fig. 27 is a schematic structural diagram of another terminal device provided by an embodiment of this application.
Detailed Description of the Embodiments
To make the objectives, technical solutions and advantages of this application clearer, the embodiments of this application are described in further detail below with reference to the accompanying drawings.
For ease of understanding, before the method for determining a virtual parking space provided by the embodiments of this application is explained in detail, the terms involved in the embodiments are explained first.
Virtual parking space: a parking space constructed virtually when a vehicle parks automatically. When there is an interactive interface, the virtual parking space is displayed in the interface; when the vehicle finally parks into the virtual parking space, the vehicle's position is consistent with its position in the real parking area.
Parking direction: the parking direction includes the heading and the body direction. The body direction is the direction of the vehicle body relative to a reference object. The reference object includes a road baseline, a reference vehicle or another reference object.
The heading includes facing the travel direction of the target vehicle and facing away from it; the body direction includes the eight directions due east, due south, due west, due north, southeast, northeast, southwest and northwest. To describe the body pose more precisely, the body angle also needs to be determined in addition to the body direction; the body angle is the angle between the vehicle body and the reference object.
It should be noted that the above eight directions are only an example; in other examples, eight directions can be divided arbitrarily at 45° intervals, which is not limited in the embodiments of this application.
Marked parking space: a parking space with marking lines on the ground, or with obvious indications (such as a solid block of color, bricks of a different texture, physical delimiters, etc.).
Unmarked parking area: a parking area without marking lines or parking space indications on the ground.
Parking space search: the process in which a vehicle looks for a parking space before parking.
The virtual parking space determination method provided by the embodiments of this application can be applied to multiple scenarios, such as parking space recommendation, auto parking assist (APA), remote parking assist (RPA), automated valet parking (AVP), home zone parking (HZP) and other driver assistance or automated driving systems. It is applicable both to marked parking spaces and to unmarked parking areas. For unmarked parking areas, for example unmarked parking lots, entrances of hotels and office premises, and temporary parking on both sides of roads or passages, the technical solution of this application can automatically generate a virtual parking space according to the parking information of vehicles parked in the parking area and the spatial information of the parking area, without the user having to adjust the position of the virtual parking space repeatedly. For marked parking spaces, the embodiments of this application can also park the target vehicle into the parking area desired by the user without being constrained by the markings; for example, when the marked space the user wants is partially occupied by a vehicle parked on one side, so that the target vehicle cannot be parked as indicated by the markings, space on the other side can be borrowed so that the target vehicle is parked parallel to the vehicle on the occupied side.
APA is the most common parking assistance system in daily life. While the target vehicle searches for a space at low speed, the APA system uses ultrasonic radar to acquire environment information around the target vehicle, helps the user search the parkable space for a virtual parking space large enough for the target vehicle and, after the user sends a parking command, performs automatic parking based on that virtual parking space.
RPA is developed on the basis of APA and is mainly applied to narrow parking spaces, to solve the problem that the doors are difficult to open after parking. For example, the user first activates the RPA system inside the vehicle, the RPA system searches for and determines a virtual parking space, the user sends a parking command from outside the vehicle using a remote control device, and the RPA system performs automatic parking based on the virtual parking space.
In AVP, the system searches for and determines a virtual parking space, performs automatic parking based on it, and then sends the location information of the parking position to the user.
In HZP, the target vehicle first drives to a fixed parking space, a virtual parking space is determined within the fixed space, and the target vehicle is then parked into the virtual parking space. Before automatic parking, however, the user needs to record the fixed driving route and the fixed parking space once so that the target vehicle "learns" the process; after the "learning" is completed, the target vehicle can automatically park in or out starting from the starting point on one side of the fixed route.
The execution subject of the embodiments of this application is an on-board terminal. That is, after the on-board terminal determines the virtual parking space in the manner provided by the embodiments of this application, automatic parking of the target vehicle can be performed. Of course, when the target vehicle currently needs to be parked in a parking lot, the execution subject is the on-board terminal or a parking lot management device. When the execution subject is a parking lot management device, the management device determines the virtual parking space according to the method provided by the embodiments and sends the information about the virtual parking space to the on-board terminal, which performs the automatic parking of the target vehicle.
For ease of description, the execution subjects of the embodiments of this application are collectively referred to as an electronic device. Please refer to Fig. 1, which is a schematic structural diagram of an electronic device according to an embodiment of this application. The electronic device includes an environment information acquisition module, a computation module and a human-machine interaction module. The environment information acquisition module is used to acquire environment information around the target vehicle, such as the parking information of one or more parked vehicles. The computation module and the human-machine interaction module cooperate to determine the target virtual parking space, for example the parking position and parking direction of the target vehicle. Finally, the target virtual parking space is displayed through the human-machine interaction module.
The electronic device is any electronic product that can interact with a user in one or more ways, such as via a keyboard, touchpad, touchscreen, remote control, voice interaction or handwriting device, for example a personal computer (PC), a mobile phone, a smartphone, a personal digital assistant (PDA), a wearable device, a pocket PC (PPC), a tablet computer, a smart in-vehicle unit, and so on.
Those skilled in the art should understand that the above application scenarios and electronic devices are only examples; other existing or future application scenarios and electronic devices, where applicable to the embodiments of this application, shall also fall within the protection scope of the embodiments and are hereby incorporated by reference.
The method for determining a virtual parking space provided by the embodiments of this application is explained in detail below.
Fig. 2 is a flowchart of a method for determining a virtual parking space provided by an embodiment of this application; the method can be applied to the electronic device above. Referring to Fig. 2, the method includes the following steps.
Step 201: Acquire environment information around a target vehicle, where the target vehicle is a vehicle to be parked and the environment information around the target vehicle includes parking information of one or more parked vehicles.
The environment information around the target vehicle includes at least one of visual data and radar data, where the radar data includes ultrasonic radar data, lidar data and millimeter-wave radar data. In other words, the technical solution provided by this application is applicable to at least one kind of data, which broadens its scope of application.
Depending on the data included in the environment information, there are multiple ways to acquire the environment information around the target vehicle. For example, an on-board surround-view camera captures the actual environment around the target vehicle to obtain visual data, such as a surround-view image. Sensors such as ultrasonic radar, lidar and millimeter-wave radar capture the actual environment around the target vehicle to obtain radar data. In the following, the method for determining a virtual parking space provided by the embodiments of this application is explained in detail by taking the case where the environment information around the target vehicle is a surround-view image as an example.
Since the one or more parked vehicles are vehicles already parked around the target vehicle, when the environment information around the target vehicle is a surround-view image, the surround-view image includes the parking information of the one or more parked vehicles. Here, "multiple parked vehicles" means two or more parked vehicles.
Step 202: Determine a reference vehicle based on the parking information of the one or more parked vehicles, where the reference vehicle is one of the one or more parked vehicles.
There are multiple ways to determine the reference vehicle based on the parking information of the one or more parked vehicles; two of these implementations are introduced below.
In a first implementation, a first user interface is displayed, where the first user interface includes the parking positions and parking directions of the one or more parked vehicles, and the parking positions and parking directions are determined according to the parking information of the one or more parked vehicles. In response to a first operation of the user, a second user interface is displayed, where the second user interface includes the reference vehicle, and the first operation is used to instruct selection of the reference vehicle from the one or more parked vehicles.
That is, after the first user interface is displayed, the user triggers the first operation based on the first user interface. When the electronic device detects the first operation of the user, it displays the second user interface in response, and the second user interface includes the reference vehicle; in this way, the reference vehicle is determined from the one or more parked vehicles.
Since the first user interface displays the parking positions and parking directions of the one or more parked vehicles around the target vehicle, the user can learn the environment information around the target vehicle through the first user interface and, with reference to that information, select the reference vehicle from the one or more parked vehicles, so that the finally selected reference vehicle meets the user's actual requirements, thereby satisfying the user's individual needs.
The first user interface may take multiple forms, and the way the user selects the reference vehicle differs accordingly; these forms are introduced separately below.
In some embodiments, the first user interface displays a surround-view image around the target vehicle and a vehicle selection area, where the vehicle selection area includes one or more operation identifiers in one-to-one correspondence with the one or more parked vehicles. Upon detecting the user's first operation on any one of the one or more operation identifiers, the second user interface is displayed in response to the user's first operation.
That is, after the first user interface displays the surround-view image and the vehicle selection area, the user triggers the first operation on any operation identifier included in the vehicle selection area. The parked vehicle corresponding to that operation identifier is then determined as the reference vehicle, and the second user interface is displayed.
Since the surround-view image around the target vehicle is a real image of the environment, displaying it on the first user interface allows the user to perceive the environment information around the target vehicle more intuitively.
As an example, the surround-view image also includes the target vehicle. Optionally, the first user interface further includes an icon for indicating the target vehicle. In this way, when the user selects the reference vehicle from the one or more parked vehicles, the icon provides a reference and makes it easy for the user to distinguish the target vehicle from the parked vehicles. Similarly, the second user interface may also include the target vehicle, and may further include an icon for indicating the target vehicle.
It should be noted that, for either the first or the second user interface, when the interface displays parked vehicles and the target vehicle, an icon for indicating the target vehicle can be displayed in the interface in the manner above to distinguish the parked vehicles from the target vehicle. Of course, the parked vehicles and the target vehicle can also be distinguished in other ways. In addition, the surround-view image is a two-dimensional or three-dimensional surround-view image.
For example, the first user interface is shown in Fig. 3, which includes two areas: an area for displaying the surround-view image and a vehicle selection area. The surround-view image includes the target vehicle and two parked vehicles, and a triangular icon is displayed near the rear of the target vehicle. The operation identifiers in the vehicle selection area are represented by license plate numbers. The user selects the reference vehicle by selecting a plate number; for example, in Fig. 3 the user selects the vehicle with plate number "陕A·xxx12" as the reference vehicle.
In the top view of a vehicle in Fig. 3, the rear of the vehicle is outlined with a rectangle and the front with a trapezoid. That is, the rectangular outline marks the rear and the trapezoidal outline marks the front. For example, for the target vehicle, the end near the triangular icon is the rear and the end away from it is the front. The rear and front positions of the vehicles mentioned below are the same as described here and will not be repeated.
In other embodiments, the parking positions and parking directions of the one or more parked vehicles are determined based on their parking information, and one or more virtual vehicle models are displayed in the first user interface according to those parking positions and parking directions, the one or more virtual vehicle models being in one-to-one correspondence with the one or more parked vehicles. Upon detecting the user's first operation on any one of the one or more virtual vehicle models, the second user interface is displayed in response to the user's first operation.
That is, after the first user interface displays the one or more virtual vehicle models, the user triggers the first operation on any one of them. The parked vehicle corresponding to that virtual vehicle model is then determined as the reference vehicle, and the second user interface is displayed.
When the first user interface displays one or more virtual vehicle models, the user can operate on the models directly; there is no need to provide a separate vehicle selection area, and hence no need for the user to work out which operation identifier corresponds to which parked vehicle, which improves the efficiency of determining the reference vehicle.
As an example, the first user interface further includes a virtual vehicle model corresponding to the target vehicle, as well as an icon for indicating the target vehicle. In this way, when the user selects the reference vehicle from the one or more parked vehicles, the icon provides a reference and makes it easy for the user to distinguish the target vehicle from the parked vehicles. Similarly, the second user interface may also include the virtual vehicle model corresponding to the target vehicle, and may further include an icon for indicating the target vehicle.
It should be noted that, for either the first or the second user interface, when the interface includes the virtual vehicle models corresponding to the parked vehicles and to the target vehicle, an icon for indicating the target vehicle can be displayed in the interface in the manner above to distinguish the parked vehicles from the target vehicle. Of course, they can also be distinguished in other ways; for example, the virtual vehicle model corresponding to the target vehicle differs from those corresponding to the parked vehicles. In addition, the virtual vehicle models may be two-dimensional or three-dimensional.
For example, the first user interface is shown in Fig. 4, which includes three virtual vehicle models: the model corresponding to the target vehicle and the models corresponding to two parked vehicles, with a triangular icon displayed at the rear of the target vehicle's model. The user taps either of the two parked vehicles' models to determine the corresponding parked vehicle as the reference vehicle.
The user's first operation includes any one of a touch, tap and slide action on the first user interface. For example, taking the virtual vehicle models, the user selects the reference vehicle by touching, tapping or sliding a virtual vehicle model. Likewise, taking the operation identifiers, the user selects the reference vehicle by touching, tapping or sliding an operation identifier; the embodiments of this application do not limit this.
In one possible implementation, the second user interface includes only the reference vehicle, i.e. it does not include other parked vehicles. Alternatively, in another possible implementation, the second user interface includes not only the reference vehicle but also other parked vehicles. For example, the second user interface further includes a second vehicle, the second vehicle being any vehicle among the one or more parked vehicles other than the reference vehicle, and the reference vehicle is displayed differently from the second vehicle. For instance, the reference vehicle is displayed in a different color from the other parked vehicles, or its outline has a different line weight, or its background texture differs, and so on; in short, the user can visually distinguish the reference vehicle from the other parked vehicles included in the second user interface.
In some embodiments, the second user interface further includes an indication mark used to indicate the reference vehicle.
For example, please refer to Fig. 5, which is a schematic diagram of a second user interface provided by an embodiment of this application. In Fig. 5, the second user interface includes the reference vehicle and a second vehicle; the reference vehicle is on the right and the second vehicle on the left, and the outline of the reference vehicle differs in line weight from that of the second vehicle. In addition, an "L"-shaped indication mark is displayed around the reference vehicle in the second user interface to indicate it.
Since the parking position and parking direction of each of the one or more parked vehicles are determined in the same way, an arbitrary vehicle among the one or more parked vehicles is taken as an example below. For ease of description, this arbitrary vehicle is called the first vehicle. That is, for the first vehicle among the one or more parked vehicles, the electronic device inputs the surround-view image into a vehicle detection model to obtain the parking position of the first vehicle and a partial image, the partial image being the image region occupied by the first vehicle in the surround-view image. Afterwards, the parking direction of the first vehicle is determined according to steps (1)-(2) below.
(1) Input the parking information of the first vehicle into a key information detection model to determine attribute information of multiple key points and attribute information of multiple key lines of the first vehicle.
In some embodiments, the parking information of the first vehicle is the partial image of the first vehicle; the partial image is input into the key information detection model to obtain the attribute information of the multiple key points and the multiple key lines of the first vehicle output by the model.
The attribute information of a key point includes at least one of the key point position, key point category and key point visibility, where key point visibility indicates whether the corresponding key point is occluded. The attribute information of a key line includes at least one of the key line center position, key line visibility, key line inclination and key line length, where key line visibility indicates whether the corresponding key line is occluded.
The key points include the four wheel center points, the body center point, the logo center point, the two tail-light center points, and so on. The key lines of the first vehicle include the vertical center line at the positions where the front and rear license plates are mounted, the vertical center line through the logo and the roof, and so on. These key points and key lines can be combined in multiple ways to determine the parking direction of the first vehicle.
As an example, the four wheel center points and the body center point of the first vehicle are used as its key points, and the vertical center line at the positions where the front and rear license plates are mounted is used as its key line. As another example, the logo center point and the two tail-light center points of the first vehicle are used as its key points, and the vertical center line through the logo and the roof is used as its key line.
For example, as shown in Fig. 6, the key points include the four wheel center points and the body center point of the vehicle, and the key lines include the vertical center line at the positions where the front and rear license plates are mounted.
Optionally, the key information detection model can also output basic attribute information of the first vehicle, such as body dimensions, model and style, color, light state and door state.
It should be noted that the vehicle detection model and the key information detection model are trained in advance, and the embodiments of this application do not limit the structure of these two models; they may be neural networks or other structures.
(2) Input the attribute information of the multiple key points and the multiple key lines of the first vehicle into a pose estimation model to determine the parking direction of the first vehicle.
In some embodiments, the attribute information of the multiple key points and key lines of the first vehicle is input into the pose estimation model to obtain the parking direction of the first vehicle in the image coordinate system of the partial image output by the model, and that parking direction is then transformed into the body coordinate system of the target vehicle to obtain the parking direction of the first vehicle.
As described above, the parking direction includes the heading and the body direction. To describe the body pose more precisely, the body angle also needs to be determined in addition to the body direction. In this case, the parking direction output by the pose estimation model includes not only the heading and the body direction but also the body angle. Since the extrinsic parameters of the on-board surround-view camera affect the body angle to some extent, extrinsic-parameter compensation needs to be performed after the pose estimation model outputs the body angle. That is, a compensation angle is determined, the compensation angle being the angle between the line connecting the focal point of the on-board surround-view camera to the center point of the first vehicle and the imaging plane of the camera. The body angle output by the pose estimation model is added to the compensation angle to obtain the body angle of the first vehicle in the image coordinate system of the partial image. Afterwards, the parking direction of the first vehicle in the image coordinate system of the partial image is transformed into the body coordinate system of the target vehicle to obtain the body direction of the first vehicle.
It should be noted that the pose estimation model is trained in advance, and the embodiments of this application do not limit its structure; it may be a neural network or another structure. In addition, the embodiments of this application determine a vehicle's parking direction from the attribute information of key points and key lines. For the same vehicle, the attribute information of different key points and key lines can be obtained fairly easily through simulation data, CAD and other means, so a large number of samples can be collected; training the key information detection model and the pose estimation model on these samples improves the accuracy and robustness of determining the vehicle's parking direction.
When occlusion of the first vehicle reduces the stability of the determined key points and key lines, the embodiments of this application can determine the parking direction of the first vehicle based on multiple surround-view images in order to improve accuracy. That is, multiple surround-view images are fused to determine the parking direction of the first vehicle.
As an example, the partial image corresponding to the first vehicle is determined from each of multiple surround-view images to obtain multiple partial images. The multiple partial images are input into the key information detection model separately to obtain the attribute information of the multiple key points and key lines of the first vehicle in each partial image. Afterwards, the attribute information from each partial image is input into the pose estimation model separately to obtain multiple initial parking directions of the first vehicle output by the model, the multiple initial parking directions being in one-to-one correspondence with the multiple partial images. The multiple initial parking directions are averaged to obtain the parking direction of the first vehicle. Alternatively, a confidence is determined for each initial parking direction, and the initial parking directions are combined by a confidence-weighted sum to obtain the parking direction of the first vehicle.
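A minimal sketch of the confidence-weighted fusion just described is shown below. Averaging angles directly can wrap around at ±180°, so this sketch averages unit vectors instead; that circular-mean detail is an implementation assumption rather than something the application specifies:

```python
import math

def fuse_parking_directions(angles, confidences=None):
    """Fuse per-frame body-angle estimates (radians) into one direction.

    With no confidences this is a plain average; otherwise a weighted sum.
    Both are computed on unit vectors to stay stable across the +/-pi boundary.
    """
    if confidences is None:
        confidences = [1.0] * len(angles)
    sx = sum(w * math.cos(a) for a, w in zip(angles, confidences))
    sy = sum(w * math.sin(a) for a, w in zip(angles, confidences))
    return math.atan2(sy, sx)
```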
As another example, the first vehicle's local images are extracted from the multiple surround-view images and input together into the key information detection model to obtain the first vehicle's key-point and key-line attribute information, which is then input into the pose estimation model to obtain the first vehicle's parking direction.
For example, refer to FIG. 7, a flowchart of determining the first vehicle's parking direction according to an embodiment of this application. In FIG. 7, the first vehicle's parking information is input into the key information detection model to obtain the key-point and key-line attribute information, which is input into the pose estimation model to obtain the parking direction in the local image's coordinate system, comprising the heading, body direction, and body angle. The compensation angle is added to the body angle, and the heading, body direction, and compensated body angle in the local image's coordinate system are transformed into the target vehicle's body coordinate system to obtain the first vehicle's parking direction.
The above way of determining the first vehicle's parking position and direction is only one example; in practice other ways are possible. For example, referring to FIG. 8, the surround-view image is fed to a vehicle detection and orientation estimation model to obtain the first vehicle's parking position and its parking direction in the surround-view image's coordinate system (heading, body direction, and body angle); the compensation angle is added to the body angle, and the heading, body direction, and compensated body angle are transformed into the target vehicle's body coordinate system to obtain the first vehicle's parking direction.
In a second implementation, the parking positions and parking directions of the one or more parked vehicles are determined from their parking information, and the reference vehicle is determined from these positions and directions using a preset model.
As one example, the parking positions and directions are determined from the parking information; a parkable space is determined from the parking positions, the parkable space being the part of the parking region other than the positions occupied by the parked vehicles; the distance between the target vehicle and the parkable space and the target vehicle's travel direction are determined; and the distance, the travel direction, and the parked vehicles' parking directions are input into the preset model to determine the reference vehicle.
It should be noted that the preset model is trained in advance on multiple sample vehicles, for example through reinforcement learning. Determining the parking positions and directions from the parking information follows the description in the first implementation above and is not repeated here; determining the parkable space from the parking positions is described below.
Besides a preset model, the embodiments of this application can also determine the reference vehicle according to a parking posture rule: the parking positions and directions of the one or more parked vehicles are determined from their parking information, and the reference vehicle is then determined from them using the parking posture rule.
Here the parking posture rule selects the reference vehicle by priority of body direction, for example vertical > horizontal > diagonal, from highest to lowest. That is, if any parked vehicle's body direction is vertical, a vertical one is determined as the reference vehicle; if none is vertical but some are horizontal, a horizontal one is determined as the reference vehicle; if none is vertical or horizontal but some are diagonal, a diagonal one is determined as the reference vehicle.
The priority order of body directions is not limited to the above and may differ in other examples; the embodiments of this application do not limit it. In addition, when multiple parked vehicles satisfy the condition, one is chosen at random or by some other rule, for example the one closest to the target vehicle; the embodiments of this application do not limit this either.
In this second implementation, once the parking positions and directions of the parked vehicles are determined, the reference vehicle is determined automatically by a preset model or by the parking posture rule, sparing the user a manual selection and simplifying operation.
Optionally, the parking posture rule may instead select the reference vehicle by how often each body direction occurs: the occurrences of each body direction among the parked vehicles are counted, and a vehicle with the most frequent body direction is chosen as the reference vehicle. A sketch of both rule variants follows.
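The sketch below illustrates both rule variants. It assumes each parked-vehicle record carries an illustrative body_dir field taking values in "vertical", "horizontal", or "diagonal" and a pos coordinate; the tie-break (closest to the target vehicle) follows one of the options mentioned above. None of these names come from the application itself.

```python
from collections import Counter

def pick_reference_vehicle(parked, target_pos, rule="priority"):
    """Select a reference vehicle from already-parked vehicles.

    "priority" follows the vertical > horizontal > diagonal order,
    breaking ties by distance to the target vehicle; "frequency" picks
    the closest vehicle among those with the most common body direction."""
    if not parked:
        return None

    def dist(v):
        return ((v.pos[0] - target_pos[0]) ** 2 + (v.pos[1] - target_pos[1]) ** 2) ** 0.5

    if rule == "priority":
        for d in ("vertical", "horizontal", "diagonal"):
            group = [v for v in parked if v.body_dir == d]
            if group:
                return min(group, key=dist)   # tie-break: closest to target
        return None
    # "frequency" variant
    top_dir, _ = Counter(v.body_dir for v in parked).most_common(1)[0]
    return min((v for v in parked if v.body_dir == top_dir), key=dist)
```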
Step 203: Determine a target virtual parking slot based on the parking information of the reference vehicle, the target virtual parking slot indicating the parking position and parking direction of the target vehicle.
The target vehicle's parking direction includes its heading and body direction. The body direction is the direction of the target vehicle's body relative to a reference object, which may be the road baseline, the reference vehicle, or something else. For example, when the body direction is taken relative to the reference vehicle, the target vehicle's body may be parallel, perpendicular, or inclined to the reference vehicle's body.
Based on the reference vehicle's parking information, the target virtual parking slot is determined according to steps (1)-(3) below.
(1) Determine the reference vehicle's parking direction based on its parking information.
This follows the process of determining the first vehicle's parking direction in step 202 above and is not repeated here.
(2) Determine the parkable space based on the parking information of the one or more parked vehicles.
In some embodiments, the ground region in the surround-view image is extracted, a feature is extracted for each pixel in the ground region, the pixels are clustered by feature into multiple regions, a parking region is determined among these regions, and the parkable space within the parking region is determined from the parked vehicles' parking information.
As one example, the surround-view image is fed to a ground segmentation model, which outputs the ground region; the ground region is fed to a feature extraction model, which outputs the features of its pixels; the pixels are clustered by feature into multiple regions; a region feature is determined for each region, and each region's semantic class is determined from the region features. If some region's semantic class is the parking class, that region is determined as the parking region, and the parkable space within it is determined from the parked vehicles' parking information. If no region has the parking class, the parking region is determined from the regions' features and semantic classes, and the parkable space is then determined within it from the parked vehicles' parking information.
It should be noted that the ground segmentation model and the feature extraction model are trained in advance and their structures are not limited; either may be a neural network or another structure. The ground region includes the parking region, the road region, manhole-cover regions, lawn regions, and so on. Clustering the pixels by feature means grouping pixels whose features are close to one another into one region, yielding multiple regions.
A region's feature can be determined in several ways. For one region, the features of all its pixels may be averaged to obtain the region feature; or they may be fused, for example assembled into a matrix that serves as the region feature.
Determining each region's semantic class from the region features works as follows: for each region, compute the distance between its region feature and each region feature in a stored correspondence between region features and semantic classes, and assign to the region the semantic class of the stored region feature closest to its own.
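The clustering-plus-matching pipeline above can be sketched as follows. This is an illustrative reading, not the application's implementation: k-means from scikit-learn stands in for the unspecified clustering step, the mean pixel feature is used as the region feature (one of the options mentioned above), and stored_features/stored_labels stand in for the stored correspondence between region features and semantic classes.

```python
import numpy as np
from sklearn.cluster import KMeans

def region_semantics(pixel_features, stored_features, stored_labels, n_regions=8):
    """Cluster ground pixels by feature similarity, then label each cluster
    with the semantic class of the nearest stored region feature.

    pixel_features: (N, D) array, one feature row per ground pixel.
    stored_features: (M, D) array; stored_labels: M semantic class names."""
    assign = KMeans(n_clusters=n_regions, n_init=10).fit_predict(pixel_features)
    labels = {}
    for r in range(n_regions):
        region_feat = pixel_features[assign == r].mean(axis=0)   # mean pixel feature
        d = np.linalg.norm(stored_features - region_feat, axis=1)
        labels[r] = stored_labels[int(d.argmin())]               # nearest stored feature
    return assign, labels
```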
It should be noted that, to improve the clustering and the accuracy of each region's semantic class, the embodiments of this application can fuse multiple surround-view images across frames. That is, the ground region is determined in each surround-view image as above, yielding multiple ground regions; the area where these ground regions overlap is obtained; features are extracted for each pixel of the overlap as above; and clustering then determines the parkable space.
For example, refer to FIG. 9, a flowchart of determining each region's semantic class according to an embodiment of this application. In FIG. 9, the surround-view image is fed to the ground segmentation model to obtain the ground region; the ground region is fed to the feature extraction model to obtain the pixel features; feature clustering of the pixels yields multiple regions; each region's feature is obtained and matched against the region features in the stored feature-to-class correspondence to determine each region's semantic class.
Of course, the above is only one way to determine each region's semantic class; other ways are possible in practice. For example, referring to FIG. 10, the surround-view image is fed to a semantic segmentation model, which outputs the semantic class of each region in the image, the regions including those segmented from the ground region above.
In some embodiments, determining the parking region from the regions' features and semantic classes and then the parkable space proceeds as follows: among the regions whose semantic class is the road class, select the one whose region feature is farthest from the road feature; determine the parkable space in the selected region from the parked vehicles' parking information; if that space is large enough to park the target vehicle, determine the selected region as the parking region; otherwise select, among the remaining road-class regions, the one whose region feature is farthest from the road feature and repeat, until a parking region large enough to park the target vehicle is found. If no such parking region exists, prompt information is displayed asking the user to check the surroundings.
Determining the parkable space within the parking region from the parked vehicles' parking information proceeds as follows: mask the positions of the one or more parked vehicles in the parking region to obtain a first parkable area; detect obstacles in the first parkable area and mask the areas they occupy to obtain a second parkable area; perform regular quadrilateral processing on the second parkable area to obtain a third parkable area; and determine the space occupied by the third parkable area as the parkable space.
That is, the parked vehicles are projected onto the parking region based on their parking information and their projected areas are masked, yielding the first parkable area. Obstacles in the first parkable area are detected, projected onto it, and their projected areas masked, yielding the second parkable area. The position of every corner point of the second parkable area is obtained; the corner points are divided into groups of four; the quadrilateral area of each group is computed from its corner positions; the quadrilateral with the largest area is selected as the third parkable area; and the space it occupies is determined as the parkable space.
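The "regular quadrilateral processing" step can be read as a largest-quadrilateral search over the corner points of the masked free area. The sketch below shows only that step (the masking of parked vehicles and obstacles is assumed already done) and is an illustrative reading rather than the application's algorithm; the brute-force search over every group of four corners mirrors the description above, and the shoelace formula supplies the area.

```python
import numpy as np
from itertools import combinations

def shoelace(pts):
    """Area of a polygon given its vertices in order (shoelace formula)."""
    x, y = np.asarray(pts, float).T
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

def largest_quad(corner_points):
    """Pick the group of four corner points spanning the largest quadrilateral.

    corner_points: iterable of (x, y) corners of the second parkable area.
    Each candidate group is sorted around its centroid so that the four
    points form a simple polygon before the area is computed."""
    best, best_area = None, 0.0
    for quad in combinations(corner_points, 4):
        c = np.mean(quad, axis=0)
        ordered = sorted(quad, key=lambda p: np.arctan2(p[1] - c[1], p[0] - c[0]))
        area = shoelace(ordered)
        if area > best_area:
            best, best_area = tuple(ordered), area
    return best, best_area
```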
(3) Determine the target virtual parking slot based on the reference vehicle's parking direction and the parkable space.
There are several ways to do this, described in turn below.
In a first implementation, multiple candidate virtual parking slots are determined based on the reference vehicle's parking direction and the parkable space, and the target virtual parking slot is determined from the candidates in response to a second operation by the user.
As one example, the candidate slots are determined from the reference vehicle's parking direction and the parkable space, and a fourth user interface including the candidates is displayed. In response to the user's second operation, a third user interface including the target virtual parking slot is displayed.
That is, after the candidate slots are determined, the fourth user interface is displayed, and the user triggers the second operation in it to determine the target slot from the candidates.
In some embodiments, the third user interface also displays the parkable space, and the target virtual parking slot lies within it.
Determining the candidate slots from the reference vehicle's parking direction and the parkable space works as follows: take the reference vehicle's parking direction as the target vehicle's parking direction, and determine multiple candidate slots in the parkable space such that the direction they indicate is the target vehicle's parking direction. In other words, the reference vehicle's parking direction is used directly as the target vehicle's, and the candidates are then placed in the parkable space.
Of course, the user may be dissatisfied with the reference vehicle's parking direction, so the electronic device displays the second user interface, which includes the reference vehicle and can indicate its parking direction as a reference parking direction. In response to a third operation by the user that adjusts the reference parking direction, the adjusted direction is determined as the target vehicle's parking direction, and the candidate slots are determined in the parkable space so that the direction they indicate is the target vehicle's parking direction.
Candidate slots can be arranged in the parkable space in several ways, sketched below: arranged side by side starting from the side of the parkable space nearest the reference vehicle, or from right to left, or from left to right.
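A minimal sketch of the side-by-side arrangement, assuming axis-aligned slots laid out along the reference vehicle's row; every parameter name and the start option are illustrative, and a real planner would work in the target vehicle's body coordinate system rather than a flat interval.

```python
def layout_candidates(free_xmin, free_xmax, y, slot_w, slot_len,
                      start="near_reference", gap=0.3):
    """Lay candidate slots side by side inside [free_xmin, free_xmax].

    start="near_reference" grows the row from the left edge (taken here
    as the side nearest the reference vehicle); "right_to_left" grows it
    from the opposite edge. Returns (x, y, width, length) rectangles."""
    slots, step = [], slot_w + gap
    n = int((free_xmax - free_xmin + gap) // step)
    for i in range(n):
        if start == "right_to_left":
            x0 = free_xmax - slot_w - i * step
        else:
            x0 = free_xmin + i * step
        slots.append((x0, y, slot_w, slot_len))
    return slots
```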
用户的第二操作包括用户在第四用户界面上的触摸、敲击和滑动动作中的任意一种。例如,用户通过触摸候选虚拟车位来确定目标虚拟车位,或者敲击候选虚拟车位来确定目标虚拟车位,又或者滑动候选虚拟车位来确定目标虚拟车位,本申请实施例对此不做限定。用户的第三操作包括用户在第二用户界面上的点击、拖拽动作中的任意一种。例如,用户通过点击候选虚拟车位来调整停泊方向,或者拖拽候选虚拟车位来调整停泊方向,本申请实施例对 此不做限定。
例如,电子设备显示的第二用户界面如图11中左图所示,第二用户界面包括三个车辆,分别为目标车辆、参考车辆以及其他已停泊车辆,目标车辆的车尾附近显示有三角形图标,参考车辆的周围显示有“L”型指示标识,而且第二用户界面中还包括参考车辆的停泊方向。在用户对参考车辆的停泊方向不满意的情况下,将参考车辆的停泊方向作为参考停泊方向,用户能够按照图11的左图中箭头的方向,对参考停泊方向进行调整,将调整后的停泊方向确定为目标车辆的停泊方向。之后,按照上述方法确定出多个候选虚拟车位,进而从该多个候选虚拟车位中选择出的目标虚拟车位(带有1的虚拟车辆模型)的停泊方向如图11中右图所示。
在确定出目标车辆的停泊方向之后,电子设备能够按照多种方式确定多个候选虚拟车位。如图12和图13所示,以用户对参考车辆的停泊方向满意为例,图12的左图中包括三个车辆,分别为目标车辆、参考车辆以及其他已停泊车辆,目标车辆的车尾附近显示有三角形图标,参考车辆的周围显示有“L”型指示标识,从可泊车空间中靠近参考车辆的一侧开始并列排布5个虚拟车辆模型,5个虚拟车辆模型与5个候选虚拟车位一一对应,从而得到图12的右图所示。或者,在图13中,从可泊车空间中靠近参考车辆的一侧开始并列排布5个候选虚拟车位(黑色矩形框)。
在本申请实施例中,在确定出的参考车辆不同的情况下,最终确定出的目标虚拟车位也会不同。比如,如图14所示,在确定出的参考车辆为右上角的车辆(车辆周围显示有“L”型指示标识)时,最终确定出的目标虚拟车位(黑色矩形框)如图14所示。又比如,如图15所示,在确定出的参考车辆为右下角的车辆(车辆周围显示有“L”型指示标识)时,最终确定出的目标虚拟车位(黑色矩形框)如图15所示。
可选地,在目标车辆周围存在多个已停泊车辆的情况下,该多个已停泊车辆可能分布于目标车辆的行驶道路的一侧,也可能分布于目标车辆的行驶道路的两侧。在该多个已停泊车辆分布于目标车辆的行驶道路的两侧时,目标虚拟车辆是基于目标车辆的行驶道路两侧的参考车辆的停泊信息确定的。也即是,可以按照上述方法,从目标车辆的行驶道路两侧的已停泊车辆中分别确定一个参考车辆。这样,按照上述方法从该行驶道路的两侧分别确定出一个可泊车空间,进而基于目标车辆的行驶道路两侧的参考车辆,在该行驶道路两侧的可泊车空间中分别确定出多个候选虚拟车位,从而确定按照上述方法目标虚拟车位。
例如,如图16和图17所示,图16的左图中位于中间的车辆为目标车辆,其他两个车辆为参考车辆(车辆周围显示有“L”型指示标识),这两个参考车辆分别为目标车辆的行驶道路左侧的参考车辆,以及目标车辆的行驶道路右侧的参考车辆。从行驶道路右侧的可泊车空间中靠近行驶道路右侧的参考车辆的一侧开始并列排布2个虚拟车辆模型,这2个虚拟车辆模型与2个候选虚拟车位一一对应,以及从行驶道路左侧的可泊车空间中靠近行驶道路左侧的参考车辆的一侧开始并列排布3个虚拟车辆模型,这3个虚拟车辆模型与3个候选虚拟车位一一对应,这5个虚拟车辆模型如图16的右图所示。或者,在图17中,从行驶道路右侧的可泊车空间中靠近行驶道路右侧的参考车辆的一侧开始并列排布2个候选虚拟车位(黑色矩形框),从行驶道路左侧的可泊车空间中靠近行驶道路左侧的参考车辆的一侧开始并列排布3个候选虚拟车位。
可选地,不管是通过多个虚拟车辆模型来表征多个候选虚拟车位,还是通过黑色矩形框或其他显示方式来表征多个候选虚拟车位。在确定出多个候选虚拟车位之后,还可以显示该 多个候选虚拟车位对应的车头朝向。比如,如图18所示,在每个候选虚拟车位中显示一个箭头,该箭头用于指示车头朝向。
第二种实现方式,基于参考车辆的停泊方向以及可泊车空间,确定候选虚拟车位。如果候选虚拟车位的数量为一个,则直接将该候选虚拟车位作为目标虚拟车位。如果该候选虚拟车位的数量为多个,则从多个候选虚拟车位中选择一个候选虚拟车位作为目标虚拟车位。
其中,基于参考车辆的停泊方向以及可泊车空间,确定候选虚拟车位的方式参考上述第一种实现方式,此处不再赘述。
另外,从多个候选虚拟车位中选择一个候选虚拟车位作为目标虚拟车位的实现方式包括多种。例如,从该多个候选虚拟车位中选择一个候选虚拟车位作为目标虚拟车位推荐给用户。或者,将该多个候选虚拟车位推荐给用户,由用户选择一个候选虚拟车位作为目标虚拟车位。其中,从该多个候选虚拟车位中选择一个候选虚拟车位作为目标虚拟车位推荐给用户时,可以结合目标车辆当前的位置与候选虚拟车位之间的距离,从该多个候选虚拟车位中选择距离最近的一个候选虚拟车位,作为目标虚拟车位推荐给用户,当然还可以通过其他的方式选择一个候选虚拟车位推荐给用户。
作为一种示例,从多个候选虚拟车位中选择一个候选虚拟车位之后,显示第五用户界面,第五用户界面包括推荐的虚拟车位。响应于用户的第四操作,显示第三用户界面,第四操作用于指示用户确认将推荐的虚拟车位作为目标虚拟车位。
作为另一种示例,从多个候选虚拟车位中选择一个候选虚拟车位之后,显示第五用户界面,第五用户界面包括推荐的虚拟车位。响应于用户的第五操作,显示第四用户界面,第四用户界面包括该多个候选虚拟车位,第五操作用于指示用户对推荐的虚拟车位的停泊位置不满意。响应于用户的第二操作,显示第三用户界面,第二操作用于从该多个候选虚拟车位中选择目标虚拟车位。
也即是,从多个候选虚拟车位中选择一个候选虚拟车位,作为目标虚拟车位推荐给用户时,用户可能会直接接受推荐的虚拟车位,即将推荐的虚拟车位作为目标虚拟车位。当然,用户可能对推荐的虚拟车位的停泊位置不满意,此时,需要将该多个候选虚拟车位全部推荐给用户,由用户选择一个候选虚拟车位,作为目标虚拟车位。
需要说明的是,用户的第四操作包括用户在第五用户界面上的触摸、敲击动作中的任意一种。例如,第五用户界面上包括“确定”按钮,用户通过触摸“确定”按钮来将推荐的虚拟车位确定为目标虚拟车位,本申请实施例对此不做限定。用户的第五操作包括用户在第五用户界面上的触摸、敲击动作中的任意一种。例如,第五用户界面上包括“取消”按钮,用户通过触摸“取消”按钮来指示对当前推荐的虚拟车位不满意,本申请实施例对此不做限定。
可选地,在用户对推荐的虚拟车位不满意的情况下,电子设备显示的第四用户界面中还包括用于指示推荐的虚拟车位的图标。
例如,电子设备从该多个候选虚拟车位中选择一个候选虚拟车位作为目标虚拟车位推荐给用户之后,如果用户对推荐的虚拟车位不满意,此时,电子设备可以显示如图19的左图所示的第四用户界面,且第四用户界面中通过五角星的图标对推荐的虚拟车位进行了标记。此时,如图19的右图所示,用户可以从第四用户界面中重新选择一个候选虚拟车位作为目标虚拟车位。
第三种实现方式,第二用户界面中还包括可泊车空间,响应于用户的第六操作,第六操 作用于从可泊车空间中选择一个位置作为目标车辆的停泊位置。基于参考车辆的停泊方向和目标车辆的停泊位置,确定目标虚拟车位。
也即是,用户在可泊车空间中选择一个位置作为目标车辆的停泊位置,然后基于参考车辆的停泊方向和目标车辆的停泊位置,确定目标虚拟车位。
用户的第六操作包括用户在第二用户界面上的触摸、敲击、拖拽动作中的任意一种。例如,用户在第二用户界面的可泊车空间中触摸来选择目标车辆的停泊位置,或者用户在第二用户界面的可泊车空间中敲击来选择目标车辆的停泊位置,又或者用户在第二用户界面中通过拖拽其他标志物来选择目标车辆的停泊位置,该标志物为参考车辆、或者其他车辆等。
基于参考车辆的停泊方向和目标车辆的停泊位置,确定目标虚拟车位时,可直接将参考车辆的停泊方向作为目标车辆的停泊方向,在可泊车空间的目标车辆的停泊位置处确定目标虚拟车位,以使目标虚拟车位所指示的停泊方向为目标车辆的停泊方向。当然,用户可能对参考车辆的停泊方向不满意,所以电子设备显示第二用户界面,第二用户界面包括参考车辆,第二用户界面还能够指示参考车辆的停泊方向,将参考车辆的停泊方向作为参考停泊方向。此时,响应于用户的第三操作,第三操作用于对参考停泊方向进行调整,将调整后的停泊方向确定为目标车辆的停泊方向。基于目标车辆的停泊方向,在可泊车空间的目标车辆的停泊位置处确定目标虚拟车位,以使目标虚拟车位所指示的停泊方向为目标车辆的停泊方向。
例如,如图20所示的第二用户界面,图20中的阴影区域代表可泊车空间,在图20的左图中,用户先通过敲击第二用户界面中的参考车辆,然后再敲击可泊车空间,从而电子设备能够在图20的右图中显示目标车辆的停泊位置(图20中的黑色矩形框)。或者,如图21所示的第二用户界面,图21中的阴影区域代表可泊车空间,在图21的左图中,用户将参考车辆拖拽至可泊车空间中,从而电子设备能够在图21的右图中显示目标车辆的停泊位置(图21中的黑色矩形框)。
上述内容是在目标车辆的周围存在已停泊车辆的情况下,确定目标虚拟车位的实现过程。在某些情况下,目标车辆的周围可能不存在已停泊车辆,此时,电子设备对可泊车空间进行三维空间测量,以确定可泊车空间的深度。之后,基于可泊车空间的深度与目标车辆的车身长度之间的比值,确定目标车辆的停泊方向。并在可泊车空间中确定目标车辆的停泊位置,进而确定目标虚拟车位。
作为一种示例,若可泊车空间的深度与目标车辆的车身长度之间的比值大于第一比例阈值,则确定目标车辆的车身方向相对于道路基线为竖直方向。若可泊车空间的深度与目标车辆的车身长度之间的比值小于第二比例阈值,则确定目标车辆的车身方向相对于道路基线为水平方向。若可泊车空间的深度与目标车辆车身长度之间的比值小于第一比例阈值但是大于第二比例阈值,则确定目标车辆的车身方向相对于道路基线为斜向,其斜向角度为可泊车空间的深度与目标车辆车身长度的反正弦值。
需要说明的是,第一比例阈值和第二比例阈值为事先设置的,而且能够按照不同的需求来调整。例如,第一比例阈值为0.9,第二比例阈值为0.7。
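With the example thresholds above, the ratio rule can be sketched directly. The returned angles (90 degrees for vertical, 0 for horizontal) are illustrative conventions relative to the road baseline, not values fixed by this application.

```python
import math

def body_direction(space_depth, car_length, hi=0.9, lo=0.7):
    """Decide the target body direction from the depth-to-length ratio.

    Returns a ("vertical" | "horizontal" | "diagonal", angle_deg) pair,
    using the example thresholds 0.9 and 0.7 from the text."""
    r = space_depth / car_length
    if r > hi:
        return "vertical", 90.0
    if r < lo:
        return "horizontal", 0.0
    # Diagonal: angle whose sine is depth / body length (clamped for safety).
    return "diagonal", math.degrees(math.asin(min(r, 1.0)))
```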
In addition, when determining the target vehicle's parking position in the parkable space, the user may, as in the method above, select a position in the space as the parking position; or the electronic device may, as above, determine multiple candidate slots in the space according to the target vehicle's parking direction and let the user select one as the target slot. The way the user selects a position in the parkable space, and the way the user selects one candidate from several, are as described above and are not repeated here.
In the embodiments of this application, one of the parked vehicles around the target vehicle is selected as the reference vehicle, and the target virtual parking slot is determined from the reference vehicle's parking direction. This ensures that, after the target vehicle parks automatically into the slot, it is aligned consistently with the selected reference vehicle, improving parking tidiness and convenience. Moreover, detecting the attribute information of the first vehicle's multiple key points and key lines yields the first vehicle's parking direction precisely. In addition, after the ground segmentation model segments the ground region, feature clustering of the region's pixels determines multiple regions, and their semantic classes allow uncommon parking regions to be recognized, improving parking-region recognition.
FIG. 22 is a flowchart of a display method for parking assistance according to an embodiment of this application; the method may be applied to the electronic device above. Referring to FIG. 22, the method includes the following steps.
Step 2201: Display a first user interface for displaying environment information around a target vehicle, the target vehicle being a vehicle to be parked, the environment information around the target vehicle including parking information of one or more parked vehicles.
In some embodiments, the first user interface includes one or more operation identifiers corresponding one-to-one to the one or more parked vehicles.
The environment information displayed in the first user interface is image information acquired by a camera or radar; alternatively, it is virtual environment information generated from information acquired by sensors.
For the details of step 2201, see the description of step 202 above; they are not repeated here.
Step 2202: In response to a first operation by the user, display a second user interface including a reference vehicle, the reference vehicle being one of the one or more parked vehicles.
The user's first operation includes any one of a touch, tap, or slide action on the first user interface.
In some embodiments, the second user interface further includes a second vehicle, which is any parked vehicle other than the reference vehicle, and the reference vehicle is displayed differently from the second vehicle: a different display color, a different outline, or a different background texture, for example.
It should be noted that the second user interface also includes an indicator mark for indicating the reference vehicle.
For the details of step 2202, see the description of step 202 above; they are not repeated here.
Step 2203: Display a third user interface including a target virtual parking slot for indicating the target vehicle's parking position and parking direction.
In some embodiments, a fourth user interface including multiple candidate virtual parking slots is displayed; in response to a second operation by the user, the third user interface is displayed, the target slot being one of the candidates.
In some embodiments, the third user interface further includes an icon for indicating the target vehicle.
In some embodiments, the third user interface also displays the parkable space, and the target virtual parking slot lies within it.
For the details of step 2203, see the description of step 203 above; they are not repeated here.
In the embodiments of this application, the environment information around the target vehicle is displayed so that the user determines the reference vehicle by operating a virtual vehicle model or an operation identifier, and the target virtual parking slot is determined from the reference vehicle's parking information. This ensures that, after the target vehicle parks automatically into the slot, it is aligned consistently with the reference vehicle. In addition, when multiple candidate slots are generated they can all be displayed, letting the user pick a satisfactory target slot and meeting individual needs.
FIG. 23 is a schematic structural diagram of an apparatus for determining a virtual parking slot according to an embodiment of this application. The apparatus may be implemented as part or all of an electronic device by software, hardware, or a combination of the two, and the electronic device may be the one shown in FIG. 1. Referring to FIG. 23, the apparatus includes an environment information acquisition module 2301, a reference vehicle determination module 2302, and a virtual parking slot determination module 2303.
The environment information acquisition module 2301 is configured to acquire environment information around a target vehicle, the target vehicle being a vehicle to be parked, the environment information including parking information of one or more parked vehicles;
the reference vehicle determination module 2302 is configured to determine a reference vehicle based on the parking information of the one or more parked vehicles, the reference vehicle being one of them;
the virtual parking slot determination module 2303 is configured to determine a target virtual parking slot based on the reference vehicle's parking information, the target slot indicating the target vehicle's parking position and parking direction.
Optionally, the reference vehicle determination module 2302 includes:
a first interface display submodule, configured to display a first user interface including the parking positions and parking directions of the one or more parked vehicles, determined from their parking information;
a second interface display submodule, configured to display, in response to a first operation by the user, a second user interface including the reference vehicle, the first operation instructing selection of the reference vehicle from the one or more parked vehicles.
Optionally, the reference vehicle determination module 2302 includes:
a parking information determination submodule, configured to determine the parking positions and parking directions of the one or more parked vehicles based on their parking information;
a reference vehicle determination submodule, configured to determine the reference vehicle from those positions and directions using a preset model.
Optionally, the virtual parking slot determination module 2303 includes:
a parking direction determination submodule, configured to determine the reference vehicle's parking direction based on its parking information;
a parking space determination submodule, configured to determine a parkable space based on the parking information of the one or more parked vehicles;
a virtual parking slot determination submodule, configured to determine the target virtual parking slot based on the reference vehicle's parking direction and the parkable space.
Optionally, the virtual parking slot determination submodule is specifically configured to:
determine multiple candidate virtual parking slots based on the reference vehicle's parking direction and the parkable space;
determine the target slot from the candidates in response to a second operation by the user.
Optionally, for a first vehicle among the one or more parked vehicles, the first vehicle being any of them, the parking information determination submodule is specifically configured to:
input the first vehicle's parking information into a key information detection model to determine attribute information of multiple key points and multiple key lines of the first vehicle;
input the attribute information of the first vehicle's key points and key lines into a pose estimation model to determine the first vehicle's parking direction.
Optionally, the attribute information of a key point includes at least one of key-point position, key-point category, and key-point visibility, the visibility indicating whether the corresponding key point is occluded; the attribute information of a key line includes at least one of key-line center position, key-line visibility, key-line inclination, and key-line length, the visibility indicating whether the corresponding key line is occluded.
Optionally, the environment information around the target vehicle includes at least one of visual data and radar data.
Optionally, the target vehicle's parking direction includes its heading and body direction, the body direction being the direction of the target vehicle's body relative to the reference vehicle's body.
Optionally, the target vehicle's body direction includes being parallel, perpendicular, or inclined to the reference vehicle's body.
Optionally, the environment information includes parking information of multiple parked vehicles distributed on both sides of the road on which the target vehicle travels, and the target slot is determined based on the parking information of reference vehicles on both sides of the road.
In the embodiments of this application, one of the parked vehicles around the target vehicle is selected as the reference vehicle, and the target virtual parking slot is determined from the reference vehicle's parking direction. This ensures that, after the target vehicle parks automatically into the slot, it is aligned consistently with the selected reference vehicle, improving parking tidiness and convenience. Moreover, detecting the attribute information of the first vehicle's multiple key points and key lines yields the first vehicle's parking direction precisely. In addition, after the ground segmentation model segments the ground region, feature clustering of the region's pixels determines multiple regions, and their semantic classes allow uncommon parking regions to be recognized, improving parking-region recognition.
It should be noted that, when the apparatus for determining a virtual parking slot provided in the above embodiment determines a virtual slot, the division into the functional modules above is only an example; in practice the functions may be allocated to different functional modules as needed, that is, the internal structure of the apparatus may be divided into different modules to complete all or part of the functions described above. In addition, the apparatus embodiment above belongs to the same concept as the method embodiment for determining a virtual parking slot; see the method embodiment for its specific implementation, which is not repeated here.
FIG. 24 is a schematic structural diagram of a display apparatus for parking assistance according to an embodiment of this application. The apparatus may be implemented as part or all of an electronic device by software, hardware, or a combination of the two, and the electronic device may be the one shown in FIG. 1. Referring to FIG. 24, the apparatus includes a first interface display module 2401, a second interface display module 2402, and a third interface display module 2403.
The first interface display module 2401 is configured to display a first user interface for displaying environment information around a target vehicle, the target vehicle being a vehicle to be parked, the environment information including parking information of one or more parked vehicles;
the second interface display module 2402 is configured to display, in response to a first operation by the user, a second user interface including a reference vehicle, the reference vehicle being one of the one or more parked vehicles;
the third interface display module 2403 is configured to display a third user interface including a target virtual parking slot, the target slot indicating the target vehicle's parking position and parking direction.
Optionally, the second user interface further includes a second vehicle, the second vehicle being any parked vehicle other than the reference vehicle;
the reference vehicle is displayed in a manner different from that of the second vehicle.
Optionally, the second user interface further includes an indicator mark for indicating the reference vehicle.
Optionally, the third interface display module is specifically configured to:
display a fourth user interface including multiple candidate virtual parking slots;
display the third user interface in response to a second operation by the user, the target slot being one of the candidates.
Optionally, the third user interface also displays a parkable space, and the target slot lies within it.
Optionally, the first user interface includes one or more operation identifiers corresponding one-to-one to the one or more parked vehicles.
Optionally, the environment information displayed in the first user interface is image information acquired by a camera or radar.
Optionally, the environment information displayed in the first user interface is virtual environment information generated from information acquired by sensors.
Optionally, the third user interface further includes an icon for indicating the target vehicle.
Optionally, the user's first operation includes any one of a touch, tap, or slide action on the first user interface.
In the embodiments of this application, the environment information around the target vehicle is displayed so that the user determines the reference vehicle by operating a virtual vehicle model or an operation identifier, and the target virtual parking slot is determined from the reference vehicle's parking information. This ensures that, after the target vehicle parks automatically into the slot, it is aligned consistently with the reference vehicle. In addition, when multiple candidate slots are generated they can all be displayed, letting the user pick a satisfactory target slot and meeting individual needs.
Refer to FIG. 25, a schematic structural diagram of an electronic device according to an embodiment of this application; the electronic device may be the electronic device 101 shown in FIG. 1. The electronic device includes at least one processor 2501, a communication bus 2502, a memory 2503, and at least one communication interface 2504.
The processor 2501 may be a general-purpose central processing unit (CPU), a network processor (NP), a microprocessor, or one or more integrated circuits for implementing the solutions of this application, for example an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The communication bus 2502 is used to transfer information between the components above. The bus may be divided into an address bus, a data bus, a control bus, and so on; for ease of representation only one thick line is drawn in the figure, which does not mean there is only one bus or one type of bus.
The memory 2503 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable read-only memory (EEPROM), an optical disc (including a compact disc read-only memory (CD-ROM), a compact disc, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory 2503 may exist independently and be connected to the processor 2501 through the communication bus 2502, or may be integrated with the processor 2501.
The communication interface 2504 uses any transceiver-like apparatus to communicate with other devices or communication networks. It includes a wired communication interface and may also include a wireless communication interface. The wired interface may, for example, be an Ethernet interface, which may be optical, electrical, or a combination of the two; the wireless interface may be a wireless local area network (WLAN) interface, a cellular network communication interface, or a combination thereof.
In a specific implementation, as an embodiment, the processor 2501 may include one or more CPUs, such as CPU0 and CPU1 shown in FIG. 25.
In a specific implementation, as an embodiment, the electronic device may include multiple processors, such as the processor 2501 and the processor 2505 shown in FIG. 25. Each of these processors may be single-core or multi-core; a processor here may refer to one or more devices, circuits, and/or processing cores for processing data (such as computer program instructions).
In a specific implementation, as an embodiment, the electronic device may further include an output device 2506 and an input device 2507. The output device 2506 communicates with the processor 2501 and can display information in various ways; for example, it may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode-ray tube (CRT) display device, or a projector. The input device 2507 communicates with the processor 2501 and can receive user input in various ways; for example, it may be a mouse, a keyboard, a touchscreen device, or a sensing device.
In some embodiments, the memory 2503 is used to store program code 2510 for executing the solutions of this application, and the processor 2501 can execute the program code 2510 stored in the memory 2503. The program code 2510 may include one or more software modules, and the electronic device can implement the methods provided in the embodiments above through the processor 2501 and the program code 2510 in the memory 2503.
Refer to FIG. 26, a schematic structural diagram of a terminal device according to an embodiment of this application; the terminal device may be the electronic device above. The terminal device includes a sensor unit 1110, a computing unit 1120, a storage unit 1140, and an interaction unit 1130.
The sensor unit 1110 typically includes vision sensors (such as cameras), depth sensors, an IMU, laser sensors, and the like;
the computing unit 1120 typically includes a CPU, a GPU, caches, registers, and the like, and is mainly used to run the operating system;
the storage unit 1140 mainly includes memory and external storage, and is mainly used for reading and writing local and temporary user data;
the interaction unit 1130 mainly includes a display screen, a touchpad, speakers, microphones, and the like, and is mainly used to interact with the user, obtain user input, and present the results of the algorithms, for example displaying the user interfaces described above or casting them to another display device.
For ease of understanding, the structure of a terminal device 100 provided in the embodiments of this application is illustrated below. Refer to FIG. 27, a schematic structural diagram of a terminal device according to an embodiment of this application.
As shown in FIG. 27, the terminal device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, buttons 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, and the like. The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, and the like.
It can be understood that the structure illustrated in the embodiments of this application does not constitute a specific limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may include more or fewer components than shown, combine some components, split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units; for example, it may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), among others. Different processing units may be independent devices or may be integrated into one or more processors. The processor 110 may execute a computer program to implement any of the methods in the embodiments of this application.
The controller may be the nerve center and command center of the terminal device 100. It can generate operation control signals according to instruction opcodes and timing signals, completing the control of instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache, which can hold instructions or data the processor 110 has just used or uses cyclically. If the processor 110 needs the instruction or data again, it can be called directly from this memory, avoiding repeated accesses, reducing the processor's waiting time, and thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces, which may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, among others.
It can be understood that the interface connection relationships among the modules illustrated in the embodiments of this application are only schematic and do not constitute a structural limitation on the terminal device 100. In other embodiments of this application, the terminal device 100 may use interface connection methods different from those above, or a combination of several.
The charging management module 140 is used to receive charging input from a charger, which may be a wireless or a wired charger. In some wired-charging embodiments, the charging management module 140 may receive the wired charger's charging input through the USB interface 130.
The power management module 141 is used to connect the battery 142 and the charging management module 140 to the processor 110. It receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, the wireless communication module 160, and so on.
The wireless communication function of the terminal device 100 can be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
In some feasible implementations, the terminal device 100 can use the wireless communication function to communicate with other devices. For example, it can communicate with a second electronic device, establish a screen-casting connection with it, and output casting data to it; the casting data output by the terminal device 100 may be audio and video data.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the terminal device 100 can be used to cover one or more communication frequency bands, and different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna of the wireless local area network. In other embodiments, an antenna may be used in combination with a tuning switch.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied on the terminal device 100. It may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and so on. It can receive electromagnetic waves through the antenna 1, filter and amplify the received waves, and transmit them to the modem processor for demodulation; it can also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated out through the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 150 may be provided in the processor 110; in some embodiments, at least some of its functional modules may be provided in the same device as at least some modules of the processor 110.
The modem processor may include a modulator and a demodulator. The modulator modulates the low-frequency baseband signal to be sent into a medium/high-frequency signal; the demodulator demodulates the received electromagnetic wave signal into a low-frequency baseband signal and then transmits it to the baseband processor for processing. After being processed by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or video through the display screen 194. In some embodiments, the modem processor may be an independent device; in other embodiments, it may be independent of the processor 110 and provided in the same device as the mobile communication module 150 or other functional modules.
The wireless communication module 160 can provide solutions for wireless communication applied on the terminal device 100 including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on. It may be one or more devices integrating at least one communication processing module. It receives electromagnetic waves via the antenna 2, frequency-modulates and filters the signals, and sends the processed signals to the processor 110; it can also receive signals to be sent from the processor 110, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated out through the antenna 2.
In some embodiments, the antenna 1 of the terminal device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the terminal device 100 can communicate with networks and other devices through wireless communication technologies. The wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
The terminal device 100 implements the display function through the GPU, the display screen 194, the application processor, and so on. The GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor; it performs mathematical and geometric computation for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 194 is used to display images, video, and the like. It includes a display panel, which may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), Mini-LED, Micro-LED, Micro-OLED, quantum dot light-emitting diodes (QLED), and so on. In some embodiments, the terminal device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some feasible implementations, the display screen 194 can be used to display the interfaces output by the system of the terminal device 100.
The terminal device 100 can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display screen 194, the application processor, and so on.
The ISP is used to process data fed back by the camera 193. For example, when a photo is taken, the shutter opens, light is transmitted through the lens onto the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP, which converts it into an image visible to the naked eye. The ISP can also optimize the image's noise, brightness, and skin tone through algorithms, and optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. An object's optical image is generated through the lens and projected onto the photosensitive element, which may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP to be converted into a digital image signal; the ISP outputs the digital image signal to the DSP for processing, which converts it into a standard image signal in RGB, YUV, or another format. In some embodiments, the terminal device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals; besides digital image signals, it can process other digital signals.
The video codec is used to compress or decompress digital video. The terminal device 100 can support one or more video codecs, so it can play or record video in multiple encoding formats, for example moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, and MPEG-4.
The NPU is a neural-network (NN) computing processor; by drawing on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain, it processes input information quickly and can also keep self-learning. Applications such as intelligent cognition of the terminal device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the terminal device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and video on the external memory card.
The internal memory 121 can be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the terminal device 100. The internal memory 121 may include a program storage area and a data storage area: the program storage area can store the operating system and the application programs required by at least one function (for example the methods provided in the embodiments of this application), and the data storage area can store data created during use of the terminal device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The terminal device 100 can implement audio functions, such as music playing and recording, through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on. In some feasible implementations, the audio module 170 can be used to play the sound corresponding to a video; for example, when the display screen 194 displays a video playback picture, the audio module 170 outputs the video's sound.
The audio module 170 is used to convert digital audio information into an analog audio signal for output and to convert an analog audio input into a digital audio signal.
The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals.
The receiver 170B, also called an "earpiece", is used to convert audio electrical signals into sound signals.
The microphone 170C, also called a "mic" or "mouthpiece", is used to convert sound signals into electrical signals.
The headset jack 170D is used to connect wired headsets; it may be the USB interface 130, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
The pressure sensor 180A is used to sense pressure signals and can convert them into electrical signals; in some embodiments it may be provided on the display screen 194. The gyroscope sensor 180B can be used to determine the motion attitude of the terminal device 100. The barometric pressure sensor 180C is used to measure air pressure.
The acceleration sensor 180E can detect the magnitude of the terminal device 100's acceleration in various directions (including three or six axes); when the terminal device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the terminal device's posture and applied to landscape/portrait switching, pedometers, and the like.
The distance sensor 180F is used to measure distance.
The ambient light sensor 180L is used to sense ambient light brightness.
The fingerprint sensor 180H is used to collect fingerprints.
The temperature sensor 180J is used to detect temperature.
The touch sensor 180K is also called a "touch panel". It may be provided on the display screen 194, and together they form a touchscreen, also called a "touch-controlled screen". The touch sensor 180K is used to detect touch operations acting on or near it and can pass the detected touch operation to the application processor to determine the touch event type; visual output related to the touch operation can be provided through the display screen 194. In other embodiments, the touch sensor 180K may also be provided on the surface of the terminal device 100, at a position different from that of the display screen 194.
The buttons 190 include a power button, volume buttons, and so on; they may be mechanical buttons or touch-type buttons. The terminal device 100 can receive button input and generate key signal input related to the user settings and function control of the terminal device 100.
The motor 191 can generate vibration alerts.
The indicator 192 may be an indicator light, which can be used to indicate the charging state and battery level changes, and can also be used to indicate messages, missed calls, notifications, and the like.
The SIM card interface 195 is used to connect a SIM card.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, they may be implemented in whole or in part in the form of a computer program product, which includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (such as a floppy disk, hard disk, or magnetic tape), an optical medium (such as a digital versatile disc (DVD)), or a semiconductor medium (such as a solid state disk (SSD)). It is worth noting that the computer-readable storage media mentioned in the embodiments of this application may be non-volatile storage media, in other words, non-transitory storage media.
It should be understood that "multiple" herein means two or more. In the description of the embodiments of this application, unless otherwise stated, "/" means "or"; for example, A/B may mean A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A alone, both A and B, or B alone. In addition, to describe the technical solutions of the embodiments of this application clearly, words such as "first" and "second" are used to distinguish identical or similar items whose functions and effects are substantially the same. Those skilled in the art can understand that such words do not limit quantity or execution order, nor do they require the items to be different.
It should be noted that the information (including but not limited to user equipment information and user personal information), data (including but not limited to data for analysis, stored data, and displayed data), and signals involved in the embodiments of this application are all authorized by the user or fully authorized by all parties, and the collection, use, and processing of the relevant data must comply with the relevant laws, regulations, and standards of the relevant countries and regions. For example, the environment information around the target vehicle involved in the embodiments of this application is obtained with full authorization.
The above are embodiments provided by this application and are not intended to limit this application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of this application shall fall within its protection scope.

Claims (45)

  1. A method for determining a virtual parking slot, wherein the method comprises:
    obtaining environment information around a target vehicle, wherein the target vehicle is a vehicle to be parked, and the environment information comprises parking information of one or more parked vehicles;
    determining a reference vehicle based on the parking information of the one or more parked vehicles, wherein the reference vehicle is one of the one or more parked vehicles; and
    determining a target virtual parking slot based on the parking information of the reference vehicle, wherein the target virtual parking slot is used to indicate a parking position and a parking direction of the target vehicle.
  2. The method according to claim 1, wherein the determining a reference vehicle based on the parking information of the one or more parked vehicles comprises:
    displaying a first user interface, wherein the first user interface comprises parking positions and parking directions of the one or more parked vehicles, the parking positions and parking directions being determined according to the parking information of the one or more parked vehicles; and
    displaying a second user interface in response to a first operation by a user, wherein the second user interface comprises the reference vehicle, and the first operation is used to instruct selection of the reference vehicle from the one or more parked vehicles.
  3. The method according to claim 1, wherein the determining a reference vehicle based on the parking information of the one or more parked vehicles comprises:
    determining parking positions and parking directions of the one or more parked vehicles based on the parking information of the one or more parked vehicles; and
    determining the reference vehicle based on the parking positions and parking directions of the one or more parked vehicles by using a preset model.
  4. The method according to any one of claims 1 to 3, wherein the determining a target virtual parking slot based on the parking information of the reference vehicle comprises:
    determining a parking direction of the reference vehicle based on the parking information of the reference vehicle;
    determining a parkable space based on the parking information of the one or more parked vehicles; and
    determining the target virtual parking slot based on the parking direction of the reference vehicle and the parkable space.
  5. The method according to claim 4, wherein the determining the target virtual parking slot based on the parking direction of the reference vehicle and the parkable space comprises:
    determining a plurality of candidate virtual parking slots based on the parking direction of the reference vehicle and the parkable space; and
    determining the target virtual parking slot from the plurality of candidate virtual parking slots in response to a second operation by a user.
  6. The method according to claim 2 or 3, wherein, for a first vehicle among the one or more parked vehicles, the first vehicle being any vehicle among the one or more parked vehicles, determining a parking direction of the first vehicle based on parking information of the first vehicle comprises:
    inputting the parking information of the first vehicle into a key information detection model to determine attribute information of a plurality of key points and a plurality of key lines of the first vehicle; and
    inputting the attribute information of the plurality of key points and the plurality of key lines of the first vehicle into a pose estimation model to determine the parking direction of the first vehicle.
  7. The method according to claim 6, wherein the attribute information of a key point comprises at least one of a key-point position, a key-point category, and key-point visibility, the key-point visibility being used to indicate whether the corresponding key point is occluded; and the attribute information of a key line comprises at least one of a key-line center position, key-line visibility, a key-line inclination, and a key-line length, the key-line visibility being used to indicate whether the corresponding key line is occluded.
  8. The method according to any one of claims 1 to 7, wherein the environment information around the target vehicle comprises at least one of visual data and radar data.
  9. The method according to any one of claims 1 to 8, wherein the parking direction of the target vehicle comprises a heading and a body direction of the target vehicle, the body direction of the target vehicle being the direction of the target vehicle's body relative to the body of the reference vehicle.
  10. The method according to claim 9, wherein the body direction of the target vehicle comprises being parallel, perpendicular, or inclined to the body of the reference vehicle.
  11. The method according to any one of claims 1 to 10, wherein the environment information comprises parking information of a plurality of parked vehicles, the plurality of parked vehicles are distributed on both sides of the road on which the target vehicle travels, and the target virtual parking slot is determined based on the parking information of the reference vehicles on both sides of the road.
  12. A display method for parking assistance, wherein the method comprises:
    displaying a first user interface, wherein the first user interface is used to display environment information around a target vehicle, the target vehicle is a vehicle to be parked, and the environment information comprises parking information of one or more parked vehicles;
    displaying a second user interface in response to a first operation by a user, wherein the second user interface comprises a reference vehicle, the reference vehicle being one of the one or more parked vehicles; and
    displaying a third user interface, wherein the third user interface comprises a target virtual parking slot used to indicate a parking position and a parking direction of the target vehicle.
  13. The method according to claim 12, wherein the second user interface further comprises a second vehicle, the second vehicle being any vehicle among the one or more parked vehicles other than the reference vehicle; and
    the reference vehicle is displayed in a manner different from that of the second vehicle.
  14. The method according to claim 12 or 13, wherein the second user interface further comprises an indicator mark used to indicate the reference vehicle.
  15. The method according to claim 12, wherein the displaying a third user interface comprises:
    displaying a fourth user interface, wherein the fourth user interface comprises a plurality of candidate virtual parking slots; and
    displaying the third user interface in response to a second operation by the user, wherein the target virtual parking slot is one of the plurality of candidate virtual parking slots.
  16. The method according to any one of claims 12 to 15, wherein the third user interface further displays a parkable space, and the target virtual parking slot is located within the parkable space.
  17. The method according to any one of claims 12 to 16, wherein the first user interface comprises one or more operation identifiers corresponding one-to-one to the one or more parked vehicles.
  18. The method according to any one of claims 12 to 17, wherein the environment information displayed in the first user interface is image information acquired by a camera or a radar.
  19. The method according to any one of claims 12 to 17, wherein the environment information displayed in the first user interface is virtual environment information generated from information acquired by sensors.
  20. The method according to any one of claims 12 to 19, wherein the third user interface further comprises an icon used to indicate the target vehicle.
  21. The method according to any one of claims 12 to 20, wherein the first operation by the user comprises any one of a touch, tap, and slide action by the user on the first user interface.
  22. An apparatus for determining a virtual parking slot, wherein the apparatus comprises:
    an environment information acquisition module, configured to obtain environment information around a target vehicle, wherein the target vehicle is a vehicle to be parked and the environment information comprises parking information of one or more parked vehicles;
    a reference vehicle determination module, configured to determine a reference vehicle based on the parking information of the one or more parked vehicles, the reference vehicle being one of the one or more parked vehicles; and
    a virtual parking slot determination module, configured to determine a target virtual parking slot based on the parking information of the reference vehicle, the target virtual parking slot being used to indicate a parking position and a parking direction of the target vehicle.
  23. The apparatus according to claim 22, wherein the reference vehicle determination module comprises:
    a first interface display submodule, configured to display a first user interface comprising parking positions and parking directions of the one or more parked vehicles, the parking positions and parking directions being determined according to the parking information of the one or more parked vehicles; and
    a second interface display submodule, configured to display a second user interface in response to a first operation by a user, the second user interface comprising the reference vehicle and the first operation being used to instruct selection of the reference vehicle from the one or more parked vehicles.
  24. The apparatus according to claim 22, wherein the reference vehicle determination module comprises:
    a parking information determination submodule, configured to determine parking positions and parking directions of the one or more parked vehicles based on the parking information of the one or more parked vehicles; and
    a reference vehicle determination submodule, configured to determine the reference vehicle based on the parking positions and parking directions of the one or more parked vehicles by using a preset model.
  25. The apparatus according to any one of claims 22 to 24, wherein the virtual parking slot determination module comprises:
    a parking direction determination submodule, configured to determine a parking direction of the reference vehicle based on the parking information of the reference vehicle;
    a parking space determination submodule, configured to determine a parkable space based on the parking information of the one or more parked vehicles; and
    a virtual parking slot determination submodule, configured to determine the target virtual parking slot based on the parking direction of the reference vehicle and the parkable space.
  26. The apparatus according to claim 25, wherein the virtual parking slot determination submodule is specifically configured to:
    determine a plurality of candidate virtual parking slots based on the parking direction of the reference vehicle and the parkable space; and
    determine the target virtual parking slot from the plurality of candidate virtual parking slots in response to a second operation by a user.
  27. The apparatus according to claim 23 or 24, wherein, for a first vehicle among the one or more parked vehicles, the first vehicle being any vehicle among the one or more parked vehicles, the parking information determination submodule is specifically configured to:
    input the parking information of the first vehicle into a key information detection model to determine attribute information of a plurality of key points and a plurality of key lines of the first vehicle; and
    input the attribute information of the plurality of key points and the plurality of key lines of the first vehicle into a pose estimation model to determine the parking direction of the first vehicle.
  28. The apparatus according to claim 27, wherein the attribute information of a key point comprises at least one of a key-point position, a key-point category, and key-point visibility, the key-point visibility being used to indicate whether the corresponding key point is occluded; and the attribute information of a key line comprises at least one of a key-line center position, key-line visibility, a key-line inclination, and a key-line length, the key-line visibility being used to indicate whether the corresponding key line is occluded.
  29. The apparatus according to any one of claims 22 to 28, wherein the environment information around the target vehicle comprises at least one of visual data and radar data.
  30. The apparatus according to any one of claims 22 to 29, wherein the parking direction of the target vehicle comprises a heading and a body direction of the target vehicle, the body direction of the target vehicle being the direction of the target vehicle's body relative to the body of the reference vehicle.
  31. The apparatus according to claim 30, wherein the body direction of the target vehicle comprises being parallel, perpendicular, or inclined to the body of the reference vehicle.
  32. The apparatus according to any one of claims 22 to 31, wherein the environment information comprises parking information of a plurality of parked vehicles, the plurality of parked vehicles are distributed on both sides of the road on which the target vehicle travels, and the target virtual parking slot is determined based on the parking information of the reference vehicles on both sides of the road.
  33. A display apparatus for parking assistance, wherein the apparatus comprises:
    a first interface display module, configured to display a first user interface used to display environment information around a target vehicle, wherein the target vehicle is a vehicle to be parked and the environment information comprises parking information of one or more parked vehicles;
    a second interface display module, configured to display a second user interface in response to a first operation by a user, the second user interface comprising a reference vehicle, the reference vehicle being one of the one or more parked vehicles; and
    a third interface display module, configured to display a third user interface comprising a target virtual parking slot, the target virtual parking slot being used to indicate a parking position and a parking direction of the target vehicle.
  34. The apparatus according to claim 33, wherein the second user interface further comprises a second vehicle, the second vehicle being any vehicle among the one or more parked vehicles other than the reference vehicle; and
    the reference vehicle is displayed in a manner different from that of the second vehicle.
  35. The apparatus according to claim 33 or 34, wherein the second user interface further comprises an indicator mark used to indicate the reference vehicle.
  36. The apparatus according to claim 33, wherein the third interface display module is specifically configured to:
    display a fourth user interface comprising a plurality of candidate virtual parking slots; and
    display the third user interface in response to a second operation by the user, the target virtual parking slot being one of the plurality of candidate virtual parking slots.
  37. The apparatus according to any one of claims 33 to 36, wherein the third user interface further displays a parkable space, and the target virtual parking slot is located within the parkable space.
  38. The apparatus according to any one of claims 33 to 37, wherein the first user interface comprises one or more operation identifiers corresponding one-to-one to the one or more parked vehicles.
  39. The apparatus according to any one of claims 33 to 38, wherein the environment information displayed in the first user interface is image information acquired by a camera or a radar.
  40. The apparatus according to any one of claims 33 to 38, wherein the environment information displayed in the first user interface is virtual environment information generated from information acquired by sensors.
  41. The apparatus according to any one of claims 33 to 40, wherein the third user interface further comprises an icon used to indicate the target vehicle.
  42. The apparatus according to any one of claims 33 to 41, wherein the first operation by the user comprises any one of a touch, tap, and slide action by the user on the first user interface.
  43. An electronic device, wherein the device comprises a memory and a processor, the memory being configured to store a computer program and the processor being configured to execute the computer program stored in the memory to implement the steps of the method according to any one of claims 1 to 21.
  44. A computer-readable storage medium, wherein the storage medium stores instructions that, when run on a computer, cause the computer to perform the steps of the method according to any one of claims 1 to 21.
  45. A computer program product, wherein the computer program product comprises instructions that, when run on a computer, cause the computer to perform the steps of the method according to any one of claims 1 to 21.
PCT/CN2022/127434 2021-10-28 2022-10-25 Virtual parking space determination method, display method and apparatus, device, medium, and program WO2023072093A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22885970.8A EP4414965A1 (en) 2021-10-28 2022-10-25 Virtual parking space determination method, display method and apparatus, device, medium, and program
US18/645,689 US20240296737A1 (en) 2021-10-28 2024-04-25 Method for determining virtual parking slot, display method, apparatus, device, medium, and program

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111266615.5A 2021-10-28 Virtual parking space determination method, display method and apparatus, device, medium, and program
CN202111266615.5 2021-10-28

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/645,689 Continuation US20240296737A1 2021-10-28 2024-04-25 Method for determining virtual parking slot, display method, apparatus, device, medium, and program

Publications (1)

Publication Number Publication Date
WO2023072093A1 (zh) 2023-05-04

Family

ID=86115099

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/127434 2022-10-25 Virtual parking space determination method, display method and apparatus, device, medium, and program WO2023072093A1 (zh)

Country Status (4)

Country Link
US (1) US20240296737A1 (zh)
EP (1) EP4414965A1 (zh)
CN (1) CN116052461A (zh)
WO (1) WO2023072093A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612458A * 2023-05-30 2023-08-18 Parking path determination method and system based on deep learning
CN117831340A * 2024-01-11 2024-04-05 Parking space generation method, control apparatus, and computer-readable storage medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116803787B * 2023-07-03 2024-08-16 Method and apparatus for automatic driving and parking, electronic device, and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1737500A * 2004-08-20 2006-02-22 Parking assist device and parking assist method for a vehicle
JP2012030797A * 2011-11-14 2012-02-16 Parking assistance device and parking assistance method
CN108269421A * 2017-12-29 2018-07-10 Method, apparatus, and system for real-time detection of on-street parking spaces based on video perception
CN109138561A * 2017-06-16 2019-01-04 Method and system for automatic parking with a configured virtual parking space
CN109624969A * 2018-12-24 2019-04-16 Automatic parking control method and apparatus, and electric vehicle
CN112666951A * 2020-12-25 2021-04-16 Parking interaction method and apparatus, and vehicle
CN112824183A * 2019-11-20 2021-05-21 Automatic parking interaction method and apparatus
CN112896151A * 2021-02-25 2021-06-04 Method and apparatus for determining a parking mode, electronic device, and storage medium
CN113269998A * 2021-05-19 2021-08-17 Learning method and apparatus based on the parking function in autonomous driving
CN113276844A * 2021-06-25 2021-08-20 Parking-lot parking self-learning method, electronic device, vehicle, and storage medium



Also Published As

Publication number Publication date
EP4414965A1 (en) 2024-08-14
CN116052461A (zh) 2023-05-02
US20240296737A1 (en) 2024-09-05


Legal Events

Code: 121. EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 22885970; country of ref document: EP; kind code of ref document: A1)
Code: WWE. WIPO information: entry into national phase (ref document number: 2022885970; country of ref document: EP)
Code: ENP. Entry into the national phase (ref document number: 2022885970; country of ref document: EP; effective date: 2024-05-10)
Code: NENP. Non-entry into the national phase (ref country code: DE)