CN114639263B - Vehicle parking position identification method and device - Google Patents

Vehicle parking position identification method and device

Info

Publication number
CN114639263B
CN114639263B (application CN202011478089.4A)
Authority
CN
China
Prior art keywords
vehicle
floor
driving scene
distance
recognition result
Prior art date
Legal status
Active
Application number
CN202011478089.4A
Other languages
Chinese (zh)
Other versions
CN114639263A
Inventor
李峰
吴彬彬
钟豪
Current Assignee
SAIC Motor Corp Ltd
Original Assignee
SAIC Motor Corp Ltd
Priority date
Filing date
Publication date
Application filed by SAIC Motor Corp Ltd filed Critical SAIC Motor Corp Ltd
Priority to CN202011478089.4A
Publication of CN114639263A
Application granted
Publication of CN114639263B
Legal status: Active

Classifications

    • G08G1/146: Traffic control systems for road vehicles indicating individual free spaces in parking areas, where the indication depends on the parking areas and the parking area is a limited parking space, e.g. parking garage, restricted space
    • G08G1/148: Management of a network of parking areas
    • G07C5/0841: Registering or indicating the working of vehicles; registering performance data
    • H04W4/021: Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H04W4/025: Services making use of location information using location based information parameters
    • H04W4/027: Services making use of location information using movement velocity, acceleration information
    • H04W4/029: Location-based management or tracking services
    • H04W4/33: Services specially adapted for indoor environments, e.g. buildings
    • H04W4/40: Services specially adapted for vehicles, e.g. vehicle-to-pedestrians [V2P]
    • H04W64/006: Locating users or terminals or network equipment for network management purposes, e.g. mobility management, with additional information processing, e.g. for direction or speed determination

Abstract

The application discloses a vehicle parking position identification method and device. The method comprises the following steps: acquiring a driving scene recognition result of the vehicle, the driving scene recognition result including a first driving scene and a second driving scene; acquiring a floor recognition result of the vehicle; determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result; acquiring longitude and latitude information of the vehicle, including the parking longitude and latitude information of the vehicle; determining trajectory data before the vehicle parks according to the floor where the actual target parking lot entrance is located, the floor recognition result while the vehicle is traveling, and the longitude and latitude information of the vehicle; and determining parking position information of the vehicle, including the parking floor of the vehicle, the parking longitude and latitude information of the vehicle, and the trajectory data before the vehicle parks. With this method, a user can find the vehicle accurately and quickly, and vehicle searching efficiency is improved.

Description

Vehicle parking position identification method and device
Technical Field
The application relates to the field of vehicle positioning, in particular to a method and a device for identifying a vehicle parking position.
Background
As vehicles become more widespread, more and more parking lots adopt a multi-story structure. When searching for a car in a multi-story parking lot, users often forget the parking floor or parking position, which makes the search inefficient and inconvenient.
Accurate parking position localization is therefore particularly important for improving vehicle searching efficiency.
Disclosure of Invention
In order to solve the technical problem, the application provides a vehicle parking position identification method and device, which are used for determining accurate vehicle parking position information and improving vehicle searching efficiency.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
the embodiment of the application provides a vehicle parking position identification method, which comprises the following steps:
acquiring a driving scene recognition result of a vehicle; the driving scene recognition result comprises a first driving scene and a second driving scene;
acquiring a floor recognition result of the vehicle;
determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result;
acquiring longitude and latitude information of the vehicle; the longitude and latitude information of the vehicle comprises parking longitude and latitude information of the vehicle;
determining track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle;
determining parking information of the vehicle; the parking information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks; and the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle.
Optionally, the obtaining a driving scenario recognition result of the vehicle includes:
acquiring steering wheel angle information of a vehicle in a first time period of the vehicle and vehicle speed information in the first time period;
preprocessing the steering wheel angle information of the vehicle in the first time period and the vehicle speed information in the first time period to obtain a sample coordinate point in the first time period;
acquiring a central point of a first driving scene, a central point of a second driving scene and a central point of a third driving scene according to the sample coordinate point of the first time period;
determining sample coordinate points of a second time period from the sample coordinate points of the first time period; the first time period comprises the second time period;
classifying the sample coordinate points of the second time period based on the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene to obtain a classification result; the classification result comprises a target sample coordinate point of a first driving scene and a target sample coordinate point of a second driving scene;
judging whether the number of target sample coordinate points of the first driving scene is larger than the number of target sample coordinate points of the second driving scene;
if so, acquiring a driving scene recognition result of the vehicle; the driving scene recognition result of the vehicle is a first driving scene;
if not, acquiring a driving scene recognition result of the vehicle; and the driving scene recognition result of the vehicle is a second driving scene.
Optionally, the obtaining a center point of a first driving scene, a center point of a second driving scene, and a center point of a third driving scene according to the sample coordinate point of the first time period includes:
determining an initial center point of a first driving scene, an initial center point of a second driving scene and an initial center point of a third driving scene;
classifying the sample coordinate points of the first time period into sample coordinate points of a first driving scene, sample coordinate points of a second driving scene and sample coordinate points of a third driving scene based on an initial center point of the first driving scene, an initial center point of the second driving scene and an initial center point of the third driving scene;
and respectively updating the initial central point of the first driving scene, the initial central point of the second driving scene and the initial central point of the third driving scene according to the sample coordinate point of the first driving scene, the sample coordinate point of the second driving scene and the sample coordinate point of the third driving scene, and acquiring the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene.
Optionally, the determining, according to the driving scenario recognition result and the floor recognition result, a floor where an actual target parking lot entrance of the vehicle is located includes:
obtaining the floor where the vehicle is located when the vehicle runs for the first distance from the floor identification result; the first distance is greater than a first preset distance;
acquiring a driving scene when the vehicle runs a first distance from the driving scene recognition result;
when the driving scene of the vehicle running for the first distance is the first driving scene, determining that the floor where the vehicle runs for the first distance is the floor where the first prediction target parking lot entrance is located;
acquiring a driving scene when the vehicle runs a second distance from the driving scene recognition result;
when the driving scene of the vehicle running for the second distance is the first driving scene, obtaining the floor where the vehicle runs for the second distance from the floor recognition result; the second distance is greater than a second preset distance; the second distance is a distance traveled by the vehicle after the vehicle traveled the first distance;
determining first floor change direction data according to the floor where the vehicle is located when the vehicle runs for the first distance and the floor where the vehicle is located when the vehicle runs for the second distance;
acquiring change direction data of a second floor; the second floor change direction data is change direction data of a floor where the vehicle is located when the vehicle runs within a third distance;
when the data which is the same as the first floor change direction data exists in the second floor change direction data, determining the floor where a second predicted target parking lot entrance is located from the data which is in the same direction as the first floor change direction data, and determining the floor where the second predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located;
and when the data in the same direction as the change direction of the first floor does not exist in the second floor change direction data, determining the floor where the first predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located.
Optionally, the obtaining a floor recognition result of the vehicle includes:
when the driving scene recognition result is a first driving scene, acquiring a first floor recognition result of the vehicle;
when the driving scene recognition result is a second driving scene, acquiring a second floor recognition result of the vehicle; the second floor recognition result is 1;
acquiring a floor recognition result of the vehicle; the floor recognition result includes the first floor recognition result and the second floor recognition result.
Optionally, when the driving scenario identification result is a first driving scenario, acquiring a first floor identification result of the vehicle, including:
when the driving scene recognition result is a first driving scene, calculating a road gradient corresponding to a fourth distance when the vehicle runs for the fourth distance; the fourth distance is greater than the first preset distance;
when the slope corresponding to the fourth distance is larger than a first preset threshold value, recording the height of the vehicle within the fourth distance of the vehicle;
acquiring a fifth distance, wherein the fifth distance is the distance traveled by the vehicle after the vehicle travels a fourth distance;
determining a road gradient at which the vehicle travels a fifth distance by calculating a vehicle height change value per unit time;
when the road gradient of the vehicle running for a fifth distance is smaller than a second preset threshold value and the fifth distance is larger than the second preset distance, recording the height of the vehicle running for the fifth distance;
calculating a height difference between a vehicle height when the vehicle travels a fourth distance and a vehicle height when the vehicle travels a fifth distance;
when the height difference is within a preset range, determining the number of newly added floors according to the height difference;
according to the floor where the vehicle runs for the fourth distance and the number of the newly added floors, the floor where the vehicle runs for the fifth distance is obtained; and the floor where the vehicle is located when the vehicle runs the fifth distance is the first floor identification result when the vehicle runs.
Optionally, when the driving scene recognition results before the vehicle travels the fourth distance are all the second driving scenes, the floor where the vehicle is located when the vehicle travels the fourth distance is 1.
Optionally, the method further includes:
and transmitting the parking information of the vehicle to a data background for storage, and displaying the parking information of the vehicle on a user terminal based on the data background.
The embodiment of the present application further provides a vehicle parking position recognition apparatus, the apparatus includes:
a first acquisition unit configured to acquire a driving scene recognition result of a vehicle; the driving scene recognition result comprises a first driving scene and a second driving scene;
the second acquisition unit is used for acquiring a floor identification result of the vehicle;
the first determining unit is used for determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result;
a third acquisition unit configured to acquire longitude and latitude information of the vehicle; the longitude and latitude information of the vehicle comprises parking longitude and latitude information of the vehicle;
the second determining unit is used for determining the track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle;
a third determination unit for determining parking information of the vehicle; the parking information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks; and the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle.
According to the technical scheme, the method has the following beneficial effects:
the embodiment of the application provides a method and a device for identifying a vehicle parking position, wherein the method comprises the following steps: and acquiring a driving scene recognition result of the vehicle. The driving scene recognition result comprises a first driving scene and a second driving scene. And acquiring a floor identification result of the vehicle. And determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result. And acquiring longitude and latitude information of the vehicle, wherein the longitude and latitude information of the vehicle comprises the parking longitude and latitude information of the vehicle. And determining track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle. Parking position information of the vehicle is determined. The parking position information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks, wherein the parking floor of the vehicle is obtained according to a floor recognition result of the vehicle. By the method, the parking floor of the vehicle, the parking longitude and latitude information of the vehicle and the track data before the vehicle is parked can be obtained. The parking position of the vehicle in the multi-layer parking lot can be accurately known by using the acquired parking floor of the vehicle parking and the parking longitude and latitude information of the vehicle, so that the accurate parking position positioning is realized. By utilizing the obtained track data before the vehicle stops, the user can accurately and quickly find the vehicle, and the vehicle searching efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic diagram of an exemplary application scenario of a vehicle parking position identification method according to an embodiment of the present application;
fig. 2 is a flowchart of a method for identifying a parking position of a vehicle according to an embodiment of the present disclosure;
fig. 3 is a flowchart for acquiring a driving scene recognition result of a vehicle according to an embodiment of the present disclosure;
fig. 4 is a schematic view illustrating a driving scene recognition provided in an embodiment of the present application;
fig. 5 is a flowchart for obtaining a first floor identification result of a vehicle according to an embodiment of the present disclosure;
fig. 6 is a schematic application diagram of obtaining a first floor identification result of a vehicle according to an embodiment of the present application;
fig. 7 is a flowchart of determining a floor of an actual target parking lot entrance of a vehicle according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating a method for acquiring real-time GPS information of a vehicle according to an embodiment of the present disclosure;
fig. 9 is a schematic view of a vehicle parking position recognition apparatus according to an embodiment of the present application.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, embodiments accompanying figures and detailed description thereof are described in further detail below.
In order to facilitate understanding of the vehicle parking position identification method provided in the embodiment of the present application, an application scenario of the embodiment of the present application is described below with reference to fig. 1. Referring to fig. 1, fig. 1 is a schematic view of an exemplary application scenario of a vehicle parking position identification method provided in an embodiment of the present application. The vehicle parking position identification method provided by the embodiment of the application can be applied to the positioning module 101 of the vehicle-mounted terminal.
The positioning module 101 includes at least a driving scenario identification module 1011, a floor identification module 1012, a parking lot identification module 1013, and a fusion positioning module 1014. The driving scene recognition module 1011 is configured to obtain a driving scene recognition result of the vehicle. The driving scene recognition result comprises a first driving scene and a second driving scene. The floor recognition module 1012 is used for acquiring a floor recognition result of the vehicle in real time. The parking lot identification module 1013 determines the floor where the actual target parking lot entrance of the vehicle is located according to the driving scenario identification result and the floor identification result. The fusion positioning module 1014 is configured to obtain longitude and latitude information of the vehicle in real time, where the longitude and latitude information of the vehicle includes parking longitude and latitude information of the vehicle. The positioning module 101 determines trajectory data of the vehicle before parking according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs, and the longitude and latitude information of the vehicle. The positioning module 101 determines parking position information of the vehicle. The parking position information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks, wherein the parking floor of the vehicle is obtained according to a floor recognition result of the vehicle.
The positioning module 101 transmits parking information of the vehicle to the communication module 102 in the in-vehicle terminal. The communication module 102 transmits the parking information of the vehicle to the data background 103, so that the data background 103 stores the parking information of the relevant vehicle. In particular, if the vehicle is in a scene with no network signal or no data uploading capability, the communication module 102 can store the data and upload the data to the data background 103 when the network signal is recovered.
After the user parks, the user can check the parking position information of the vehicle and the track data before parking through the mobile phone terminal 104 to quickly find the vehicle. The parking position information of the vehicle includes floor information, latitude and longitude information, and the like in the multi-floor parking lot.
It can be understood that, besides the parking information of the vehicle, the positioning module 101 also transmits the real-time vehicle position information, the driving scene recognition result of the vehicle, the floor of the parking lot entrance, and other information during the vehicle driving process to the data background 103 in real time. The vehicle position information includes longitude and latitude information and height information of the vehicle.
Those skilled in the art will appreciate that the schematic diagram shown in fig. 1 is merely one example in which embodiments of the present application may be implemented and that the scope of applicability of embodiments of the present application is not limited in any way by this framework.
Referring to fig. 2, fig. 2 is a flowchart of a method for identifying a parking position of a vehicle according to an embodiment of the present application, where the method is applied to a positioning module 101 of a vehicle-mounted terminal. As shown in fig. 2, the method includes S201-S206:
S201: acquiring a driving scene recognition result of a vehicle; the driving scene recognition result includes a first driving scene and a second driving scene.
In the driving process of the vehicle, the driving scene recognition module in the positioning module can recognize the driving scene of the vehicle in real time and generate a driving scene recognition result of the vehicle. The driving scene recognition result comprises a first driving scene and a second driving scene, specifically, the first driving scene is a parking lot scene, and the second driving scene is a non-parking lot scene.
In specific implementation, referring to fig. 3, fig. 3 is a flowchart for acquiring a driving scene recognition result of a vehicle according to an embodiment of the present application. As shown in fig. 3, acquiring a driving scene recognition result of a vehicle includes:
S301: steering wheel angle information of the vehicle in a first time period and vehicle speed information in the first time period are obtained.
The steering wheel angle information of the vehicle in a first time period and the vehicle speed information in the first time period are acquired and cached during the running of the vehicle. As an example, the first time period is the latest time period during which the vehicle is traveling, such as 5 minutes. It is understood that the steering wheel angle information and the vehicle speed information during driving can reflect the current driving scene of the vehicle. For example, when the steering wheel angle of the vehicle is large and the vehicle speed is low, the current driving scene of the vehicle can be considered to be the first driving scene, namely the parking lot scene.
Specifically, the steering wheel angle information of the vehicle is a specific value of the vehicle's steering wheel angle, and the vehicle speed information is a specific value of the vehicle speed. The steering wheel angle of the vehicle is denoted by δ_sw and the vehicle speed by v.
S302: and preprocessing the steering wheel angle information of the vehicle in the first time period and the vehicle speed information in the first time period to obtain a sample coordinate point in the first time period.
After the steering wheel angle information of the vehicle in the first time period and the vehicle speed information in the first time period are cached, preprocessing is carried out on the steering wheel angle information and the vehicle speed information, and a sample coordinate point of the first time period is obtained.
As an example, the preprocessing is to re-express data, that is, a coordinate point made up of a steering wheel angle and a vehicle speed of the vehicle for a first time period is re-expressed by a direction and a length of a vector. The vector is a vector formed by a coordinate point formed by a steering wheel angle of the vehicle and a vehicle speed and a coordinate origin. The direction of the vector is represented by the cosine of the direction angle of the vector, and the length of the vector is represented by the ratio of the length of the vector to the maximum length of the direction of the vector.
The sample coordinate points of the first time period are obtained after re-expressing the coordinate points formed by the steering wheel angle and the vehicle speed of the vehicle in the first time period. It will be appreciated that the sample coordinate points are represented by the direction and length of the vector.
For example, referring to fig. 4, fig. 4 is a schematic view of driving scene recognition. The horizontal axis of the left coordinate system in fig. 4 is the vehicle speed v and the vertical axis is the steering wheel angle δ_sw of the vehicle; the points in this coordinate system are coordinate points formed by the steering wheel angle and the vehicle speed. The ρ_i curve in the left coordinate system of fig. 4 represents the maximum length for a given direction, i.e. the maximum length of a vector in that direction. The coordinate system on the right side of fig. 4 re-expresses these coordinate points: its horizontal axis is the direction of the re-expressed vector, represented by cos(α), where α is the direction angle of the vector; its vertical axis is the re-expressed length of the vector, represented by the ratio ρ_k/ρ_i, where ρ_i is the maximum length in the vector's direction and ρ_k is the length of the vector of the k-th coordinate point in the left coordinate system of fig. 4.
It should be noted that the maximum length in a given direction in fig. 4 may be determined by the maximum lateral acceleration during routine vehicle driving (typically less than 4 m/s²), as shown in the following formula, where i is the steering system angular gear ratio, a_{y_max} is the given maximum lateral acceleration, L is the wheelbase, v_max is the maximum vehicle speed, and δ_{sw_max} is the maximum steering wheel angle of the vehicle:

$$\delta_{sw}(v)=\min\left(\delta_{sw\_max},\ \frac{i\,a_{y\_max}\,L}{v^{2}}\right),\qquad 0<v\le v_{max}$$
it can be understood that the preprocessing is used for unifying data ranges of the steering wheel angle information and the vehicle speed information of the vehicle, so that the subsequent analysis is facilitated and the driving scene recognition result of the vehicle is obtained.
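By way of illustration only, the re-expression described above can be sketched in a few lines of Python. This is a minimal sketch, not the patented implementation: the parameter values and the function names are assumptions, and the lateral-acceleration boundary is the formula reconstructed above.

```python
import math

# Hypothetical vehicle parameters; none of these values come from the patent.
I_RATIO = 16.0        # steering system angular gear ratio i
A_Y_MAX = 4.0         # given maximum lateral acceleration a_y_max, m/s^2
WHEELBASE = 2.7       # wheelbase L, m
V_MAX = 40.0          # maximum vehicle speed v_max, m/s
DELTA_SW_MAX = 8.0    # maximum steering wheel angle delta_sw_max, rad

def max_length(alpha):
    """Maximum vector length rho_i in direction alpha (the rho_i curve in fig. 4).

    Intersects the ray with the three boundaries: v <= v_max,
    delta_sw <= delta_sw_max, and delta_sw <= i * a_y_max * L / v^2.
    """
    c, s = math.cos(alpha), math.sin(alpha)
    candidates = []
    if c > 0:
        candidates.append(V_MAX / c)
    if s > 0:
        candidates.append(DELTA_SW_MAX / s)
    if c > 0 and s > 0:
        # On the ray, r*s = i*a_y_max*L / (r*c)^2  =>  r = (K / (s*c^2))^(1/3)
        k = I_RATIO * A_Y_MAX * WHEELBASE
        candidates.append((k / (s * c * c)) ** (1.0 / 3.0))
    return min(candidates)

def re_express(v, delta_sw):
    """Re-express a (speed, steering wheel angle) sample as (cos(alpha), rho_k / rho_i)."""
    alpha = math.atan2(delta_sw, v)      # direction angle of the sample vector
    rho_k = math.hypot(v, delta_sw)      # length of the sample vector
    return math.cos(alpha), rho_k / max_length(alpha)
```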
S303: and acquiring a central point of a first driving scene, a central point of a second driving scene and a central point of a third driving scene according to the sample coordinate point of the first time period.
After the sample coordinate points of the first time period are obtained, determining a central point of the first driving scene, a central point of the second driving scene and a central point of the third driving scene according to the obtained sample coordinate points of the first time period.
When the method is specifically implemented, the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene are obtained according to the sample coordinate point of the first time period, and the method comprises the following steps:
determining an initial center point of a first driving scene, an initial center point of a second driving scene and an initial center point of a third driving scene;
classifying the sample coordinate points of the first time period into sample coordinate points of the first driving scene, sample coordinate points of the second driving scene and sample coordinate points of the third driving scene based on an initial center point of the first driving scene, an initial center point of the second driving scene and an initial center point of the third driving scene;
and respectively updating the initial central point of the first driving scene, the initial central point of the second driving scene and the initial central point of the third driving scene according to the sample coordinate point of the first driving scene, the sample coordinate point of the second driving scene and the sample coordinate point of the third driving scene, and acquiring the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene.
It should be noted that the initial center point of the first driving scene, the initial center point of the second driving scene, and the initial center point of the third driving scene are randomly selected. Referring to fig. 4, the first driving scenario is a parking lot scenario, denoted by P1. The second driving scenario is a non-parking lot scenario, denoted by P2, which represents driving behavior of the vehicle on a normal road. The third driving scenario is a fuzzy zone, denoted by P3, which may be present both in parking lots and in open road driving. The center point of the first driving scenario, the center point of the second driving scenario, and the center point of the third driving scenario are denoted by C1, C2 and C3, respectively.
Wherein the sample coordinate points of the first time period are classified into the sample coordinate points of the first driving scene, the sample coordinate points of the second driving scene, and the sample coordinate points of the third driving scene based on the initial center point of the first driving scene, the initial center point of the second driving scene, and the initial center point of the third driving scene. Specifically, the sample coordinate points of the first time period are classified by using the euclidean distance. In detail, the Euclidean distance from each sample coordinate point to the initial central point is calculated, and the sample coordinate points are divided into categories corresponding to the classification centers with the minimum distances.
In addition, the initial central point of the first driving scene, the initial central point of the second driving scene and the initial central point of the third driving scene are respectively updated according to the sample coordinate point of the first driving scene, the sample coordinate point of the second driving scene and the sample coordinate point of the third driving scene, and the updated central point of the first driving scene, the updated central point of the second driving scene and the updated central point of the third driving scene are obtained.
Specifically, according to the latest classification of the sample coordinate points in the first time period, the mean value of all the coordinate points in each class is taken as a new classification center, as shown in the following formula:
$$X_{Ci}=\frac{1}{N_i}\sum_{j=1}^{N_i}X_{i\_j},\qquad Y_{Ci}=\frac{1}{N_i}\sum_{j=1}^{N_i}Y_{i\_j},\qquad i=1,2,3$$

wherein (X_{Ci}, Y_{Ci}) is the new classification center of the i-th driving scene, and i denotes the driving scene, taking the values 1, 2 and 3. X_{Ci} is the abscissa and Y_{Ci} the ordinate of the classification center of the i-th driving scene. N_i is the number of sample coordinate points in the i-th driving scene, and j indexes the j-th sample coordinate point in the i-th driving scene. X_{i_j} and Y_{i_j} are the abscissa and ordinate of the j-th sample coordinate point in the i-th driving scene.
It should be noted that after the new classification center of each driving scene is determined, the center point of the first driving scene, the center point of the second driving scene and the center point of the third driving scene are determined. Then, as the vehicle travels, the steering wheel angle information and the vehicle speed information of a new first time period are cached, and the classification centers of the driving scenes must be computed again. That is, the classification centers of all driving scenes are continuously updated in real time, so that they adapt to different drivers and different scenes.
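For illustration only, the classification by Euclidean distance and the mean-based center update described above amount to iterations of a k-means-style procedure with three centers. The following is a minimal sketch under that reading; the function names are hypothetical.

```python
import math

def assign(points, centers):
    """Assign each sample coordinate point to the nearest center (Euclidean distance)."""
    clusters = [[] for _ in centers]
    for x, y in points:
        dists = [math.hypot(x - cx, y - cy) for cx, cy in centers]
        clusters[dists.index(min(dists))].append((x, y))
    return clusters

def update_centers(clusters, centers):
    """Take the mean of all coordinate points in each class as the new classification center."""
    new_centers = []
    for pts, old in zip(clusters, centers):
        if pts:  # keep the old center if a class received no points
            n = len(pts)
            new_centers.append((sum(p[0] for p in pts) / n,
                                sum(p[1] for p in pts) / n))
        else:
            new_centers.append(old)
    return new_centers

# One update pass over the first-time-period samples:
# clusters = assign(sample_points, [c1, c2, c3])
# c1, c2, c3 = update_centers(clusters, [c1, c2, c3])
```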
S304: determining sample coordinate points of a second time period from the sample coordinate points of the first time period; the first time period includes a second time period.
When a central point of a first driving scene, a central point of a second driving scene and a central point of a third driving scene are obtained according to the sample coordinate points of the first time period, determining the sample coordinate points of the second time period from the sample coordinate points of the first time period; the first time period includes a second time period. For example, the second time period is the latest 1 minute within the first time period.
S305: classifying the sample coordinate points of the second time period based on the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene to obtain a classification result; the classification result includes a target sample coordinate point of the first driving scenario and a target sample coordinate point of the second driving scenario.
The sample coordinate points of the second time period are classified based on the obtained central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene, to obtain a classification result. The classification result consists of the target sample coordinate points of the first driving scene and the target sample coordinate points of the second driving scene. Since the third driving scene has no discriminating power, it is not used in the determination.
S306: and judging whether the target sample coordinate point of the first driving scene is larger than the target sample coordinate point of the second driving scene.
And judging whether the obtained target sample coordinate point of the first driving scene in the second time period is larger than the target sample coordinate point of the second driving scene.
S307: and if so, acquiring a driving scene recognition result of the vehicle, wherein the driving scene recognition result of the vehicle is a first driving scene.
When the target sample coordinate points of the first driving scene obtained in the second time period outnumber those of the second driving scene, that is, when the number of target sample coordinate points in the P1 class is larger than the number in the P2 class, the driving scene recognition result of the current vehicle is determined to be the first driving scene, namely the parking lot scene.
S308: if not, acquiring a driving scene recognition result of the vehicle, wherein the driving scene recognition result of the vehicle is a second driving scene.
When the target sample coordinate points of the first driving scene obtained in the second time period do not outnumber those of the second driving scene, that is, when the number of target sample coordinate points in the P1 class is not larger than the number in the P2 class, the driving scene recognition result of the current vehicle is determined to be the second driving scene, namely the non-parking lot scene.
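Putting S305-S308 together, the final decision might be sketched as follows. This is an assumption-laden sketch, reusing assign() from the earlier example, with the convention that centers are ordered (P1, P2, P3) and that the fuzzy class P3 casts no vote.

```python
def recognize_scene(recent_points, centers):
    """Classify the second-time-period samples and vote between P1 and P2.

    centers = (c_parking, c_road, c_fuzzy); P3 is ignored in the vote.
    Returns "first driving scene" (parking lot) or "second driving scene".
    """
    clusters = assign(recent_points, centers)
    n_p1, n_p2 = len(clusters[0]), len(clusters[1])
    return "first driving scene" if n_p1 > n_p2 else "second driving scene"
```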
By the method, the driving scene recognition module in the positioning module can be used for recognizing the driving scene of the vehicle in real time and generating the driving scene recognition result of the vehicle.
S202: and acquiring a floor identification result of the vehicle.
The floor recognition module in the positioning module recognizes the floor of the vehicle in real time and generates a floor recognition result of the vehicle while the driving scene recognition module in the positioning module recognizes the driving scene of the vehicle in real time. It should be noted that the driving scenario recognition module transmits the driving scenario recognition result of the vehicle to the floor recognition module of the vehicle.
In specific implementation, the method for acquiring the floor recognition result of the vehicle comprises the following steps:
when the driving scene recognition result is a first driving scene, acquiring a first floor recognition result of the vehicle;
when the driving scene recognition result is a second driving scene, acquiring a second floor recognition result of the vehicle; the second floor recognition result is 1;
acquiring a floor recognition result of the vehicle; the floor recognition result includes a first floor recognition result and a second floor recognition result.
It is understood that when the driving scenario recognition result is the first driving scenario, which indicates that the vehicle has entered a parking lot, such as a multi-floor parking lot, the first floor recognition result of the vehicle is acquired. When the driving scenario recognition result is the second driving scenario, it indicates that the vehicle has not entered a parking lot, such as a multi-floor parking lot. At this time, the floor recognition result in the floor recognition module is the second floor recognition result, and the second floor recognition result is 1, which indicates that the number of floors where the vehicle is located has not increased.
Specifically, referring to fig. 5, fig. 5 is a flowchart for obtaining a first floor identification result of a vehicle according to an embodiment of the present disclosure. As shown in fig. 5, when the driving scenario recognition result is the first driving scenario, acquiring a first floor recognition result of the vehicle includes:
S501: when the driving scene recognition result is the first driving scene, calculating a road gradient corresponding to a fourth distance when the vehicle runs the fourth distance; the fourth distance is greater than the first preset distance.
When the driving scene recognition result is the first driving scene, namely the parking lot scene, this indicates that the user needs to perform a parking operation. At this moment, when the vehicle travels the fourth distance, the road gradient corresponding to the fourth distance is calculated; the fourth distance is greater than the first preset distance. It should be noted that the fourth distance is a distance in the driving process of the vehicle and is selected according to the actual situation. The first preset distance is also selected according to actual conditions and is not limited here.
The positioning module acquires Global Positioning System (GPS) information of a vehicle in real time. The GPS information includes latitude and longitude information and altitude information of the vehicle. And calculating the road slope corresponding to the fourth distance, specifically, calculating the height difference generated by the vehicle running the fourth distance by using the height information in the GPS information acquired in real time, and taking the height difference as the road slope corresponding to the fourth distance.
S502: and when the slope corresponding to the fourth distance is larger than the first preset threshold value, recording the height of the vehicle running within the fourth distance.
And when the gradient corresponding to the obtained fourth distance is larger than the first preset threshold value, recording the height of the vehicle within the fourth distance of the vehicle. The vehicle height within the fourth distance traveled by the vehicle is the vehicle's height increment within the fourth distance. It should be noted that the first preset threshold is selected according to an actual situation, for example, according to a specific situation of the parking lot, where the first preset threshold is not limited.
It should be noted that when the slope corresponding to the fourth distance is greater than the first preset threshold, this indicates that the ramp starting point is contained within the fourth distance.
And recording the height of the vehicle when the vehicle runs the fourth distance, specifically, the height of the vehicle when the vehicle runs the fourth distance can be obtained through the height information in the real-time recorded GPS information. The position of the vehicle when the vehicle travels to the fourth distance is used as the ramp starting point, and the ramp starting point can be represented by the GPS information at the position.
It should be noted that, when the slope corresponding to the fourth distance is not greater than the first preset threshold, the fourth distance is obtained again, and whether the slope corresponding to the fourth distance is greater than the first preset threshold when the vehicle travels the fourth distance is determined until the starting point of the slope is found.
S503: and acquiring a fifth distance, wherein the fifth distance is the distance traveled by the vehicle after the vehicle travels the fourth distance.
Acquiring a fifth distance, wherein the fifth distance is the distance traveled by the vehicle after the vehicle travels the fourth distance; the fifth distance is greater than the second preset distance. It should be noted that the fifth distance is selected according to actual situations, and the fifth distance is not limited here.
S504: the road gradient at which the vehicle travels the fifth distance is determined by calculating a vehicle height change value per unit time.
After the fifth distance is acquired, the road gradient at which the vehicle travels the fifth distance is determined by calculating the vehicle height change value per unit time.
S505: and when the road gradient when the vehicle runs the fifth distance is smaller than a second preset threshold value and the fifth distance is larger than the second preset distance, recording the height of the vehicle when the vehicle runs the fifth distance.
And when the road gradient when the vehicle travels the fifth distance is smaller than a second preset threshold value, recording the height of the vehicle when the vehicle travels the fifth distance. It should be noted that the second preset threshold is selected according to actual situations, and the second preset threshold is not limited here. For example, the second predetermined threshold is selected to be 1 degree.
And when the road gradient when the vehicle travels the fifth distance is less than a second preset threshold value, indicating that the vehicle enters the horizontal road section, and recording the position of the vehicle when the vehicle travels the fifth distance as a gradient end point. The grade end point may be embodied by GPS information at that location.
It should be noted that, when the gradient of the road when the vehicle travels the fifth distance is not less than the second preset threshold, the fifth distance is obtained again, and it is judged whether the corresponding gradient when the vehicle travels the fifth distance is less than the second preset threshold, until the ramp end point is found.
S506: a height difference between the vehicle height when the vehicle travels the fourth distance and the vehicle height when the vehicle travels the fifth distance is calculated.
When the vehicle height at which the vehicle travels the fourth distance and the vehicle height at which the vehicle travels the fifth distance are acquired, a height difference between the vehicle height at which the vehicle travels the fourth distance and the vehicle height at which the vehicle travels the fifth distance is calculated.
S507: and when the height difference is within the preset range, determining the number of newly added floors according to the height difference.
And when the calculated height difference is within the preset range, determining the number of newly added floors according to the height difference.
It should be noted that the preset range is selected according to actual situations and is not limited here. When the calculated height difference is within the preset range, it is considered that the vehicle has passed through a floor of the parking lot. Otherwise, the vehicle is considered to be on a normal uphill or downhill road, and no floor calculation is performed.
And when the height difference is within the preset range, calculating the number of newly added floors according to the obtained height difference, specifically, dividing the height difference by the height of the floors to obtain a calculation result, and then carrying out rounding operation on the calculation result to obtain the number of newly added floors. Wherein the floor height is selected according to the actual situation, for example 5 meters.
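As a sketch of this step only: the 5-meter floor height is the example value from the text, and the function name is hypothetical.

```python
def new_floor_count(height_diff, floor_height=5.0):
    """Divide the ramp height difference by the floor height and round
    to obtain the number of newly added floors (negative when driving down)."""
    return round(height_diff / floor_height)
```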
S508: acquiring the floor where the vehicle runs for the fifth distance according to the floor where the vehicle runs for the fourth distance and the number of newly added floors; and the floor where the vehicle is located when the vehicle runs the fifth distance is the first floor identification result when the vehicle runs.
And after determining the number of newly added floors when the vehicle travels to the fifth distance, acquiring the floor where the vehicle is located when the vehicle travels to the fourth distance. The floor where the vehicle is located when the vehicle travels the fifth distance can be known according to the floor where the vehicle is located when the vehicle travels the fourth distance and the newly added floor number when the vehicle travels the fifth distance. And the floor where the vehicle is located when the vehicle runs the fifth distance is the first floor identification result when the vehicle runs.
It should be noted that, when the driving scenario recognition results before the vehicle travels the fourth distance are all the second driving scenario, the floor where the vehicle is located when the vehicle travels the fourth distance is 1. That is, when the driving scene recognition result before the vehicle travels the fourth distance is the non-parking lot scene, the floor where the vehicle is located when the vehicle travels the fourth distance is 1, and the floor where the vehicle is located when the vehicle travels the fifth distance is the number of the newly added floors plus 1.
In order to facilitate understanding of the process of obtaining the first floor identification result of the vehicle, the embodiment of the present application further provides an application schematic diagram of obtaining the first floor identification result of the vehicle, see fig. 6. The process of determining the first floor recognition result is divided into 3 steps: Step1 ramp start point identification, Step2 ramp end point identification, and Step3 floor calculation.
The process of determining the first floor identification result is applied to the floor identification module. Latitude and longitude information (X, Y) of the vehicle, height information H of the vehicle, and real-time speed information Vx of the vehicle (equivalent to the vehicle speed v described above) are input. When the floor information is not recognized, the Flag is 0.
Step1 ramp start point identification: during the running of the vehicle, the speed Vx is not 0. When the fourth distance Delta_Dis1 traveled by the vehicle is greater than the first preset distance Dis_thd1, the vehicle height gain Delta_H corresponding to the fourth distance Delta_Dis1 is judged. When Delta_H is greater than the first preset threshold H_thd, the segment contains the ramp start point. The plane coordinates (X1, Y1) and the height H1 of the vehicle at this moment are recorded as the ramp start point, the process Flag is set to 1 to enter Step2, and Delta_Dis1 and Delta_H are reset. Otherwise, Delta_Dis1 and Delta_H are reset, and it is judged whether the new fourth distance contains the ramp start point.
Step2 ramp end point identification: after the ramp start point is found (Flag = 1), with the vehicle speed Vx not 0, the vehicle continues to travel a distance, which is the fifth distance. When the vehicle has traveled the fifth distance, the gradient is judged from the height increment per unit time. When the gradient is smaller than the second preset threshold, for example 1 degree (the value is not limited), and the fifth distance Delta_Dis2 is greater than Dis_thd2, the vehicle has entered a horizontal road section; the height H2 of the vehicle at this moment is recorded as the ramp end point, and the process Flag is set to 2 to enter Step3. Otherwise, Delta_Dis2 is reset and Step2 is repeated until the ramp end point is found.
Step3 floor calculation: when the height difference between the ramp start point and the ramp end point is within flr_min and flr_max, it is considered a floor of the parking lot; otherwise it may be a normal daily ascent or descent, and no floor is calculated. It is understood that the range within flr_min and flr_max is the preset range. When the height difference is within the preset range, division and rounding are performed directly with a floor height of 5 m, and the result is accumulated onto the floor where the vehicle was located when it traveled the fourth distance. At this time, Flag is reset to 0. The first Floor recognition result Floor and the longitude and latitude information (X, Y) of the vehicle are output.
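The three steps above can be summarized as a small state machine. The sketch below is illustrative only: all threshold values are placeholders rather than values from the patent, the class and helper names are hypothetical, and real GPS handling (filtering, dropouts) is omitted.

```python
# Illustrative state machine for Step1-Step3; all constants are assumed placeholders.
DIS_THD1, DIS_THD2 = 30.0, 10.0   # first/second preset distances, m
H_THD = 1.0                       # first preset threshold on height gain, m
SLOPE_THD = 1.0                   # second preset threshold, degrees
FLR_MIN, FLR_MAX = 2.5, 12.0      # accepted ramp height difference range, m
FLOOR_HEIGHT = 5.0                # floor height used for rounding, m

class FloorRecognizer:
    def __init__(self):
        self.flag = 0             # 0: look for ramp start, 1: look for ramp end
        self.floor = 1            # floor stays 1 until a ramp is climbed
        self.dist = 0.0           # distance accumulated in the current phase
        self.h_start = None       # vehicle height at the ramp start point
        self.h_ref = None         # height at the start of the current segment

    def update(self, d_step, height, slope_deg):
        """Feed one sample: distance step (m), GPS height (m), slope per unit time (deg)."""
        if self.h_ref is None:
            self.h_ref = height
        self.dist += d_step
        if self.flag == 0 and self.dist > DIS_THD1:       # Step1: ramp start
            if height - self.h_ref > H_THD:
                self.h_start, self.flag = height, 1
            self.dist, self.h_ref = 0.0, height           # reset and re-test
        elif self.flag == 1 and self.dist > DIS_THD2:     # Step2: ramp end
            if slope_deg < SLOPE_THD:                      # back on a level section
                diff = height - self.h_start               # Step3: floor calculation
                if FLR_MIN <= abs(diff) <= FLR_MAX:
                    self.floor += round(diff / FLOOR_HEIGHT)
                self.flag = 0
            self.dist, self.h_ref = 0.0, height
        return self.floor
```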
S203: and determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result.
The driving scene recognition module of the vehicle obtains the driving scene of the vehicle in real time, and the floor recognition module obtains the floor of the vehicle in real time. And determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result.
Specifically, referring to fig. 7, fig. 7 is a flowchart for determining a floor where an actual target parking lot entrance of a vehicle is located according to an embodiment of the present application. As shown in fig. 7, determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scenario recognition result and the floor recognition result includes:
S701: obtaining the floor where the vehicle is located when the vehicle runs for a first distance from the floor identification result; the first distance is greater than a first preset distance.
And obtaining the floor where the vehicle is located when the vehicle runs for the first distance from the floor identification result obtained in real time. The first distance is greater than a first preset distance. It should be noted that the first distance is a distance during the driving process of the vehicle, and may be selected according to the actual situation, for example, 100 meters.
S702: and acquiring the driving scene when the vehicle runs for the first distance from the driving scene recognition result.
And acquiring the driving scene when the vehicle runs for the first distance from the driving scene identification result acquired in real time.
S703: and when the driving scene when the vehicle runs for the first distance is the first driving scene, determining that the floor where the vehicle is located when it runs for the first distance is the floor where the first predicted target parking lot entrance is located.
S704: and acquiring the driving scene when the vehicle runs the second distance from the driving scene recognition result.
And acquiring a driving scene when the vehicle travels a second distance from the driving scene identification result acquired in real time. It should be noted that the second distance is selected according to the actual situation.
S705: when the driving scene when the vehicle runs for the second distance is the first driving scene, obtaining the floor where the vehicle runs for the second distance from the floor recognition result; the second distance is greater than a second preset distance; the second distance is the distance traveled by the vehicle after the vehicle traveled the first distance.
And when the driving scene when the vehicle runs the second distance is the first driving scene, acquiring the floor where the vehicle is located when it runs the second distance from the floor recognition result. This confirms that the vehicle has remained in the first driving scene, that is, the parking lot scene, and rules out the case where the driving scene recognition result was the first driving scene only because of some transient error.
When the driving scene when the vehicle travels the second distance is not the first driving scene, S701 is re-executed.
S706: and determining the first floor change direction data according to the floor where the vehicle is located when the vehicle runs the first distance and the floor where the vehicle is located when the vehicle runs the second distance.
After the floor where the vehicle is located when the vehicle runs for the first distance and the floor where the vehicle is located when the vehicle runs for the second distance are obtained, the first floor change direction data is determined from the two floor numbers. Specifically, FloorDir = Floor - Floor0, where FloorDir is the first floor change direction data, Floor is the floor where the vehicle is located when the vehicle travels the second distance, and Floor0 is the floor where the vehicle is located when the vehicle travels the first distance. When FloorDir is positive, the first floor change direction is determined to be the increasing direction; when FloorDir is negative, the first floor change direction is determined to be the decreasing direction.
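As a trivial illustration (the variable names follow the text above; the example floor numbers are hypothetical):

```python
# Sketch of the first floor change direction data; FloorDir = Floor - Floor0.
floor0 = 1                      # floor at the first distance (example value)
floor = 3                       # floor at the second distance (example value)
floor_dir = floor - floor0      # positive -> increasing, negative -> decreasing
if floor_dir > 0:
    direction = "increasing"    # the vehicle is climbing floors
elif floor_dir < 0:
    direction = "decreasing"    # the vehicle is descending floors
else:
    direction = "unchanged"
```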
S707: acquiring second floor change direction data; the second floor change direction data is the change direction data of the floor where the vehicle is located while the vehicle travels within a third distance.
And acquiring second floor change direction data, wherein the second floor change direction data is the change direction data of the floor where the vehicle is located when the vehicle runs within the third distance.
Note that the third distance is a distance travelled by the vehicle before the first driving scene is recognized in S703. The second floor change direction data is obtained to check whether the first driving scene recognized in S703 is accurate, that is, to determine whether other floors were traversed before the floor where the first predicted target parking lot entrance is located.
S708: when data in the same direction as the first floor change direction data exists in the second floor change direction data, determining the floor where the second predicted target parking lot entrance is located from that same-direction data, and determining the floor where the second predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located.
When the second floor change direction data contains data in the same direction as the first floor change direction data, the floor where the second predicted target parking lot entrance is located is determined from that same-direction data. As an example, if the first floor change direction data FloorDir is positive and the second floor change direction data also contains a positive FloorDir, the second floor change direction data is considered to contain data in the same direction as the first floor change direction data. At this time, the floor information corresponding to that same-direction FloorDir, namely its Floor and Floor0, is obtained from the second floor change direction data. This Floor0 is determined as the floor where the second predicted target parking lot entrance is located, and the floor where the second predicted target parking lot entrance is located is determined as the floor where the actual target parking lot entrance is located. This case indicates that there is an earlier parking lot entrance than the one first predicted.
S709: and when the data in the same direction as the change direction of the first floor does not exist in the second floor change direction data, determining the floor where the first predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located.
When no data in the same direction as the first floor change direction exists in the second floor change direction data, the floor where the first predicted target parking lot entrance is located is determined as the floor where the actual target parking lot entrance is located.
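The S701-S709 decision flow can be summarized with the following sketch; the function and variable names are illustrative assumptions, not part of the embodiment:

```python
# Hedged sketch of the S701-S709 entrance-floor decision. PARKING marks the
# first driving scene (parking lot scene); all names here are illustrative.
def entrance_floor(scene_d1, floor_d1, scene_d2, floor_d2, history):
    """history: (Floor, Floor0, FloorDir) tuples observed within the third
    distance, i.e. before the first driving scene was recognized in S703."""
    PARKING = "first_driving_scene"
    if scene_d1 != PARKING or scene_d2 != PARKING:
        return None                      # scene not confirmed; re-run from S701
    floor_dir = floor_d2 - floor_d1      # S706: first floor change direction
    same_dir = [f0 for (_f, f0, d) in history if d * floor_dir > 0]
    if same_dir:                         # S708: an earlier entrance exists
        return same_dir[0]               # its Floor0 is the entrance floor
    return floor_d1                      # S709: first predicted entrance floor
```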
S204: acquiring longitude and latitude information of a vehicle; the latitude and longitude information of the vehicle includes parking latitude and longitude information of the vehicle.
And acquiring the longitude and latitude information of the vehicle in real time through the fusion positioning module, wherein the longitude and latitude information of the vehicle comprises the parking longitude and latitude information of the vehicle when the vehicle parks. Specifically, the longitude and latitude information of the vehicle is obtained from the GPS information acquired in real time.
It should be noted that the GPS information obtained in real time by the fusion positioning module is corrected GPS information. When the GPS signal is weak or lost, the position information during the weak-signal or no-signal period is estimated by dead reckoning so as to correct the GPS position information. Referring to fig. 8, fig. 8 is a schematic diagram of acquiring real-time GPS information of a vehicle according to an embodiment of the present application; the acquisition of the real-time GPS information of the vehicle is implemented by the fusion positioning module.
As shown in fig. 8, the input signals of the fusion positioning module include a GPS signal transmitted by a vehicle-mounted terminal (Telematics BOX, T-BOX); a wheel speed, a yaw rate, and an acceleration transmitted by a speed control system (SCS); and a steering wheel angle transmitted by a steering wheel angle sensor (SAS). The output signal is the positioning information. The GPS signal among the input signals is sent to an auxiliary positioning module in the fusion positioning module; specifically, the position dilution of precision (PDOP) of the GPS signal is compared against a threshold. When the PDOP threshold condition is met, the GPS information is output directly, that is, the output signal is the GPS information satisfying the PDOP threshold condition. When the PDOP threshold condition is not met, initial position information and the positioning information at time K-1 are determined from the GPS information, and the linear acceleration and yaw rate are determined from the wheel speed, yaw rate, acceleration, and steering wheel angle. Dead reckoning is then performed using the initial position information, the positioning information at time K-1, the linear acceleration, and the yaw rate to obtain corrected GPS information, which is output as the positioning information. The initial position information is the position at the last moment the GPS was valid; with the current time denoted K, time K-1 is the moment at which the GPS lost its signal.
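For illustration only, one dead-reckoning step under a simple constant-turn-rate assumption might look as follows; the propagation model is an assumption, since the embodiment only specifies the inputs (last valid fix, linear acceleration, yaw rate):

```python
import math

# Minimal dead-reckoning sketch for the fallback branch described above.
def dead_reckon(x, y, heading, speed, lin_acc, yaw_rate, dt):
    """Propagate the last valid fix (x, y, heading, speed) over one step dt."""
    heading += yaw_rate * dt             # integrate yaw rate into heading
    speed += lin_acc * dt                # integrate linear acceleration
    x += speed * math.cos(heading) * dt  # advance the planar position
    y += speed * math.sin(heading) * dt
    return x, y, heading, speed
```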
S205: and determining track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle.
After the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs, and the latitude and longitude information of the vehicle are obtained, trajectory data before the vehicle stops can be determined. The trajectory data before the vehicle is parked includes the floor on which the actual target parking lot entrance of the vehicle is located, the parking floor of the vehicle, and the latitude and longitude information of the vehicle on the road between the parking lot entrance and the parking floor. Wherein the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle. The latitude and longitude information of the vehicle on the road from the entrance of the parking lot to the parking floor is obtained from the latitude and longitude information of the vehicle recorded in real time.
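As a purely illustrative container for this trajectory data (the field names are assumptions mirroring the three items just listed):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ParkingTrajectory:
    entrance_floor: int   # floor of the actual target parking lot entrance
    parking_floor: int    # obtained from the floor recognition result
    path_lat_lon: List[Tuple[float, float]] = field(default_factory=list)
    # (lat, lon) samples recorded between the entrance and the parking spot
```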
S206: determining parking information of the vehicle; the parking information of the vehicle comprises the parking floor of the vehicle, the parking longitude and latitude information of the vehicle and the track data before the vehicle is parked; the parking floor of the vehicle is obtained from the floor recognition result of the vehicle.
After the parking floor of the vehicle, the parking longitude and latitude information of the vehicle and the trajectory data before the vehicle is parked are obtained, the parking information of the vehicle can be determined. The user can seek the vehicle according to the parking information of the vehicle.
By the vehicle parking position identification method, the driving scene identification result of the vehicle is obtained. The driving scene recognition result comprises a first driving scene and a second driving scene. And acquiring a floor identification result of the vehicle. And determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result. And acquiring longitude and latitude information of the vehicle, wherein the longitude and latitude information of the vehicle comprises the parking longitude and latitude information of the vehicle. And determining track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle. Parking position information of the vehicle is determined. The parking position information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks, wherein the parking floor of the vehicle is obtained according to a floor recognition result of the vehicle. By the method, the parking floor of the vehicle, the parking longitude and latitude information of the vehicle and the track data before the vehicle is parked can be obtained. The parking position of the vehicle in the multi-layer parking lot can be accurately known by using the acquired parking floor of the vehicle parking and the parking longitude and latitude information of the vehicle, so that the accurate parking position positioning is realized. By utilizing the obtained track data before the vehicle stops, the user can accurately and quickly find the vehicle, and the vehicle searching efficiency is improved.
The embodiment of the application also provides a vehicle parking position recognition device. Referring to fig. 9, fig. 9 is a schematic view of a vehicle parking position recognition apparatus according to an embodiment of the present application, the apparatus including:
a first acquisition unit 901 configured to acquire a driving scene recognition result of a vehicle; the driving scene recognition result comprises a first driving scene and a second driving scene;
a second obtaining unit 902, configured to obtain a floor recognition result of the vehicle;
a first determining unit 903, configured to determine, according to the driving scenario recognition result and the floor recognition result, a floor where an actual target parking lot entrance of the vehicle is located;
a third acquiring unit 904 for acquiring latitude and longitude information of the vehicle; the longitude and latitude information of the vehicle comprises parking longitude and latitude information of the vehicle;
a second determining unit 905, configured to determine trajectory data of the vehicle before parking according to a floor where an actual target parking lot entrance of the vehicle is located, a floor recognition result when the vehicle travels, and longitude and latitude information of the vehicle;
a third determination unit 906 for determining parking information of the vehicle; the parking information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks; and the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle.
Optionally, in some implementations of embodiments of the present application, the first obtaining unit 901 includes:
the vehicle speed control device comprises a first acquisition subunit, a second acquisition subunit and a control unit, wherein the first acquisition subunit is used for acquiring steering wheel angle information of a vehicle in a first time period of the vehicle and vehicle speed information in the first time period;
the preprocessing subunit is used for preprocessing the steering wheel angle information of the vehicle in the first time period and the vehicle speed information in the first time period to acquire a sample coordinate point in the first time period;
the second acquiring subunit is configured to acquire a central point of the first driving scene, a central point of the second driving scene, and a central point of the third driving scene according to the sample coordinate point of the first time period;
a first determining subunit configured to determine a sample coordinate point of a second time period from among the sample coordinate points of the first time period; the first time period comprises the second time period;
the first classification subunit is used for classifying the sample coordinate points of the second time period based on the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene to obtain a classification result; the classification result comprises a target sample coordinate point of a first driving scene and a target sample coordinate point of a second driving scene;
the judging subunit is used for judging whether the target sample coordinate point of the first driving scene is larger than the target sample coordinate point of the second driving scene;
the third obtaining subunit is used for obtaining a driving scene recognition result of the vehicle when the target sample coordinate point of the first driving scene is larger than the target sample coordinate point of the second driving scene; the driving scene recognition result of the vehicle is a first driving scene;
the fourth obtaining subunit is used for obtaining a driving scene recognition result of the vehicle when the target sample coordinate point of the first driving scene is not larger than the target sample coordinate point of the second driving scene; and the driving scene recognition result of the vehicle is a second driving scene.
Optionally, in some implementations of embodiments of the present application, the second obtaining subunit includes:
the second determining subunit is used for determining an initial central point of the first driving scene, an initial central point of the second driving scene and an initial central point of the third driving scene;
a second classification subunit configured to classify the sample coordinate points of the first time period into sample coordinate points of a first driving scene, sample coordinate points of a second driving scene, and sample coordinate points of a third driving scene based on an initial central point of the first driving scene, an initial central point of the second driving scene, and an initial central point of the third driving scene;
and the updating subunit is configured to update the initial central point of the first driving scene, the initial central point of the second driving scene and the initial central point of the third driving scene respectively according to the sample coordinate point of the first driving scene, the sample coordinate point of the second driving scene and the sample coordinate point of the third driving scene, and acquire the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene.
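The three subunits above describe what is, in effect, k-means-style clustering with three centers over (steering wheel angle, vehicle speed) sample points. A minimal sketch, assuming Euclidean distance and mean-based center updates (details the embodiment does not fix):

```python
import math

def kmeans3(points, centers, iters=10):
    """points: list of (angle, speed) samples; centers: 3 initial center points."""
    for _ in range(iters):
        clusters = [[], [], []]
        for p in points:  # classification subunit: assign to the nearest center
            i = min(range(3), key=lambda k: math.dist(p, centers[k]))
            clusters[i].append(p)
        for k, c in enumerate(clusters):  # updating subunit: recompute centers
            if c:
                centers[k] = (sum(p[0] for p in c) / len(c),
                              sum(p[1] for p in c) / len(c))
    return centers
```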
Optionally, in some implementations of embodiments of the present application, the first determining unit 903 includes:
the fifth obtaining subunit is configured to obtain, from the floor recognition result, a floor where the vehicle is located when the vehicle travels the first distance; the first distance is greater than a first preset distance;
a sixth acquiring subunit, configured to acquire, from the driving scene recognition result, a driving scene in which the vehicle travels a first distance;
the third determining subunit is configured to, when the driving scene in which the vehicle travels the first distance is the first driving scene, determine that the floor where the vehicle travels the first distance is the floor where the first predicted target parking lot entrance is located;
a seventh acquiring subunit, configured to acquire, from the driving scene recognition result, a driving scene in which the vehicle travels the second distance;
an eighth obtaining subunit, configured to, when a driving scenario in which the vehicle travels the second distance is the first driving scenario, obtain, from the floor recognition result, a floor where the vehicle is located when the vehicle travels the second distance; the second distance is greater than a second preset distance; the second distance is a distance traveled by the vehicle after the vehicle traveled the first distance;
the fourth determining subunit is used for determining the change direction data of the first floor according to the floor where the vehicle runs for the first distance and the floor where the vehicle runs for the second distance;
a ninth acquiring subunit, configured to acquire second floor change direction data; the second floor change direction data is change direction data of a floor where the vehicle is located when the vehicle runs within a third distance;
a fifth determining subunit, configured to determine, when data in the same direction as the first floor change direction data exists in the second floor change direction data, a floor where a second predicted target parking lot entrance is located from the same-direction data, and determine the floor where the second predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located;
and a sixth determining subunit, configured to determine, when there is no data in the second floor change direction data, which is in the same direction as the first floor change direction, that the floor where the first predicted target parking lot entrance is located is the floor where the actual target parking lot entrance is located.
Optionally, in some implementations of embodiments of the present application, the second obtaining unit 902 includes:
a tenth acquiring subunit, configured to acquire a first floor recognition result of the vehicle when the driving scenario recognition result is the first driving scenario;
an eleventh acquiring subunit, configured to acquire a second floor recognition result of the vehicle when the driving scene recognition result is the second driving scene; the second floor recognition result is 1;
a twelfth acquiring subunit, configured to acquire a floor recognition result of the vehicle; the floor recognition result includes the first floor recognition result and the second floor recognition result.
Optionally, in some implementations of embodiments of the present application, the tenth acquiring subunit includes:
the first calculating subunit is configured to calculate, when the driving scene recognition result is the first driving scene, a road gradient corresponding to a fourth distance traveled by the vehicle when the vehicle travels the fourth distance; the fourth distance is greater than the first preset distance;
the first recording subunit is used for recording the height of the vehicle within the fourth distance when the slope corresponding to the fourth distance is greater than a first preset threshold;
a thirteenth acquiring subunit configured to acquire a fifth distance, which is a distance traveled by the vehicle after the vehicle traveled a fourth distance;
a seventh determining subunit operable to determine a road gradient at which the vehicle travels a fifth distance by calculating a vehicle height variation value per unit time;
the second recording subunit is used for recording the vehicle height when the vehicle travels the fifth distance when the road gradient when the vehicle travels the fifth distance is smaller than a second preset threshold value and the fifth distance is greater than the second preset distance;
a second calculation subunit configured to calculate a difference in height between a vehicle height at which the vehicle travels a fourth distance and a vehicle height at which the vehicle travels a fifth distance;
the eighth determining subunit is used for determining the number of newly added floors according to the height difference when the height difference is within the preset range;
a fourteenth obtaining subunit, configured to obtain, according to the floor where the vehicle is located when the vehicle travels the fourth distance and the number of newly added floors, the floor where the vehicle is located when the vehicle travels the fifth distance; and the floor where the vehicle is located when the vehicle runs the fifth distance is the first floor identification result when the vehicle runs.
Optionally, in some implementations of the embodiments of the present application, when the driving scenario recognition results before the vehicle travels the fourth distance are all the second driving scenarios, the floor where the vehicle travels the fourth distance is 1.
Optionally, in some implementations of embodiments of the present application, the apparatus further includes:
and the display unit is used for transmitting the parking information of the vehicle to a data background for storage so as to display the parking information of the vehicle on a user terminal based on the data background.
Through the vehicle parking position recognition device provided by the embodiment of the application, the parking floor of the vehicle parking, the parking longitude and latitude information of the vehicle and the track data before the vehicle parking can be obtained. The parking position of the vehicle in the multi-layer parking lot can be accurately known by using the acquired parking floor of the vehicle parking and the parking longitude and latitude information of the vehicle, so that the accurate parking position positioning is realized. By utilizing the acquired track data before the vehicle stops, the user can accurately and quickly find the vehicle, and the vehicle searching efficiency is improved.
It should be noted that, in the present specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to one another. Since the method disclosed in the embodiments corresponds to the system disclosed in the embodiments, its description is relatively brief, and for relevant details reference may be made to the description of the system.
It should also be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a … …" does not exclude the presence of another identical element in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. A vehicle parking position identification method, characterized by comprising:
acquiring a driving scene recognition result of a vehicle; the driving scene recognition result comprises a first driving scene and a second driving scene; the first driving scene is a parking lot scene, and the second driving scene is a non-parking lot scene;
acquiring a floor recognition result of the vehicle;
determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result;
acquiring longitude and latitude information of the vehicle; the longitude and latitude information of the vehicle comprises parking longitude and latitude information of the vehicle;
determining track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle;
determining parking information of the vehicle; the parking information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks; the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle; and sending the parking information of the vehicle to a user terminal so that a user can search the vehicle through the parking information of the vehicle in the user terminal.
2. The method of claim 1, wherein the obtaining driving scenario recognition results of the vehicle comprises:
acquiring steering wheel angle information of a vehicle in a first time period of the vehicle and vehicle speed information in the first time period;
preprocessing the steering wheel angle information of the vehicle in the first time period and the vehicle speed information in the first time period to obtain a sample coordinate point in the first time period;
acquiring a central point of a first driving scene, a central point of a second driving scene and a central point of a third driving scene according to the sample coordinate point of the first time period;
determining sample coordinate points of a second time period from the sample coordinate points of the first time period; the first time period comprises the second time period;
classifying the sample coordinate points of the second time period based on the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene to obtain a classification result; the classification result comprises a target sample coordinate point of a first driving scene and a target sample coordinate point of a second driving scene;
judging whether the target sample coordinate point of the first driving scene is larger than the target sample coordinate point of the second driving scene;
if so, acquiring a driving scene recognition result of the vehicle; the driving scene recognition result of the vehicle is a first driving scene;
if not, acquiring a driving scene recognition result of the vehicle; and the driving scene recognition result of the vehicle is a second driving scene.
3. The method of claim 2, wherein obtaining the center point of the first driving scenario, the center point of the second driving scenario, and the center point of the third driving scenario from the sample coordinate points of the first time period comprises:
determining an initial center point of a first driving scene, an initial center point of a second driving scene and an initial center point of a third driving scene;
classifying the sample coordinate points of the first time period into sample coordinate points of a first driving scene, sample coordinate points of a second driving scene and sample coordinate points of a third driving scene based on an initial center point of the first driving scene, an initial center point of the second driving scene and an initial center point of the third driving scene;
and respectively updating the initial central point of the first driving scene, the initial central point of the second driving scene and the initial central point of the third driving scene according to the sample coordinate point of the first driving scene, the sample coordinate point of the second driving scene and the sample coordinate point of the third driving scene, and acquiring the central point of the first driving scene, the central point of the second driving scene and the central point of the third driving scene.
4. The method of claim 1, wherein the determining the floor at which the actual target parking lot entrance of the vehicle is located according to the driving scenario recognition result and the floor recognition result comprises:
obtaining the floor where the vehicle is located when the vehicle runs for a first distance from the floor identification result; the first distance is greater than a first preset distance;
acquiring a driving scene when the vehicle runs a first distance from the driving scene recognition result;
when the driving scene when the vehicle runs for the first distance is a first driving scene, determining that the floor where the vehicle runs for the first distance is the floor where the first prediction target parking lot entrance is located;
acquiring a driving scene when the vehicle runs a second distance from the driving scene recognition result;
when the driving scene when the vehicle runs for the second distance is the first driving scene, obtaining the floor where the vehicle is located when it runs for the second distance from the floor recognition result; the second distance is greater than a second preset distance; the second distance is a distance traveled by the vehicle after the vehicle traveled the first distance;
determining first floor change direction data according to the floor where the vehicle is located when the vehicle runs for the first distance and the floor where the vehicle is located when the vehicle runs for the second distance;
acquiring change direction data of a second floor; the second floor change direction data is change direction data of a floor where the vehicle is located when the vehicle runs within a third distance;
when data in the same direction as the first floor change direction data exists in the second floor change direction data, determining the floor where a second predicted target parking lot entrance is located from the same-direction data, and determining the floor where the second predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located;
and when the data in the same direction as the change direction of the first floor does not exist in the second floor change direction data, determining the floor where the first predicted target parking lot entrance is located as the floor where the actual target parking lot entrance is located.
5. The method of claim 1, wherein the obtaining a floor identification of the vehicle comprises:
when the driving scene recognition result is a first driving scene, obtaining a first floor recognition result of the vehicle;
when the driving scene recognition result is a second driving scene, acquiring a second floor recognition result of the vehicle; the second floor recognition result is 1;
acquiring a floor recognition result of a vehicle; the floor recognition result includes the first floor recognition result and the second floor recognition result.
6. The method of claim 5, wherein obtaining a first floor identification of the vehicle when the driving scenario identification is a first driving scenario comprises:
when the driving scene recognition result is a first driving scene, calculating a road gradient corresponding to a fourth distance when the vehicle runs for the fourth distance; the fourth distance is greater than the first preset distance;
when the slope corresponding to the fourth distance is larger than a first preset threshold value, recording the height of the vehicle within the fourth distance of the vehicle;
acquiring a fifth distance, wherein the fifth distance is the distance traveled by the vehicle after the vehicle travels a fourth distance;
determining a road gradient at which the vehicle travels a fifth distance by calculating a vehicle height change value per unit time;
when the road gradient of the vehicle running for a fifth distance is smaller than a second preset threshold value and the fifth distance is larger than the second preset distance, recording the height of the vehicle running for the fifth distance;
calculating a height difference between a vehicle height when the vehicle travels a fourth distance and a vehicle height when the vehicle travels a fifth distance;
when the height difference is within a preset range, determining the number of newly added floors according to the height difference;
according to the floor where the vehicle runs for the fourth distance and the number of the newly added floors, the floor where the vehicle runs for the fifth distance is obtained; and the floor where the vehicle is located when the vehicle runs the fifth distance is the first floor identification result when the vehicle runs.
7. The method according to claim 6, wherein when the driving scene recognition results before the vehicle travels the fourth distance are all the second driving scenes, the floor where the vehicle travels the fourth distance is 1.
8. The method of claim 1, further comprising:
and transmitting the parking information of the vehicle to a data background for storage, and displaying the parking information of the vehicle on a user terminal based on the data background.
9. A vehicle parking position recognition apparatus, characterized by comprising:
a first acquisition unit configured to acquire a driving scene recognition result of a vehicle; the driving scene recognition result comprises a first driving scene and a second driving scene; the first driving scene is a parking lot scene, and the second driving scene is a non-parking lot scene;
the second acquisition unit is used for acquiring a floor identification result of the vehicle;
the first determining unit is used for determining the floor where the actual target parking lot entrance of the vehicle is located according to the driving scene recognition result and the floor recognition result;
a third acquisition unit configured to acquire longitude and latitude information of the vehicle; the longitude and latitude information of the vehicle comprises parking longitude and latitude information of the vehicle;
the second determining unit is used for determining the track data before the vehicle stops according to the floor where the actual target parking lot entrance of the vehicle is located, the floor recognition result when the vehicle runs and the longitude and latitude information of the vehicle;
a third determination unit for determining parking information of the vehicle; the parking information of the vehicle comprises a parking floor of the vehicle, parking longitude and latitude information of the vehicle and track data before the vehicle parks; the parking floor of the vehicle is obtained according to the floor recognition result of the vehicle; and sending the parking information of the vehicle to a user terminal so that a user can search the vehicle through the parking information of the vehicle in the user terminal.
CN202011478089.4A 2020-12-15 2020-12-15 Vehicle parking position identification method and device Active CN114639263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011478089.4A CN114639263B (en) 2020-12-15 2020-12-15 Vehicle parking position identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011478089.4A CN114639263B (en) 2020-12-15 2020-12-15 Vehicle parking position identification method and device

Publications (2)

Publication Number Publication Date
CN114639263A CN114639263A (en) 2022-06-17
CN114639263B true CN114639263B (en) 2023-02-24

Family

ID=81945480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011478089.4A Active CN114639263B (en) 2020-12-15 2020-12-15 Vehicle parking position identification method and device

Country Status (1)

Country Link
CN (1) CN114639263B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117334038A (en) * 2022-06-24 2024-01-02 华为技术有限公司 Parking floor determining method, electronic equipment, server and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007241472A (en) * 2006-03-06 2007-09-20 Toyota Motor Corp Car finder, car finder operation terminal, car finder control system and car finder control method
CN101937619A (en) * 2010-09-28 2011-01-05 无锡市天业智能科技有限公司 Parking navigation and finding system based on object networking wireless sensing and video perception
CN105719506A (en) * 2014-12-04 2016-06-29 深圳Tcl数字技术有限公司 Vehicle-searching guide method and system based on network interconnection
CN106067259A (en) * 2016-06-30 2016-11-02 上海斐讯数据通信技术有限公司 Car system and method is sought in location, a kind of parking lot
CN106846876A (en) * 2017-03-14 2017-06-13 广东数相智能科技有限公司 A kind of car searching method based on video identification and Quick Response Code, system and device
CN108492549A (en) * 2018-04-18 2018-09-04 北京山和朋友们科技有限公司 A kind of vehicle parking location recognition method and vehicle parking position-recognizing system
CN109121084A (en) * 2018-10-08 2019-01-01 广州小鹏汽车科技有限公司 Vehicle, mobile terminal and its car searching method, device and system
US10636305B1 (en) * 2018-11-16 2020-04-28 Toyota Motor North America, Inc. Systems and methods for determining parking availability on floors of multi-story units
CN111192470A (en) * 2020-01-03 2020-05-22 深圳市星砺达科技有限公司 Parking lot parking layer positioning method and device, computer equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017128078A1 (en) * 2016-01-26 2017-08-03 刘建兵 Data acquisition method for vehicle finding technology using mobile phone, and vehicle finding system
CN107862899A (en) * 2017-11-16 2018-03-30 深圳市小猫信息技术有限公司 Reverse car seeking method, device, computer installation and computer-readable recording medium
CN110610250B (en) * 2018-06-15 2023-07-07 阿里巴巴集团控股有限公司 Method, device and equipment for recommending and prompting idle parking spaces and vehicle searching route
CN109377779A (en) * 2018-09-27 2019-02-22 盯盯拍(深圳)云技术有限公司 Parking lot car searching method and parking lot car searching device

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007241472A (en) * 2006-03-06 2007-09-20 Toyota Motor Corp Car finder, car finder operation terminal, car finder control system and car finder control method
CN101937619A (en) * 2010-09-28 2011-01-05 无锡市天业智能科技有限公司 Parking navigation and finding system based on object networking wireless sensing and video perception
CN105719506A (en) * 2014-12-04 2016-06-29 深圳Tcl数字技术有限公司 Vehicle-searching guide method and system based on network interconnection
CN106067259A (en) * 2016-06-30 2016-11-02 上海斐讯数据通信技术有限公司 Car system and method is sought in location, a kind of parking lot
CN106846876A (en) * 2017-03-14 2017-06-13 广东数相智能科技有限公司 A kind of car searching method based on video identification and Quick Response Code, system and device
CN108492549A (en) * 2018-04-18 2018-09-04 北京山和朋友们科技有限公司 A kind of vehicle parking location recognition method and vehicle parking position-recognizing system
CN109121084A (en) * 2018-10-08 2019-01-01 广州小鹏汽车科技有限公司 Vehicle, mobile terminal and its car searching method, device and system
US10636305B1 (en) * 2018-11-16 2020-04-28 Toyota Motor North America, Inc. Systems and methods for determining parking availability on floors of multi-story units
CN111192470A (en) * 2020-01-03 2020-05-22 深圳市星砺达科技有限公司 Parking lot parking layer positioning method and device, computer equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhao Yizhou. Research on parking guidance and vehicle-finding systems for large and medium-sized parking lots. China Master's Theses Full-text Database, Engineering Science and Technology II. 2020, 1-87. *

Also Published As

Publication number Publication date
CN114639263A (en) 2022-06-17

Similar Documents

Publication Publication Date Title
US10657804B2 (en) Updating maps and road status
US10300916B2 (en) Autonomous driving assistance system, autonomous driving assistance method, and computer program
US7805240B2 (en) Driving behavior prediction method and apparatus
CN102208035B (en) Image processing system and position measuring system
CN102208011B (en) Image processing system and vehicle control system
JP2020053094A (en) Method and device for determining lane identifier on road
US7584050B2 (en) Automobile navigation system
CN102208013A (en) Scene matching reference data generation system and position measurement system
CN102208012A (en) Scene matching reference data generation system and position measurement system
CN114375467A (en) Detection of emergency vehicles
CN102222236A (en) Image processing system and position measurement system
CN107850456A (en) Path guiding device and path guide method
EP3823321A1 (en) Method, apparatus, and system for detecting joint motion
CN103847733A (en) Technique and apparatus for assisting driving a vehicle
CN108466621A (en) effective rolling radius
CN110849382A (en) Driving duration prediction method and device
CN104316069A (en) Vehicle-mounted navigation device and navigation method for recognizing main road and auxiliary road
US8234065B2 (en) Vehicle navigation apparatus and method
US11238735B2 (en) Parking lot information management system, parking lot guidance system, parking lot information management program, and parking lot guidance program
CN113743469A (en) Automatic driving decision-making method fusing multi-source data and comprehensive multi-dimensional indexes
CN114639263B (en) Vehicle parking position identification method and device
CN110622228A (en) Method, device and computer-readable storage medium with instructions for determining a traffic regulation applicable to a motor vehicle
CN112525207B (en) Unmanned vehicle positioning method based on vehicle pitch angle map matching
CN114792149A (en) Track prediction method and device and map
JP2011232271A (en) Navigation device, accuracy estimation method for on-vehicle sensor, and program

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant