CN114199262A - Method for training position recognition model, position recognition method and related equipment - Google Patents

Method for training position recognition model, position recognition method and related equipment

Info

Publication number
CN114199262A
Authority
CN
China
Prior art keywords
overhead
elevated
sample data
satellite
distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010884127.XA
Other languages
Chinese (zh)
Inventor
张小兵
徐海良
杨减
孙佳
李磊云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010884127.XA
Publication of CN114199262A
Legal status: Pending


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/005 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 with correlation of navigation data from several sources, e.g. map or contour matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G01C21/30 Map- or contour-matching
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S19/00 Satellite radio beacon positioning systems; Determining position, velocity or attitude using signals transmitted by such systems
    • G01S19/38 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system
    • G01S19/39 Determining a navigation solution using signals transmitted by a satellite radio beacon positioning system the satellite radio beacon positioning system transmitting time-stamped messages, e.g. GPS [Global Positioning System], GLONASS [Global Orbiting Navigation Satellite System] or GALILEO
    • G01S19/42 Determining position
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Automation & Control Theory (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Navigation (AREA)

Abstract

The embodiment of the invention provides a method for training a position recognition model, a position recognition method, and related equipment. The method for training the position recognition model comprises: acquiring vehicle position data from an elevated area and the area near the elevated road as sample data, the sample data being marked as located on the elevated road or under the elevated road; determining characteristics of the sample data; and training a machine learning model with the characteristics of the sample data and the marks of the sample data as training data, to obtain a position recognition model suited to the elevated-road scene. Based on the trained position recognition model, the embodiment of the invention provides a basis for accurately recognizing whether a vehicle position is on or under an elevated road.

Description

Method for training position recognition model, position recognition method and related equipment
Technical Field
The embodiment of the invention relates to the technical field of positioning, and in particular to a method for training a position recognition model, a position recognition method, and related equipment.
Background
Elevated roads are one of the basic road types, and providing navigation services for vehicles in elevated-road scenes is very common. To provide such services accurately, it is necessary to identify whether the vehicle is located on the elevated road itself (referred to as on the elevated road) or on the road laid along the ground beneath it (referred to as under the elevated road).
If it is not accurately recognized whether the vehicle is on or under the elevated road, a user on or under the elevated road who requests navigation route planning is very likely to receive a wrong planning result, because the roads connected to the elevated road sometimes differ from the roads connected to the ground-level road. Accurately identifying whether a user is on or under an elevated road is therefore a problem that the field of positioning technology needs to solve.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a method for training a position recognition model, a position recognition method, a navigation method, and related devices. The training method can produce a position recognition model suited to the elevated-road scene, providing a basis for accurately recognizing whether a vehicle position is on or under an elevated road, and the position recognition method can use this model to make that recognition accurately.
In order to achieve the above purpose, the embodiments of the present invention provide the following technical solutions:
a method of training a position recognition model, comprising:
acquiring vehicle position data from an elevated area and the area near the elevated road as sample data, the sample data being marked as located on the elevated road or under the elevated road;
determining characteristics of the sample data; and
training a machine learning model with the characteristics of the sample data and the marks of the sample data as training data, to obtain a position recognition model suited to the elevated-road scene.
The embodiment of the invention further provides a position recognition method for recognizing, based on a position recognition model trained by the above method, whether a vehicle position is on or under an elevated road, the method comprising:
acquiring a vehicle position, and determining the characteristics corresponding to the vehicle position;
inputting the characteristics corresponding to the vehicle position into the position recognition model in the model data, to obtain a characteristic vector of the vehicle position output by the position recognition model; and
obtaining, from the characteristic vector of the vehicle position, a recognition result of whether the vehicle position is on or under the elevated road.
The embodiment of the invention further provides an apparatus for training a position recognition model, comprising:
a sample data acquisition module, configured to acquire vehicle position data from an elevated area and the area near the elevated road as sample data, the sample data being marked as located on the elevated road or under the elevated road;
a characteristic determination module, configured to determine characteristics of the sample data; and
a training execution module, configured to train a machine learning model with the characteristics of the sample data and the marks of the sample data as training data, to obtain a position recognition model suited to the elevated-road scene.
An embodiment of the present invention further provides a training server, comprising: at least one memory storing one or more computer-executable instructions, and at least one processor that invokes the one or more computer-executable instructions to perform the method of training a position recognition model described above.
An embodiment of the present invention further provides a position recognition apparatus, comprising:
a vehicle position characteristic determination module, configured to acquire a vehicle position and determine the characteristics corresponding to the vehicle position;
a characteristic vector determination module, configured to input the characteristics corresponding to the vehicle position into the position recognition model in the model data, to obtain the characteristic vector of the vehicle position output by the position recognition model; and
a recognition result obtaining module, configured to obtain, from the characteristic vector of the vehicle position, a recognition result of whether the vehicle position is on or under the elevated road.
An embodiment of the present invention further provides a vehicle-mounted terminal, comprising: at least one memory storing one or more computer-executable instructions, and at least one processor that invokes the one or more computer-executable instructions to perform the position recognition method described above.
Embodiments of the present invention further provide a storage medium storing one or more computer-executable instructions, the one or more computer-executable instructions being configured to perform the method of training a position recognition model described above, or the position recognition method described above.
In the method for training a position recognition model provided by the embodiment of the invention, vehicle position data from elevated areas and the areas near them are used as sample data, and each sample is marked as located on or under the elevated road. The characteristics of the sample data are then determined, and a machine learning model is trained with those characteristics and marks as training data to obtain a position recognition model suited to the elevated-road scene. Because the model is trained on the characteristics of vehicle position data from these areas, with the on/under marks serving as the training guide, the resulting model can output, from the characteristics of a piece of vehicle position data, the position recognition result corresponding to its mark; that is, the model acquires the function of recognizing whether vehicle position data is on or under an elevated road. The training method can therefore produce a position recognition model suited to the elevated-road scene, one that accurately recognizes, from the characteristics of vehicle position data, whether that data is on or under the elevated road, providing a basis for accurately recognizing whether the vehicle position is on or under the elevated road.
Drawings
To illustrate the embodiments of the present application, or the technical solutions in the prior art, more clearly, the drawings needed in their description are briefly introduced below. The drawings described below show only embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a block diagram of a system provided by an embodiment of the present invention;
FIG. 2 is a schematic diagram of a training phase provided by an embodiment of the present invention;
FIG. 3 is a flowchart of a method for training a position recognition model according to an embodiment of the present invention;
FIG. 4 is a flow chart of mining an area according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating a merge area according to an embodiment of the present invention;
FIG. 6 is an exemplary diagram of an elevated area and an area near the elevated area provided by an embodiment of the present invention;
FIG. 7 is a flow chart of determining a marker for sample data according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating an example of historical navigation routes and map-matched routes provided by an embodiment of the present invention;
FIG. 9 is a flow chart for characterizing sample data according to an embodiment of the present invention;
FIG. 10 is a flow chart of determining satellite distribution characteristics according to an embodiment of the present invention;
FIG. 11 is an exemplary illustration of a default plane provided by an embodiment of the present invention;
FIG. 12 is a flow chart of determining a historical travel speed characteristic provided by an embodiment of the present invention;
FIG. 13 is a diagram illustrating an example of the division of a unit group according to an embodiment of the present invention;
FIG. 14 is a flow chart of training execution provided by an embodiment of the present invention;
FIG. 15 is a diagram illustrating an exemplary structure of a machine learning model according to an embodiment of the present invention;
fig. 16 is a flowchart of a location identification method according to an embodiment of the present invention;
FIG. 17 is a flowchart of calculating on-elevated and under-elevated confidence thresholds according to an embodiment of the present invention;
FIG. 18 is a flowchart of navigation interaction in a starting-point road-matching scenario according to an embodiment of the present invention;
FIG. 19 is a flow chart illustrating navigation interaction in a yaw recognition scenario, according to an embodiment of the present invention;
FIG. 20 is a block diagram of an apparatus for training a position recognition model according to an embodiment of the present invention;
FIG. 21 is another block diagram of an apparatus for training a position recognition model according to an embodiment of the present invention;
FIG. 22 is a block diagram of a training server provided by an embodiment of the present invention;
fig. 23 is a block diagram of a position recognition apparatus according to an embodiment of the present invention;
fig. 24 is another block diagram of a position recognition device provided in the embodiment of the present invention;
fig. 25 is a further block diagram of a position identification device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings. The described embodiments are only a part of the embodiments of the present invention, not all of them; all other embodiments derived by those skilled in the art without creative effort shall fall within the protection scope of the present invention.
To accurately identify whether a vehicle position is on or under an elevated road, the embodiment of the invention trains a position recognition model capable of making this distinction, so that during driving the vehicle position can be recognized as on or under the elevated road based on the trained model; further, the embodiment of the invention can obtain a navigation route for the vehicle based at least on the recognition result.
In an alternative implementation, as shown in fig. 1, the embodiment of the present invention may complete the above process through interaction among the training server 10, the in-vehicle terminal 20, and the navigation server 30.
The training server 10 may be a server device configured, in the embodiment of the present invention, to train the position recognition model. The position recognition model may be obtained by training a machine learning model, for example an RNN (recurrent neural network), although the embodiment of the invention is not limited to this form; in an alternative implementation, the training server may be deployed on the network side.
The in-vehicle terminal 20 may be a terminal device disposed on a vehicle. In one optional implementation, it is a vehicle-mounted navigation device (such as an in-car smart navigation unit), which may be factory-fitted or retrofitted; in another, it is a terminal device independent of but interconnected with the vehicle, for example the user's smartphone or tablet computer. In the embodiment of the invention, the in-vehicle terminal obtains model data provided by the training server, which may include the position recognition model trained by the training server, so that the terminal can recognize whether the vehicle position is on or under an elevated road based on that model.
The navigation server 30 may be a server device that provides navigation services, and it may be deployed on the network side. When the in-vehicle terminal 20 obtains a recognition result of whether the vehicle position is on or under an elevated road, the navigation server 30 may acquire that result and thereby provide a navigation route for the vehicle based at least on it.
As the above description shows, the trained position recognition model is the basis on which the embodiment of the present invention accurately identifies whether a vehicle position is on or under an elevated road. In an alternative implementation, as shown in fig. 2, the training server obtains the position recognition model in four stages: a region mining stage, a sample marking stage, a characteristic determination stage, and a training execution stage.
The region mining stage can be regarded as the preprocessing stage of training. It mines elevated areas and the areas near them from the map; these can be regarded as the map areas related to elevated roads, and the vehicle position data falling within the mined areas serves as the sample data used for training.
The sample marking stage marks the sample data as located on or under the elevated road.
The characteristic determination stage determines the characteristics of the sample data.
The training execution stage trains a machine learning model based on the characteristics of the sample data and the marks of the sample data, to obtain a position recognition model suited to the elevated-road scene, i.e., a model that can recognize whether a vehicle position is on or under an elevated road.
Optionally, on the basis that the training server has mined the elevated areas and nearby areas in advance, fig. 3 illustrates an optional flow of the method for training a position recognition model according to an embodiment of the present invention. The flow may be executed by the training server and, referring to fig. 3, may include:
Step S100: obtain vehicle position data from an elevated area and the area near the elevated road as sample data, the sample data being marked as located on or under the elevated road.
Optionally, the vehicle position data may be historical vehicle position data. During a vehicle's historical trips, the in-vehicle terminal can locate the vehicle position at each historical driving time point through satellite positioning; the positions collected over one trip form one historical driving track of the vehicle. The in-vehicle terminal can upload these positions to the network side, so the training server can collect the vehicle position data at each historical driving time point. It will be appreciated that if many vehicles make many historical trips, the training server can collect a large amount of historical vehicle position data.
Given that the training server has determined the elevated areas and nearby areas from the map in advance, the embodiment of the invention can match the historical vehicle position data against those areas: any historical vehicle position located in an elevated area, or in the area near one, is taken as one sample. The elevated area can be regarded as the area where an elevated road lies, and the nearby area as the area where non-elevated roads near that elevated road lie; generally, the elevated road and the nearby non-elevated roads communicate with each other.
It is understood that more than one vehicle position may fall in one elevated area or nearby area, so more than one sample may be obtained.
The embodiment of the invention sets a mark for each sample to indicate whether it is on or under the elevated road. Optionally, the portion of a road whose height is greater than or equal to half the sum of the heights of the road's starting point and ending point may be regarded as on the elevated road; conversely, the portion whose height is less than that value may be regarded as under the elevated road. The mark may be set by automatic processing, or by other means such as manual marking.
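As a minimal sketch of the height rule just described, the check reduces to a single comparison; the function name and inputs below are illustrative, not part of the patent:

```python
def is_on_elevated(point_height: float, start_height: float, end_height: float) -> bool:
    """Height rule from the text: a road portion whose height is at least half
    the sum of the road's starting-point and ending-point heights counts as
    'on the elevated road'; anything lower counts as 'under'."""
    return point_height >= (start_height + end_height) / 2.0
```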
Step S110: determine the characteristics of the sample data.
After the sample data used for training is obtained, and since training the machine learning model requires the features of the samples, this step determines the characteristics of the sample data. In an optional implementation, the characteristics of a sample at least include the elevated road segment corresponding to it: the elevated road in an elevated area may be cut in advance into two or more elevated road segments, and the embodiment of the invention determines the segment corresponding to each sample and takes it, at least, as a characteristic of the sample.
Optionally, the elevated road in the elevated area may be cut into a plurality of elevated road segments, and for each obtained sample the embodiment of the invention determines the segment corresponding to it. In an optional specific implementation, the road that the sample matches is first determined. If the matched road is the elevated road of an elevated area (whether the on-elevated or the under-elevated carriageway), the segment of that elevated road in which the sample lies is taken as the elevated road segment corresponding to the sample.
If the matched road is a non-elevated road in the area near the elevated road, the sample is not on the elevated road, and the corresponding elevated road segment is determined by projecting the sample onto the elevated road(s) mapped from that non-elevated road. Optionally, mapping data may be maintained for elevated roads: one entry stores a non-elevated road and the co-directional elevated road(s) it maps to, and one non-elevated road may map to several elevated roads. When the matched road is determined to be non-elevated, the mapped elevated road(s) are looked up from this mapping data and the sample is projected onto them, yielding at least one projection point (one or more, since a non-elevated road may map to one or more elevated roads). From these, the nearest projection point, i.e., the one with the smallest projection distance from the sample, is selected, and the elevated road segment in which it lies is taken as the segment corresponding to the sample.
Thus, whether or not the sample lies on the elevated road itself, as long as it lies in the elevated area or the area nearby, the embodiment of the invention can determine the elevated road segment corresponding to it and take that segment, at least, as a characteristic of the sample.
Optionally, an elevated road segment may be represented by its identifier, so determining the segment corresponding to a sample amounts to determining that identifier. In a further optional implementation, the characteristics of the sample may additionally include: the projection distance from the sample to the elevated road of the elevated area (if the matched road is non-elevated, this is the projection distance of the nearest projection point), the satellite distribution characteristics corresponding to the sample, the historical driving speed characteristics corresponding to the sample, and the like.
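For the non-elevated case, the nearest-projection selection described above can be sketched as follows; representing roads as 2-D polylines, and the names used here, are illustrative assumptions:

```python
import math

def nearest_projection(sample_xy, mapped_elevated_roads):
    """Project a sample point onto every co-directional elevated road mapped
    from the matched non-elevated road and keep the closest projection.
    `mapped_elevated_roads` is assumed to be a list of polylines, each a list
    of (x, y) vertices in a local metric frame."""
    best = None  # (distance, projected point, road index)
    for idx, polyline in enumerate(mapped_elevated_roads):
        for (ax, ay), (bx, by) in zip(polyline, polyline[1:]):
            # project the sample onto segment AB, clamped to the segment
            abx, aby = bx - ax, by - ay
            ab2 = abx * abx + aby * aby or 1e-12
            t = ((sample_xy[0] - ax) * abx + (sample_xy[1] - ay) * aby) / ab2
            t = max(0.0, min(1.0, t))
            px, py = ax + t * abx, ay + t * aby
            d = math.hypot(sample_xy[0] - px, sample_xy[1] - py)
            if best is None or d < best[0]:
                best = (d, (px, py), idx)
    return best  # None when no mapped elevated road is given
```

The returned distance doubles as the "projection distance of the nearest projection point" listed among the sample characteristics above.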
Step S120: train a machine learning model with the characteristics of the sample data and the marks of the sample data as training data, to obtain a position recognition model suited to the elevated-road scene.
After the characteristics of the sample data are determined and the marks are set, the embodiment of the invention inputs the characteristics into the machine learning model to obtain the characteristic vectors it outputs, and iteratively adjusts the model's parameters under the guidance of the marks, so that the position recognition result corresponding to the output characteristic vector of each sample (i.e., the recognition of the sample as on or under the elevated road) matches the sample's mark; the result is a position recognition model suited to the elevated-road scene.
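The patent fixes no architecture beyond mentioning RNNs as one example form, so the model, feature sizes, optimizer, and class encoding in this training-step sketch are all assumptions:

```python
import torch
import torch.nn as nn

class PositionRecognizer(nn.Module):
    """Illustrative RNN classifier: feature sequence in, on/under scores out."""
    def __init__(self, feature_dim=8, hidden_dim=32):
        super().__init__()
        self.rnn = nn.GRU(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)  # classes: under-elevated, on-elevated

    def forward(self, x):              # x: (batch, seq_len, feature_dim)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])   # last-step feature vector -> class scores

model = PositionRecognizer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(features, marks):
    """features: (batch, seq_len, feature_dim); marks: 0 = under, 1 = on."""
    optimizer.zero_grad()
    loss = loss_fn(model(features), marks)
    loss.backward()    # iteratively adjust parameters under mark guidance
    optimizer.step()
    return loss.item()
```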
As stated above, by using vehicle position data from elevated areas and nearby areas as samples, and training the machine learning model on their characteristics with the on/under marks as the training guide, the method yields a position recognition model that can recognize, from the characteristics of vehicle position data, whether that data is on or under an elevated road, providing a basis for accurately recognizing whether the vehicle position is on or under the elevated road.
To describe the position recognition model training scheme provided by the embodiment of the present invention in more detail, the execution of each training stage shown in fig. 2 is described below.
In an alternative implementation, fig. 4 shows an optional flow, provided by an embodiment of the present invention, for mining an elevated area and the area near it. The flow shown in fig. 4 can be considered the flow executed by the training server in the region mining stage and may include:
step S200, cutting the elevated road and the ordinary road around the elevated road into a plurality of road segments, wherein the road segments comprise elevated road segments and non-elevated road segments.
The embodiment of the invention can cut the elevated road and the ordinary road around the elevated road according to the preset first length, so as to cut the elevated road and the ordinary road around the elevated road into a plurality of road segments, wherein the length of one road segment can be the first length, and the specific number of the first lengths can be set according to the actual situation, for example, the first length is 5 meters. In an example, the embodiment of the invention may cut an elevated road in a national or urban road and a common road around the elevated road according to a preset first length to obtain a plurality of road segments; in a further example, the embodiment of the present invention may filter roads without elevated roads around the roads from national roads or urban roads, so as to cut the filtered roads according to a preset first length, thereby obtaining a plurality of road segments.
The road segments may include elevated road segments and non-elevated road segments, the elevated road segments may be road segments corresponding to elevated roads in the cut road segments, the non-elevated road segments may be road segments corresponding to common roads in the cut road segments, and the number of the elevated road segments and the non-elevated road segments in the road segments may be multiple.
Step S210, for any elevated road segment, a road segment group is formed by the elevated road segment and non-elevated road segments associated around the elevated road segment.
For any one of the elevated road segments, the embodiment of the present invention may determine a non-elevated road segment associated around the elevated road segment, and in an optional implementation, for any one of the elevated road segments, the embodiment of the present invention may determine a non-elevated road segment that is matched with the traveling direction of the elevated road segment and has a distance within a preset distance, as a non-elevated road segment associated around the elevated road segment;
in a more specific implementation, the proceeding direction matching of the road segments may be regarded as that the angle difference of the traveling direction of the road is within a preset angle range, and the preset angle range may be set according to an actual situation, for example, an angle range of 0 to 45 degrees; the distance between the road segments may be a distance between center points of the road segments, or may be set as a distance between start points or a distance between end points of the road segments, and may be specifically set according to an actual situation; in an example, for any one of the elevated road segments, as the non-elevated road segment associated with the periphery of the elevated road segment, a non-elevated road segment whose angle difference with the traveling direction of the elevated road segment is in an angle range of 0 to 45 degrees and whose distance is within a preset distance, the preset distance is, for example, 30 meters, and the specific value may be set according to actual conditions, and the embodiment of the present invention is not limited.
For any elevated road segment, the embodiment of the present invention forms a road segment group corresponding to the elevated road segment by using the elevated road segment and non-elevated road segments associated with the elevated road segment, that is, one road segment group includes one elevated road segment and non-elevated road segments associated with the elevated road segment; therefore, all elevated road segments are processed in the same way, and the embodiment of the invention can form the road segment groups corresponding to all elevated road segments to obtain a plurality of road segment groups.
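A minimal sketch of this grouping rule, using the 45-degree and 30-meter thresholds from the examples above; the segment representation (center point plus heading) is an assumption:

```python
import math

def heading_diff(a_deg: float, b_deg: float) -> float:
    """Smallest absolute difference between two headings, in degrees."""
    d = abs(a_deg - b_deg) % 360
    return min(d, 360 - d)

def build_segment_groups(elevated_segs, normal_segs,
                         max_angle=45.0, max_dist=30.0):
    """Each segment is assumed to be a dict with 'center' (x, y) and 'heading'
    in degrees; thresholds follow the text's examples and are configurable."""
    groups = []
    for e in elevated_segs:
        members = [
            n for n in normal_segs
            if heading_diff(e["heading"], n["heading"]) <= max_angle
            and math.dist(e["center"], n["center"]) <= max_dist
        ]
        groups.append({"elevated": e, "associated": members})
    return groups
```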
Step S220: determine the map base area corresponding to each road segment group.
In an optional implementation, for each road segment group the embodiment of the invention determines the group's minimum bounding rectangle, and derives the map base area corresponding to the group from that rectangle, thereby obtaining a plurality of map base areas.
Optionally, the minimum bounding rectangle of a road segment group may be used directly as its map base area. In another optional implementation, after the minimum bounding rectangle of the group is determined, its width and height are each increased by a preset length to form the map base area corresponding to the group; the preset length can be set according to the actual situation, for example 60 meters, and the embodiment of the invention is not limited in this regard.
It will be appreciated that one road segment group corresponds to one map base area, so a plurality of map base areas can be determined for the plurality of road segment groups.
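A sketch of deriving a map base area from a group's bounding rectangle. The text does not say how the preset length is distributed, so splitting it evenly across the two sides of each dimension, and the data layout, are assumptions:

```python
def map_base_area(segments, pad=60.0):
    """Axis-aligned minimum bounding rectangle over all segment points, with
    width and height each grown by `pad` (60 m in the text's example).
    `segments` is assumed to be an iterable of dicts with a 'points' list."""
    xs = [x for seg in segments for (x, _) in seg["points"]]
    ys = [y for seg in segments for (_, y) in seg["points"]]
    half = pad / 2.0  # assumption: half the growth on each side
    return (min(xs) - half, min(ys) - half, max(xs) + half, max(ys) + half)
```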
Step S230: iteratively perform region merging based on the map base areas until the area utilization of the map base areas within the merged regions is maximized, and take the merged regions as the elevated areas and the areas near them.
After the plurality of map base areas is obtained, and in order to improve the efficiency of subsequently training the position recognition model, the embodiment of the invention merges regions iteratively on the basis of the map base areas so as to maximize their area utilization within the merged regions. Specifically, during iterative merging, increases in the area not covered by map base areas must be avoided, so that when merging ends the utilization is maximal. The area utilization of the map base areas within a merged region is: the ratio of the area of the map base areas inside the merged region to the area of the merged region.
In one example, as shown in fig. 5, among the map base areas A, B and C: if A and B are merged, the merged region's area is essentially the sum of the areas of A and B, so the utilization of the map base areas within it is high; if instead A and C are merged, a large part of the merged region belongs to no map base area, and the utilization is low. Merging A and B therefore improves utilization compared with merging A and C. Accordingly, while merging map base areas, the embodiment of the invention avoids merges that enlarge the non-base area and prefers merges that raise utilization, so that the utilization of the map base areas within the merged regions is maximized.
After the merging process ends, each merged region can be regarded as one elevated area together with its nearby area in the embodiment of the present invention; the block shown in fig. 6 can be regarded as an example of such an area.
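The iterative merging can be sketched greedily as below. The stopping threshold, the greedy pair selection, and ignoring overlaps between base rectangles are all assumptions; the text only requires that utilization be maximized and non-base area growth avoided:

```python
def utilization(merged_rect, base_rects):
    """Ratio of map-base coverage to the merged rectangle's area (overlapping
    base rectangles are ignored for simplicity)."""
    mx0, my0, mx1, my1 = merged_rect
    merged_area = (mx1 - mx0) * (my1 - my0)
    covered = sum((x1 - x0) * (y1 - y0) for x0, y0, x1, y1 in base_rects)
    return covered / merged_area if merged_area else 0.0

def bounding_union(r1, r2):
    return (min(r1[0], r2[0]), min(r1[1], r2[1]),
            max(r1[2], r2[2]), max(r1[3], r2[3]))

def greedy_merge(rects, min_util=0.8):
    """Repeatedly merge the pair whose union keeps utilization highest; stop
    when no candidate merge stays above the (assumed) threshold."""
    merged = [(r, [r]) for r in rects]  # (current rect, base rects inside it)
    while True:
        best = None
        for i in range(len(merged)):
            for j in range(i + 1, len(merged)):
                union = bounding_union(merged[i][0], merged[j][0])
                u = utilization(union, merged[i][1] + merged[j][1])
                if u >= min_util and (best is None or u > best[0]):
                    best = (u, i, j, union)
        if best is None:
            return [r for r, _ in merged]
        _, i, j, union = best
        bases = merged[i][1] + merged[j][1]
        merged = [m for k, m in enumerate(merged) if k not in (i, j)]
        merged.append((union, bases))
```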
By taking maximal area utilization of the map base areas as the constraint and merging regions iteratively on the basis of the map base areas, the elevated areas and nearby areas obtained after merging correlate strongly with the elevated roads. This raises the correlation between the elevated roads and the sample data subsequently determined within those areas, providing a basis for determining the sample data accurately and improving training efficiency.
After the elevated areas and nearby areas are mined, the embodiment of the invention can determine the historical vehicle position data inside them as sample data and set marks for the samples. In an alternative implementation, fig. 7 shows a flow, provided by an embodiment of the present invention, for determining the marks of the sample data; it can be regarded as the flow executed by the training server in the sample marking stage, and it allows the marks to be set automatically. As shown in fig. 7, the flow may include:
Step S300: obtain the map-matched route and the historical navigation route corresponding to the sample data.
In an optional implementation, the embodiment of the invention obtains the historical driving track of the vehicle corresponding to a sample (i.e., the historical driving track containing the sample) and matches that track with a map matching algorithm to obtain the sample's map-matched route. The historical navigation route corresponding to the sample, i.e., the route provided by the navigation service during that historical trip, is obtained at the same time; since the vehicle may deviate from the navigation route while driving, the historical navigation route may consist of several discontinuous route pieces.
Step S310: determine the recognition result, in the map-matched route, of whether the sample data is on or under the elevated road, and the recognition result, in the historical navigation route, of whether the sample data is on or under the elevated road.
Step S320: if the recognition result in the map-matched route differs from that in the historical navigation route, set the mark of the sample data according to the result in the historical navigation route.
The map matching algorithm matches the historical driving track of a sample onto the roads of the road network to produce the map-matched route. Although map matching is a very mature technique, in elevated-road areas it cannot accurately match the vehicle position to the road actually driven using only information such as position, speed, and road-network topology. Meanwhile, data statistics show that the probability of a vehicle deviating from its navigation route on an elevated road is lower than the map matching algorithm's error rate there, for example roughly 3% deviation versus roughly 10% matching error. For elevated-road areas, the embodiment of the invention therefore preferentially trusts the on/under recognition result identified in the historical navigation route.
In the embodiment of the invention, the map-matched route corresponding to a sample carries a recognition result of whether the sample is on or under the elevated road, and so does the historical navigation route. When the two results differ, the result in the historical navigation route is trusted and used to set the sample's mark. For example, in fig. 8 the solid black line represents the historical navigation route and the dotted black line the map-matched route: if the historical navigation route is planned on the elevated road while the map-matched route runs under it, the embodiment of the invention believes the vehicle position is on the elevated road; similarly, when the historical navigation route runs under the elevated road and the map-matched route on it, the vehicle position is believed to be under the elevated road.
By setting the mark in this way, preferentially trusting the historical navigation route over the map-matched route, the marks of the sample data achieve higher accuracy, giving more reliable on/under marks and accurate mark guidance for the subsequent training of the position recognition model.
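The marking rule of steps S310/S320 reduces to a small decision, sketched here with illustrative 'on'/'under' values:

```python
def mark_sample(nav_result: str, map_match_result: str) -> str:
    """Return the on/under mark for a sample, where each argument is 'on' or
    'under'. When the two routes disagree, the historical navigation route is
    trusted preferentially, per the statistics cited in the text."""
    if nav_result != map_match_result:
        return nav_result       # trust the historical navigation route
    return map_match_result    # both agree; either value works
```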
After the sample data in the elevated areas and nearby areas is determined, the embodiment of the invention determines the characteristics of the samples in order to train the position recognition model on them. In an alternative implementation, the characteristics of a sample at least include the elevated road segment corresponding to it, which can be determined once the sample is determined. In a further optional implementation, the characteristics may additionally include: the projection distance from the sample to the elevated road of the elevated area, the satellite distribution characteristics corresponding to the sample, and the historical driving speed characteristics corresponding to the sample.
In an alternative implementation, fig. 9 shows an optional flow, provided by an embodiment of the present invention, for determining the characteristics of the sample data; it can be regarded as the flow executed by the training server in the characteristic determination stage and, as shown in fig. 9, may include:
Step S400: determine the elevated road segment corresponding to the sample data.
The embodiment of the present invention may cut the elevated road according to a preset second length to obtain a plurality of elevated road segments, each being a stretch of the elevated road of that length; the second length can be set according to the actual situation, for example 20 meters.
Any elevated road segment may carry information such as an identifier (e.g., an ID), the segment's starting-point distance, and the segment's ending-point distance, where the starting-point distance is the distance from the segment's starting point to the starting point of the elevated road, and the ending-point distance is the distance from the segment's ending point to the starting point of the elevated road.
In an optional implementation, whether the road matched by a sample is elevated or non-elevated, the sample has a projection point on the elevated road (when the matched road is non-elevated, this is the nearest projection point of the sample projected onto the elevated road(s) mapped from that non-elevated road). The embodiment of the invention can therefore determine the sample's elevated road segment from its projection point: the distance between the projection point and the starting point of the elevated road is computed, and the elevated road segment whose starting-point distance and ending-point distance bracket this value is taken as the segment corresponding to the sample. For example, for the nearest projection point, its distance from the starting point of the elevated road on which it lies is determined, and the segment whose starting-point distance and ending-point distance bracket that value is the segment corresponding to the sample.
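A sketch of locating the elevated road segment that brackets a projection point's offset, assuming the segments are sorted by starting-point distance; the dict layout is illustrative:

```python
import bisect

def find_elevated_segment(proj_offset: float, segments):
    """`segments` is assumed sorted by 'start_dist'; each entry is a dict with
    'id', 'start_dist', and 'end_dist', i.e., the distances from the elevated
    road's starting point described in the text. `proj_offset` is the distance
    from the road start to the sample's projection point."""
    starts = [s["start_dist"] for s in segments]
    i = bisect.bisect_right(starts, proj_offset) - 1  # last start <= offset
    if 0 <= i < len(segments) and proj_offset <= segments[i]["end_dist"]:
        return segments[i]["id"]
    return None  # offset falls outside the cut segments
```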
And step S410, determining the projection distance from the sample data to the elevated road.
Whether the road matched with the sample data is an elevated road or a non-elevated road, the embodiment of the invention can determine the projection distance from the sample data to the elevated road in the high-precision area, and if the road matched with the sample data is the non-elevated road, the projection distance refers to the projection distance of the nearest projection point.
And step S420, determining satellite distribution characteristics corresponding to the sample data.
In the case of using the satellite positioning technology, the in-vehicle terminal can receive GNSS (global navigation satellite system) information of various satellites, such as GPS (global positioning system) information, BD (Beidou) information, galileo satellite information, and the like; for any satellite, the embodiment of the present invention may obtain a satellite signal-to-noise ratio (SNR) and a satellite position corresponding to the sample data based on GNSS information of the sample data corresponding to the satellite (for example, GNSS information of the satellite acquired at a historical driving time point corresponding to the sample data), so as to determine a satellite distribution characteristic of the sample data corresponding to the satellite according to at least the satellite signal-to-noise ratio and the satellite position;
it should be noted that, if the vehicle-mounted terminal acquires GNSS information of multiple satellites at the historical driving time point corresponding to the sample data, the embodiment of the present invention may determine a satellite distribution characteristic of the sample data for each of the multiple satellites, so that the satellite distribution characteristics corresponding to the sample data are formed by the satellite distribution characteristics of the sample data corresponding to the multiple satellites.
In an optional implementation, if the vehicle-mounted terminal acquires GNSS information of multiple satellites at a historical travel time point corresponding to sample data, for any satellite, the embodiment of the present invention may determine the satellite distribution characteristics of the sample data corresponding to the satellite directly based on the satellite signal-to-noise ratio and the satellite position of the sample data in the satellite.
In another optional implementation, if the vehicle-mounted terminal acquires GNSS information of multiple satellites at a historical driving time point corresponding to sample data, for any satellite, the embodiment of the present invention may project the sample data at the satellite position of the satellite to a preset plane having multiple grids, to obtain a projected position of the satellite position at the preset plane, so as to determine a satellite distribution characteristic of the sample data corresponding to the satellite based on a satellite signal-to-noise ratio of the sample data at the satellite and the projected position; optionally, fig. 10 shows an optional process for determining satellite distribution characteristics corresponding to sample data, which is provided in an embodiment of the present invention, and for any satellite, in the embodiment of the present invention, the satellite distribution characteristics may be determined according to the process shown in fig. 10, so that satellite distribution characteristics corresponding to sample data are formed from satellite distribution characteristics corresponding to the sample data in multiple satellites;
referring to fig. 10, the process may include:
step S500, GNSS information of the sample data corresponding to the satellite is obtained.
The GNSS information of the sample data corresponding to the satellite can be understood as: the GNSS information of the satellite collected at the historical driving time point corresponding to the sample data.
Step S510, based on the satellite position in the GNSS information, projecting the satellite position to a preset plane having a plurality of grids, to obtain a projection position of the satellite position on the preset plane.
The embodiment of the invention can project the satellite position of the satellite to the preset plane with a plurality of grids so as to generate the projection position of the satellite position on the preset plane.
In an alternative implementation, the preset plane may be divided into a plurality of grids; for example, as shown in fig. 11, the preset plane is divided into 19 grids, and one grid may be a hexagonal grid. Of course, the number and shape of the grids illustrated in fig. 11 are only optional, and the number and shape of the grids in the preset plane may be set according to actual situations in the embodiment of the present invention.
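For illustration, one way to realize the projection and grid assignment is sketched below (the patent does not fix the projection; the azimuth/elevation sky-plot mapping and the nearest-center assignment are assumptions made for this sketch, and grid_centers would come from the chosen layout of fig. 11):

```python
# A sketch of projecting a satellite onto the preset plane and assigning it to
# the nearest grid cell. The azimuth/elevation polar mapping is an assumed,
# common choice for visualizing GNSS sky distributions, not from the original.
import math

def project_satellite(azimuth_deg, elevation_deg):
    """Map azimuth/elevation to (x, y) on the plane; the zenith is the origin
    and low-elevation satellites land far from the center."""
    r = 90.0 - elevation_deg
    a = math.radians(azimuth_deg)
    return r * math.sin(a), r * math.cos(a)

def nearest_grid(x, y, grid_centers):
    """Index of the grid cell whose center is closest to the projection."""
    return min(range(len(grid_centers)),
               key=lambda i: (x - grid_centers[i][0]) ** 2 +
                             (y - grid_centers[i][1]) ** 2)
```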
Step S520, determining satellite characteristics corresponding to the sample data in each grid of a preset plane respectively at least based on the satellite signal-to-noise ratio and the projection position provided in the GNSS information, and collecting the satellite characteristics corresponding to the sample data determined in each grid to obtain the satellite distribution characteristics corresponding to the sample data.
After projecting the satellite position to the preset plane with multiple grids, the embodiment of the present invention may determine, for each grid, the satellite feature corresponding to the sample data based on the satellite signal-to-noise ratio and the projection position; for example, as shown in fig. 11, if there are 19 grids in the preset plane, the embodiment of the present invention may determine the satellite features corresponding to the sample data in each of the 19 grids based on the satellite signal-to-noise ratio and the projection position, and collect the satellite features determined in the 19 grids to obtain the satellite distribution features corresponding to the sample data.
In an optional implementation, the embodiment of the present invention may determine, for any grid of the preset plane, satellite features based on at least the satellite signal-to-noise ratio and the projection position, where the satellite features are used to describe the signal intensity and distance distribution relationship of the satellites in the grid of the preset plane; specifically, the embodiment of the present invention may determine 3 types of satellite features of the sample data in a grid to describe the signal intensity and distance distribution relationship of the satellites in the grid of the preset plane, where the 3 types may include a first type of satellite feature f1, a second type of satellite feature f2, and a third type of satellite feature f3.
For any grid, in an optional implementation of determining the first type of satellite feature, the embodiment of the present invention can determine first satellites whose signal-to-noise ratio is not the preset signal-to-noise ratio, determine the distance between the projection position of each first satellite on the preset plane and the center of the grid, determine the position features of the first satellites in the grid according to at least the distance, and accumulate the position features of the first satellites in the grid to obtain the first type of satellite feature determined from the grid;
in particular, for any grid, the first type of satellite feature f1 can be expressed as

f1 = Σ_{si ≠ −1} wi

where si represents the signal-to-noise ratio of satellite i, −1 is the preset signal-to-noise ratio, and wi is the position feature in the grid of a first satellite whose signal-to-noise ratio is not the preset signal-to-noise ratio. The position feature wi decreases as the projection position of the first satellite moves away from the center of the grid; one form consistent with this description is

wi = exp(−di² / σ²)

where di represents the distance from the projection position of the first satellite on the preset plane to the center of the grid, and σ is a preset parameter. In the embodiment of the present invention, the preset signal-to-noise ratio of −1 may indicate that the corresponding satellite cannot be tracked and cannot be used in positioning.
For any grid, in an optional implementation of determining the second type of satellite feature, the embodiment of the present invention may determine second satellites whose signal-to-noise ratio equals the preset signal-to-noise ratio, determine the distance between the projection position of each second satellite on the preset plane and the center of the grid, determine the position features of the second satellites in the grid according to at least the distance, and accumulate the position features of the second satellites in the grid to obtain the second type of satellite feature determined from the grid;
in particular, the second type of satellite feature f2 can be expressed as

f2 = Σ_{si = −1} wi

with wi defined as above.
For any grid, in an optional implementation of determining the third type of satellite feature, the embodiment of the present invention may combine the position feature of each first satellite in the grid with the signal-to-noise ratio of that first satellite to obtain the combined features of the first satellites in the grid, accumulate the combined features of the first satellites in the grid to obtain an accumulated result, and then divide the accumulated result by the product of the first type of satellite feature determined from the grid and the maximum satellite signal-to-noise ratio, to obtain the third type of satellite feature determined from the grid;
in particular, the third type of satellite feature f3 can be expressed as

f3 = ( Σ_{si ≠ −1} wi × si ) / ( f1 × m )

where m represents the maximum signal-to-noise ratio of the satellite.
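For illustration, the three per-grid features can be computed as sketched below (the Gaussian position weight and the max_snr constant are assumptions consistent with, but not fixed by, the description above):

```python
# A sketch of the three per-grid satellite features: f1 accumulates position
# weights of tracked satellites (SNR != -1), f2 of untracked ones (SNR == -1),
# and f3 is the SNR-weighted accumulation normalized by f1 * m.
import math

def grid_features(sats, sigma=30.0, max_snr=60.0):
    """sats: (snr, distance-to-grid-center) pairs of the satellites whose
    projection falls in this grid; sigma and max_snr are illustrative values."""
    f1 = f2 = weighted = 0.0
    for snr, d in sats:
        w = math.exp(-(d * d) / (sigma * sigma))  # assumed form of the weight w_i
        if snr != -1:
            f1 += w                 # first type: tracked satellites
            weighted += w * snr     # used by the third type
        else:
            f2 += w                 # second type: untracked satellites
    f3 = weighted / (f1 * max_snr) if f1 > 0 else 0.0
    return f1, f2, f3
```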
It can be understood that, in an optional implementation, for a satellite, the embodiment of the present invention may determine the satellite features corresponding to the sample data from each of the multiple grids of the preset plane, and collect the satellite features determined in each grid to obtain the satellite distribution feature of the sample data corresponding to the satellite; the satellite features corresponding to the sample data determined in one grid can have 3 types of satellite features, used to describe the signal intensity and distance distribution relationship of the satellite in that grid of the preset plane. As can be seen, for one satellite type, the dimensionality of the satellite distribution feature corresponding to the sample data determined in the embodiment of the present invention is: number of grids × 3; for example, taking a preset plane with 19 grids as an example, for one satellite type, the dimension of the satellite distribution feature of the sample data is 19 × 3;
further, when multiple satellite types (for example, GPS satellites, BeiDou satellites, etc.) are used, the satellite distribution characteristics corresponding to the sample data are formed by the satellite distribution characteristics of the sample data corresponding to the various satellite types, and therefore the dimensions of the satellite distribution characteristics corresponding to the sample data are: number of satellite types × number of grids × 3; for example, if 4 satellite types are used and the preset plane has 19 grids, the dimension of the satellite distribution feature corresponding to the sample data is 4 × 19 × 3. Of course, only one satellite type may be used in the embodiment of the present invention, and the use of multiple satellite types may be considered an alternative means of the embodiment of the present invention.
According to the embodiment of the invention, the satellite position provided in the GNSS information is projected to the preset plane based on the GNSS information of the sample data corresponding to the satellite, so that the satellite distribution characteristics corresponding to the sample data are determined on the preset plane, and the obtained satellite distribution characteristics can be more suitable for an overhead scene.
Returning to the flow shown in fig. 9, the embodiment of the present invention may execute step S430 to determine the historical driving speed characteristics corresponding to the sample data.
In an alternative implementation, fig. 12 shows an alternative process for determining the historical driving speed characteristic corresponding to the sample data, and as shown in fig. 12, the process may include:
s600, searching track points of the vehicle, and dividing the track points into unit groups by taking a set distance as a unit from the newly collected track points to form a plurality of unit groups; and the distance between the starting track point and the ending track point in one unit group is within a set distance.
In an example, taking a set distance of 30 meters: after a plurality of track points of the vehicle are collected, the track points can be divided into unit groups in units of 30 meters, starting from the most recently collected track point, so as to form a plurality of unit groups, where the distance between the starting track point and the ending track point in one unit group is within 30 meters; the most recently collected track point can be considered the vehicle position at the latest moment, and the track points can be divided into unit groups in order of time from latest to earliest. For example, in fig. 13, the distance between the most recently collected track point P1 and track point P2 is 19 meters, within 30 meters, while the distance between track point P1 and track point P3 is 36 meters, exceeding 30 meters; the embodiment of the present invention can therefore divide P1 and P2 into unit group 1, divide P3 into unit group 2, and so on, thereby controlling the distance between the head and tail track points of each unit group to be within 30 meters. It should be noted that the specific value of the set distance can be set according to actual conditions, and this value is only exemplary.
Step S610, determining average traveling speeds corresponding to the positions in the unit group based on the historical traveling speeds of the track points of the positions in the unit group.
Alternatively, a plurality of positions may be set in one unit group, and the embodiment of the invention may determine the number of track points and the historical travel speed for each position in the unit group, thereby determining the average travel speed corresponding to each position in the unit group based on the number of track points and the historical travel speed for each position in the unit group; for example, for a position in the unit group, the sum of the historical traveling speeds of the track points of the position in the unit group is divided by the number of the track points of the position, so that the average traveling speed corresponding to the position in the unit group can be obtained.
And S620, determining the position of the sample data in the unit group, and determining the historical driving speed characteristic corresponding to the sample data according to the average driving speed corresponding to the position of the sample data in the unit group.
After the sample data is determined, the unit group where the sample is located and the position of the sample data in the unit group where the sample data is located can be determined based on the position of the sample data, so that the average traveling speed corresponding to the position of the sample data in the unit group where the sample data is located is determined as the historical traveling speed characteristic corresponding to the sample data.
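For illustration, steps S600 to S620 can be sketched as follows (a minimal sketch: dist_fn is an assumed helper returning meters between two positions, and the per-position averaging inside a unit group is simplified here to one average per group):

```python
# A sketch of grouping track points into unit groups (newest first) and
# averaging historical speeds per group, following the 30-meter example above.
def build_unit_groups(points, dist_fn, set_distance=30.0):
    """points: (position, speed) track points ordered from most recently
    collected to oldest."""
    groups, current = [], []
    for pos, speed in points:
        # close the group if the head-to-tail span would exceed the set distance
        if current and dist_fn(current[0][0], pos) > set_distance:
            groups.append(current)
            current = []
        current.append((pos, speed))
    if current:
        groups.append(current)
    return groups

def average_speed(group):
    """Mean historical speed over the track points of one unit group."""
    return sum(speed for _, speed in group) / len(group)
```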
Based on the flow shown in fig. 9, the embodiment of the present invention may determine 4 types of features of sample data, including: the method comprises the steps of obtaining an elevated road section corresponding to sample data, the projection distance from the sample data to the elevated road, satellite distribution characteristics corresponding to the sample data and historical driving speed characteristics corresponding to the sample data; it should be noted that, determining the 4 types of features as the features of the sample data is only an optional manner, and in the embodiment of the present invention, one or more types of features may be selected as the features of the sample data; of course, the more feature types of the sample data, the more accurate the position recognition model obtained by the subsequent training based on the features of the sample data is.
After the characteristics of the sample data are obtained, the embodiment of the invention can train a machine learning model according to the characteristics and the marks of the sample data so as to obtain a position identification model suitable for an elevated scene; optionally, fig. 14 illustrates a training execution flow provided by the embodiment of the present invention, where the flow illustrated in fig. 14 may be regarded as a flow executed by the training server in a training execution stage, and as illustrated in fig. 14, the flow may include:
and S700, inputting the characteristic marked as the sample data on the elevated into the machine learning model, acquiring the characteristic vector of the sample on the elevated road section output by the machine learning model, and inputting the characteristic marked as the sample data under the elevated into the machine learning model, and acquiring the characteristic vector of the sample under the elevated on the elevated road section output by the machine learning model.
In an alternative implementation, if the features of the sample data include an elevated road segment (represented by an identifier of the elevated road segment) corresponding to the sample, a projection distance from the sample data to the elevated road, a satellite distribution feature corresponding to the sample data, and a historical driving speed feature corresponding to the sample data, as an example, fig. 15 shows a structural example of a machine learning model, as shown in fig. 15:
for any satellite type, taking as an example that the satellite distribution characteristics of the sample data corresponding to the satellite include satellite features determined from 19 grids of the preset plane, the satellite features determined in each grid may each be processed by a plurality of FCBN layers with weights and biases and then input into a concat layer; optionally, an FCBN layer may be a fully connected layer using a ReLU (Rectified Linear Unit) activation function and a BatchNorm mechanism, where BatchNorm is an algorithm often used in deep networks to accelerate neural network training and improve convergence speed and stability;
the identifier of the elevated road segment corresponding to the sample data is processed by an embedding layer with weights and biases and then input into the concat layer, where the embedding layer can be regarded as a neural network layer capable of converting an identifier into a feature vector of fixed dimension;
directly inputting the projection distance from the sample data to the elevated road into a concat layer;
the historical driving speed characteristic corresponding to the sample data is processed by a plurality of GRU (Gated Recurrent Unit) layers with weights and biases and then input into the concat layer;
after the concat layer concatenates the input data, the concatenation result is processed by a plurality of FCBN layers with weights and biases, and the processing result is fed into an FC (Fully Connected) layer with weights and biases for full connection, so as to output the feature vector of the sample data.
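For illustration, the fig. 15 architecture can be sketched in PyTorch as below (a minimal sketch: the layer counts, hidden sizes, embedding dimension, and the single-constellation input are illustrative choices, since the text does not specify them):

```python
# A sketch of the fig. 15 model: per-grid FCBN stacks, a segment-identifier
# embedding, the raw projection distance, and a GRU over the speed sequence,
# concatenated and passed through FCBN + FC to produce the feature vector.
import torch
import torch.nn as nn

class FCBN(nn.Module):
    """Fully connected layer followed by BatchNorm and ReLU (the FCBN block)."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.bn = nn.BatchNorm1d(d_out)

    def forward(self, x):
        return torch.relu(self.bn(self.fc(x)))

class PositionModel(nn.Module):
    def __init__(self, n_segments, n_grids=19, feats_per_grid=3,
                 emb_dim=16, hidden=32, out_dim=64):
        super().__init__()
        self.grid_nets = nn.ModuleList(
            [FCBN(feats_per_grid, hidden) for _ in range(n_grids)])
        self.seg_emb = nn.Embedding(n_segments, emb_dim)   # id -> fixed-dim vector
        self.gru = nn.GRU(input_size=1, hidden_size=hidden,
                          num_layers=2, batch_first=True)  # speed sequence
        concat_dim = n_grids * hidden + emb_dim + 1 + hidden
        self.head = nn.Sequential(FCBN(concat_dim, hidden),
                                  nn.Linear(hidden, out_dim))  # final FC layer

    def forward(self, sat_feats, seg_id, proj_dist, speed_seq):
        # sat_feats: (B, n_grids, feats_per_grid); seg_id: (B,) long;
        # proj_dist: (B, 1); speed_seq: (B, T, 1)
        grid_out = [net(sat_feats[:, i, :]) for i, net in enumerate(self.grid_nets)]
        _, h = self.gru(speed_seq)               # h: (num_layers, B, hidden)
        x = torch.cat(grid_out + [self.seg_emb(seg_id), proj_dist, h[-1]], dim=1)
        return self.head(x)                      # feature vector of the sample
```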
The machine learning model illustrated in fig. 15 is implemented with fully connected layers and has no convolutional layers or other heavy structures, so it has the advantages of being lightweight and structurally simple and can easily be deployed on a terminal, making it possible for the terminal to deploy the trained position recognition model. It should be noted that the structure of the machine learning model illustrated in fig. 15 is only optional, and machine learning models with other structures may also be used in the embodiment of the present invention.
In the embodiment of the present invention, because each piece of sample data is marked as being on the elevated or under the elevated, when the features of the sample data are input into the machine learning model, for sample data marked as being on the elevated, the embodiment of the present invention may obtain the feature vectors of the on-elevated samples of the elevated road segments output by the machine learning model, and for sample data marked as being under the elevated, the embodiment of the present invention may obtain the feature vectors of the under-elevated samples of the elevated road segments output by the machine learning model;
specifically, after the elevated road is segmented into a plurality of elevated sections, each elevated section may have an elevated upper region and an elevated lower region; for any elevated road section, the embodiment of the present invention may determine sample data of the elevated road section, and specifically, on the basis of determining the elevated road section corresponding to the sample data, the embodiment of the present invention may determine the sample data of the elevated road section;
if the sample data of the elevated road segment is marked as being on the elevated, the sample data belongs to the on-elevated samples of the elevated road segment, and the embodiment of the present invention can obtain the feature vectors of the on-elevated samples of the elevated road segment output by the machine learning model; if the sample data of the elevated road segment is marked as being under the elevated, the sample data belongs to the under-elevated samples of the elevated road segment, and the feature vectors of the under-elevated samples of the elevated road segment output by the machine learning model can be obtained.
Step S710, defining a loss function according to the feature vectors of the on-elevated samples and the under-elevated samples of the elevated road segments, and training the machine learning model with the loss function to obtain a position recognition model suitable for the elevated scene.
For any elevated road segment, after determining the feature vectors of its on-elevated samples and under-elevated samples, the embodiment of the present invention can define the loss function based on those feature vectors, so as to train the machine learning model with the loss function and obtain the trained position recognition model.
In an alternative implementation, the loss function may be defined at least as: the distance between the on-elevated center feature vector and the under-elevated center feature vector of an elevated road segment is maximized;
the on-elevated center feature vector of an elevated road segment may be the mean of the feature vectors of the on-elevated samples of that elevated road segment; specifically, for any elevated road segment, after the feature vectors of its on-elevated samples are obtained, the embodiment of the present invention may take their mean as the on-elevated center feature vector of the elevated road segment;
the under-elevated center feature vector of an elevated road segment may be the mean of the feature vectors of the under-elevated samples of that elevated road segment; specifically, for any elevated road segment, after the feature vectors of its under-elevated samples are obtained, the embodiment of the present invention may take their mean as the under-elevated center feature vector of the elevated road segment;
based on this, in an optional implementation of step S710, the embodiment of the present invention may iteratively adjust the parameters of the machine learning model according to the on-elevated center feature vector and the under-elevated center feature vector of each elevated road segment, so that the distance between the on-elevated center feature vector and the under-elevated center feature vector of an elevated road segment determined by the machine learning model based on the adjusted parameters is maximized, so as to obtain the trained position recognition model;
that is, after determining the on-elevated center feature vector and the under-elevated center feature vector of each elevated road segment based on the feature vectors of the sample data output by the machine learning model, the embodiment of the present invention may take maximizing the distance between the on-elevated center feature vector and the under-elevated center feature vector of the same elevated road segment as the target, and adjust the parameters of the machine learning model so that this distance, recomputed from the feature vectors output by the adjusted model, becomes larger; in this way, the parameters of the machine learning model are iteratively adjusted until the distance between the on-elevated center feature vector and the under-elevated center feature vector of the same elevated road segment is maximized, thereby completing training and obtaining the trained position recognition model.
In a more specific implementation, the loss function may be a loss function in the form of a triplet-center loss, which may be defined as: the distance between the feature vector of an on-elevated sample of an elevated road segment and the on-elevated center feature vector is minimized; the distance between the feature vector of an on-elevated sample and the under-elevated center feature vector is maximized; the distance between the feature vector of an under-elevated sample of the elevated road segment and the under-elevated center feature vector is minimized; the distance between the feature vector of an under-elevated sample and the on-elevated center feature vector is maximized; and the distance between the on-elevated center feature vector and the under-elevated center feature vector of the elevated road segment is maximized. Based on this, in a more specific implementation of step S710, the embodiment of the present invention may iteratively adjust the parameters of the machine learning model according to the feature vectors of the on-elevated samples and the under-elevated samples of each elevated road segment, so that all five of the above distance conditions are driven toward their objectives by the adjusted parameters, so as to obtain the trained position recognition model.
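For illustration, one training step with this objective can be sketched as below (a minimal sketch: the margin-based hinge is a standard way to express the minimize/maximize pairs of a triplet-center style loss and is an assumption, as the text does not give an explicit formula):

```python
# A sketch of a triplet-center style loss for one elevated road segment:
# pull each sample toward its own center, push it from the other center,
# and push the two centers apart.
import torch

def triplet_center_loss(feats, labels, c_on, c_under, margin=1.0):
    """feats: (B, D) sample feature vectors; labels: (B,) with 1 = on-elevated,
    0 = under-elevated; c_on / c_under: (D,) center feature vectors."""
    d_on = torch.norm(feats - c_on, dim=1)
    d_under = torch.norm(feats - c_under, dim=1)
    d_own = torch.where(labels == 1, d_on, d_under)    # distance to own center
    d_other = torch.where(labels == 1, d_under, d_on)  # distance to other center
    sample_term = torch.relu(d_own - d_other + margin).mean()
    center_term = torch.relu(margin - torch.norm(c_on - c_under))
    return sample_term + center_term
```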
In another alternative implementation, the loss function may be defined as: the distance between the feature vectors of the on-elevated samples of an elevated road segment is minimized, the distance between the feature vectors of the under-elevated samples of the elevated road segment is minimized, and the distance between the feature vectors of the on-elevated samples and the feature vectors of the under-elevated samples of the elevated road segment is maximized. Based on this, in an optional implementation of step S710, the embodiment of the present invention may iteratively adjust the parameters of the machine learning model according to the feature vectors of the on-elevated samples and the under-elevated samples of each elevated road segment, so that these three distance conditions are driven toward their objectives by the adjusted parameters, so as to obtain the trained position recognition model.
It should be noted that, in the case of using the triplet-center loss, the on-elevated center feature vector and the under-elevated center feature vector of an elevated road segment can be obtained from the machine learning model without additional calculation; in the case of using the triplet loss, the on-elevated center feature vector and the under-elevated center feature vector of an elevated road segment need to be calculated in addition to the machine learning model.
Based on the method for training the position recognition model provided by the embodiment of the invention, the embodiment of the invention can train the position recognition model capable of recognizing whether the vehicle position is on the overhead or under the overhead, so that the position recognition model can recognize whether the vehicle position is on the overhead or under the overhead based on the characteristics of the vehicle position, and a basis can be provided for accurately recognizing whether the vehicle position is on the overhead or under the overhead.
The following describes a scheme for identifying whether the vehicle is located on the overhead or under the overhead according to an embodiment of the present invention. In an alternative implementation, fig. 16 shows an alternative flow of the location identification method provided by the embodiment of the present invention, where the method may be executed by a vehicle-mounted terminal, for example, by a processor such as a CPU (central processing unit), a GPU (graphics processing unit), an NPU (neural network processor) of the vehicle-mounted terminal, and as shown in fig. 16, the flow may include:
and S800, acquiring the position of the vehicle, and determining the characteristic corresponding to the position of the vehicle.
According to the embodiment of the invention, the vehicle position can be obtained through a positioning technology, and the corresponding characteristics of the vehicle position are determined; for example, based on the vehicle location, embodiments of the present invention may determine an elevated road segment corresponding to the vehicle location, and thereby determine a feature corresponding to the vehicle location based at least on the elevated road segment.
In an alternative implementation, the features corresponding to the vehicle position may include 4 types of features, such as an elevated road segment corresponding to the vehicle position, a projected distance from the vehicle position to an elevated road, a satellite distribution feature corresponding to the vehicle position, and a historical driving speed feature corresponding to the vehicle position. For the specific determination of the above-mentioned 4 types of features, reference may be made to the description of the corresponding parts, and the detailed description is omitted here. Of course, the vehicle location determined by embodiments of the present invention may be characterized using one or more of the above-described categories 4 features.
Step S810, inputting the characteristics corresponding to the vehicle position into a position identification model in model data to obtain the characteristic vector of the vehicle position output by the position identification model.
The vehicle-mounted terminal can request the training server to download the model data so as to obtain the model data sent by the training server, the model data at least comprises the position recognition model trained based on the method for training the position recognition model provided by the embodiment of the invention, and the position recognition model can comprise a model structure of the position recognition model and trained model parameters.
In a further alternative implementation, the model data may further include: the on-elevated center feature vector and the under-elevated center feature vector of each elevated road segment, representation information of the elevated road segments (such as the identifier and the position range of each elevated road segment), etc. It should be noted that, based on the trained position recognition model, the training server may determine the on-elevated center feature vector from the on-elevated samples of an elevated road segment, and determine the under-elevated center feature vector from the under-elevated samples of the elevated road segment.
It should be noted that the training server may update the model data periodically, for example, when there are new or removed elevated roads in a geographic area such as a city, a country, etc., the training server may adaptively update the model data, so that when the training server updates the model data, the vehicle-mounted terminal may obtain the updated model data in a manner of full data update. The model data may be divided according to a geographic range, for example, the model data may be divided according to different cities, so that the vehicle-mounted terminal may request the training server for the model data corresponding to the geographic range when entering a geographic range, for example, the vehicle-mounted terminal may request the training server for the model data corresponding to a city when entering a city.
Optionally, the model data may be acquired by the vehicle-mounted terminal in advance.
After determining the features of the vehicle position, the embodiments of the present invention may input the features of the vehicle position into the position recognition model in the model data, and obtain the feature vector of the vehicle position output by the position recognition model. An example of the feature vector output by the location identification model is shown in the previous fig. 15, and is not described here.
And S820, obtaining the identification result of whether the vehicle position is located on the overhead or under the overhead according to the characteristic vector of the vehicle position.
Based on the feature vector of the vehicle position output by the position identification model, the embodiment of the invention can obtain the identification result of whether the vehicle position is positioned on the overhead or under the overhead.
In an optional implementation, the embodiment of the present invention may determine the elevated road segment corresponding to the vehicle position, determine an on-elevated distance between the feature vector of the vehicle position and the on-elevated center feature vector of the elevated road segment, and determine an under-elevated distance between the feature vector of the vehicle position and the under-elevated center feature vector of the elevated road segment; the recognition result of whether the vehicle position is on the elevated or under the elevated is then obtained according to the on-elevated distance and the under-elevated distance.
In a specific implementation, if the on-elevated distance is smaller than the under-elevated distance, the feature vector of the vehicle position is closer to the on-elevated center feature vector and farther from the under-elevated center feature vector, so the vehicle position can be determined to be on the elevated; if the under-elevated distance is smaller than the on-elevated distance, the feature vector of the vehicle position is closer to the under-elevated center feature vector and farther from the on-elevated center feature vector, so the vehicle position can be determined to be under the elevated.
In a more specific implementation of obtaining the recognition result of whether the vehicle position is on the elevated or under the elevated, the embodiment of the present invention may provide a confidence threshold policy: calculate an on-elevated confidence threshold and an under-elevated confidence threshold for each elevated road segment, determine the confidence of the vehicle position based on the on-elevated distance and the under-elevated distance corresponding to the vehicle position, and obtain the recognition result by comparing the confidence with the on-elevated confidence threshold and the under-elevated confidence threshold;
in an alternative implementation, fig. 17 shows an alternative process for calculating the on-elevated confidence threshold and the under-elevated confidence threshold of an elevated road segment; the process may be performed by the training server, which may store the determined on-elevated confidence threshold and under-elevated confidence threshold of each elevated road segment in the model data. Referring to fig. 17, the process may include:
and step S900, determining a verification sample from the sample data of the elevated road section.
In a stage of training the position recognition model, the training server may determine a part of samples from the sample data of the elevated road segment as verification samples (the verification samples may not participate in the training of the position recognition model).
Step S910, determining a feature vector of the verification sample output by the position identification model based on the feature of the verification sample.
For the verification sample of the elevated road section, the embodiment of the invention can determine the characteristics of the verification sample, so that the characteristics of the verification sample are input into the position identification model, and the characteristic vector of the verification sample output by the position identification model is obtained.
And step S920, determining the distances between the feature vector of the verification sample and, respectively, the on-elevated center feature vector and the under-elevated center feature vector of the elevated road segment.
Based on the determined feature vector of the verification sample, the embodiment of the present invention can calculate the distance between the feature vector of the verification sample and the on-elevated center feature vector of the elevated road segment, and the distance between the feature vector of the verification sample and the under-elevated center feature vector of the elevated road segment.
And step S930, determining the confidence of the verification sample according to the calculated distance.
According to the distance between the feature vector of the verification sample and the on-elevated center feature vector of the elevated road segment, and the distance between the feature vector of the verification sample and the under-elevated center feature vector of the elevated road segment, the confidence of the verification sample can be calculated; in an alternative implementation, the calculation formula may, for example, take the normalized form

T = ‖C − B‖ / ( ‖C − A‖ + ‖C − B‖ )

where T represents the confidence that the verification sample belongs to being on the elevated of the elevated road segment, A represents the on-elevated center feature vector of the elevated road segment, B represents the under-elevated center feature vector of the elevated road segment, and C represents the feature vector of the verification sample; under this form, T approaches 1 when C is close to A and approaches 0 when C is close to B.
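For illustration, this confidence can be computed as sketched below (a minimal sketch of the assumed normalized form given above):

```python
# Confidence that a feature vector C belongs on the elevated: near 1 when C is
# close to the on-elevated center A, near 0 when close to the under-elevated
# center B.
import numpy as np

def on_elevated_confidence(C, A, B):
    d_on = np.linalg.norm(C - A)
    d_under = np.linalg.norm(C - B)
    return d_under / (d_on + d_under)
```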
And S940, determining the on-elevated confidence threshold and the under-elevated confidence threshold of the elevated road segment according to the confidences of the verification samples.
After the confidence of each verification sample of the elevated road segment is determined, the on-elevated confidence threshold and the under-elevated confidence threshold of the elevated road segment can be determined according to the confidences of the verification samples of the elevated road segment.
Optionally, the on-elevated confidence threshold of the elevated road segment may be: the confidence, determined from the confidences of the verification samples, that maximizes the accuracy with which verification samples are identified as being on the elevated;
specifically, the confidence of each verification sample can be used in turn as a candidate confidence threshold, so that, for all verification samples of the elevated road segment, a verification sample is identified as being under the elevated if its confidence is less than the candidate confidence threshold, and identified as being on the elevated if its confidence is greater than or equal to the candidate confidence threshold; furthermore, the embodiment of the present invention may calculate a first weighted harmonic mean f1 corresponding to the candidate confidence threshold, where the first weighted harmonic mean represents the accuracy with which verification samples are identified as being on the elevated based on the candidate confidence threshold; specifically, f1 = (1 + β²) × p1 × r1 / (β² × p1 + r1), where p1 represents the precision with which verification samples of the elevated road segment are identified as being on the elevated, r1 represents the recall with which verification samples of the elevated road segment are identified as being on the elevated, and β represents the weight ratio of p1 to r1;
by taking the confidence of each verification sample of the elevated road segment as a candidate confidence threshold in turn, the embodiment of the present invention obtains the first weighted harmonic mean f1 corresponding to each candidate confidence threshold, and selects the candidate confidence threshold with the maximum f1 as the on-elevated confidence threshold of the elevated road segment.
Optionally, the under-elevated confidence threshold of the elevated road segment may be: the confidence, determined from the confidences of the verification samples, that maximizes the accuracy with which verification samples are identified as being under the elevated;
specifically, the confidence of each verification sample can again be used in turn as a candidate confidence threshold, so that, for all verification samples of the elevated road segment, a verification sample is identified as being under the elevated if its confidence is less than the candidate confidence threshold, and identified as being on the elevated if its confidence is greater than or equal to the candidate confidence threshold; furthermore, the embodiment of the present invention may calculate a second weighted harmonic mean f2 corresponding to the candidate confidence threshold, where the second weighted harmonic mean represents the accuracy with which verification samples are identified as being under the elevated based on the candidate confidence threshold; specifically, f2 = (1 + β²) × p2 × r2 / (β² × p2 + r2), where p2 represents the precision with which verification samples of the elevated road segment are identified as being under the elevated, r2 represents the recall with which verification samples of the elevated road segment are identified as being under the elevated, and β represents the weight ratio of p2 to r2; the candidate confidence threshold with the maximum f2 may then be selected as the under-elevated confidence threshold of the elevated road segment.
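For illustration, the threshold search for one elevated road segment can be sketched as below (shown for the on-elevated threshold via f1; the under-elevated threshold is found the same way by scoring identification as being under the elevated with f2):

```python
# Try each verification-sample confidence as a candidate threshold, split the
# samples at the candidate, and keep the candidate with the best F-beta score.
def f_beta(p, r, beta=1.0):
    """Weighted harmonic mean of precision p and recall r."""
    return (1 + beta ** 2) * p * r / (beta ** 2 * p + r) if p and r else 0.0

def best_on_elevated_threshold(confidences, on_labels, beta=1.0):
    """confidences: per-sample T values; on_labels: True = truly on-elevated."""
    best_t, best_score = None, -1.0
    for cand in confidences:
        pred = [c >= cand for c in confidences]   # >= candidate -> on-elevated
        tp = sum(p and y for p, y in zip(pred, on_labels))
        fp = sum(p and not y for p, y in zip(pred, on_labels))
        fn = sum(not p and y for p, y in zip(pred, on_labels))
        p_ = tp / (tp + fp) if tp + fp else 0.0
        r_ = tp / (tp + fn) if tp + fn else 0.0
        score = f_beta(p_, r_, beta)
        if score > best_score:
            best_t, best_score = cand, score
    return best_t
```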
Therefore, after determining the feature vector of the vehicle position, the vehicle-mounted terminal can calculate the on-elevated distance between the feature vector of the vehicle position and the on-elevated center feature vector of the elevated road segment, and the under-elevated distance between the feature vector of the vehicle position and the under-elevated center feature vector of the elevated road segment, and determine the confidence of the vehicle position based on the calculated on-elevated distance and under-elevated distance. If the vehicle-mounted terminal determines that the confidence of the vehicle position is greater than or equal to the on-elevated confidence threshold of the elevated road segment and greater than or equal to the preset threshold, it can determine that the vehicle position is on the elevated of the elevated road segment; if the vehicle-mounted terminal determines that the confidence of the vehicle position is less than or equal to the under-elevated confidence threshold of the elevated road segment and less than the preset threshold, it determines that the vehicle position is under the elevated of the elevated road segment. The value of the preset threshold may be set according to actual conditions, for example 0.5, and the embodiment of the present invention is not limited thereto.
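For illustration, the decision rule reads as follows (a minimal sketch; how a caller treats the middle band where neither test is met is left open by the text):

```python
def classify_position(T, on_threshold, under_threshold, preset=0.5):
    """Apply the segment's learned thresholds together with the preset."""
    if T >= on_threshold and T >= preset:
        return "on_elevated"
    if T <= under_threshold and T < preset:
        return "under_elevated"
    return "undecided"
```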
The position recognition method provided by the embodiment of the invention can accurately recognize whether the vehicle position is on the overhead or under the overhead based on the position recognition model trained by the training server.
Based on the recognition result of whether the vehicle position is on the elevated or under the elevated as identified by the vehicle-mounted terminal, the vehicle-mounted terminal can request a navigation route from the navigation server at least according to the recognition result, so that the navigation service is provided for the vehicle based on the recognition result; scenes requesting the navigation service include a starting point road-catching scene, a yaw identification scene, and the like.
In an optional implementation, fig. 18 shows a navigation interaction flow in a starting point road-catching scene, as shown in fig. 18, where the starting point road-catching refers to binding a starting point position of a route planning request initiated by a user to an actual road through acquired user identification information, and the flow may include:
step S10, the in-vehicle terminal determines the vehicle position.
Alternatively, the vehicle-mounted terminal may determine the vehicle position through satellite positioning technology, and the vehicle position may be the current position of the vehicle.
And step S11, the vehicle-mounted terminal determines an elevated road section corresponding to the vehicle position, and determines the characteristics of the vehicle position at least based on the elevated road section.
In step S12, the in-vehicle terminal determines the result of recognition of whether the vehicle position is on-overhead or off-overhead based on the characteristics of the vehicle position.
The implementation manner of the recognition result of the vehicle-mounted terminal determining whether the vehicle position is located on the overhead or under the overhead based on the characteristics of the vehicle position may refer to the description of the corresponding parts, and is not described herein again.
And step S13, the vehicle-mounted terminal sends a navigation request to the navigation server according to the recognition result and the vehicle position.
And step S14, the navigation server determines a navigation starting point road corresponding to the vehicle position according to the identification result and the vehicle position.
And step S15, the navigation server sends the navigation route containing the navigation starting point road to the vehicle-mounted terminal.
It should be noted that, if the recognition result provided by the vehicle-mounted terminal is that the vehicle position is on the elevated, the navigation server may determine the navigation starting point road corresponding to the vehicle position based on the elevated road segment corresponding to the vehicle position, the recognition result that the vehicle position is on the elevated of that segment, and the vehicle position, so that the navigation route can guide the vehicle to travel from a navigation starting point road on the elevated; if the recognition result provided by the vehicle-mounted terminal is that the vehicle position is under the elevated, the navigation route can guide the vehicle to travel from a navigation starting point road under the elevated. It should also be noted that one or more auxiliary roads generally exist under the elevated of an elevated road; in a starting point road-catching scene, if the vehicle-mounted terminal identifies that the vehicle position is under the elevated, the embodiment of the present invention may select the auxiliary road closest to the vehicle position to add to the navigation route.
The yaw recognition scenario refers to a scenario in which a driving route of the vehicle deviates from a navigation route, and in an alternative implementation, fig. 19 shows a navigation interaction flow in the yaw recognition scenario, and as shown in fig. 19, the flow may include:
and step S20, the vehicle-mounted terminal acquires the current navigation route.
Alternatively, the in-vehicle terminal may request a current navigation route between the driving start point and the end point to the navigation server.
Step S21, if the recognition results of whether a consecutive plurality of vehicle positions are on the elevated or under the elevated differ from the on-elevated or under-elevated attribute of the current navigation route, the vehicle-mounted terminal determines that the vehicle is yawing.
The navigation route can carry an attribute identifying whether the vehicle positions along it should be on the elevated or under the elevated; after the position recognition method provided by the embodiment of the present invention identifies whether a consecutive plurality of vehicle positions are on the elevated or under the elevated, if the recognition results differ from that attribute of the navigation route, the embodiment of the present invention can determine that the vehicle has deviated from the navigation route. For example, if 10 consecutive vehicle positions are identified as being under the elevated based on the position recognition method provided by the embodiment of the present invention, while the navigation route indicates those 10 vehicle positions should be on the elevated, the embodiment of the present invention may determine that the vehicle is yawing.
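For illustration, the yaw check of step S21 can be sketched as below (a minimal sketch, assuming boolean recognition results and the 10-position window of the example):

```python
def is_yawing(recent_on_elevated, route_on_elevated, n=10):
    """recent_on_elevated: recognition results of the latest vehicle positions
    (True = on the elevated), oldest to newest; route_on_elevated: the
    on/under-elevated attribute of the current navigation route."""
    if len(recent_on_elevated) < n:
        return False
    return all(r != route_on_elevated for r in recent_on_elevated[-n:])
```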
And step S22, the vehicle-mounted terminal requests the navigation server to re-plan the navigation route according to the current vehicle position and the identification result of whether the current vehicle position is on the overhead or under the overhead.
And step S23, the navigation server replans the navigation route according to the current vehicle position and the identification result of whether the current vehicle position is on the overhead or under the overhead.
The navigation server can replan navigation route from the current vehicle position to the navigation terminal based on the current vehicle position, the elevated road section corresponding to the current vehicle position and the identification result of whether the current vehicle position is located on the elevated of the corresponding elevated road section; it is understood that if the current vehicle position is on the overhead, the re-planned navigation route may be routed from the current vehicle position to the navigation terminal via the overhead road, and if the current vehicle position is off the overhead, the re-planned navigation route may be routed from the current vehicle position to the navigation terminal via the off-overhead road.
And step S24, the navigation server sends the re-planned navigation route to the vehicle-mounted terminal.
It should be noted that the navigation route request in the starting point road-catching scene and the navigation route request in the yaw recognition scene are only an optional example of the navigation route request based on the recognition result determined by the position recognition method in the embodiment of the present invention, and the navigation route request may also be based on the recognition result determined by the position recognition method in other possible navigation scenes, and the embodiment of the present invention is not limited thereto.
While various embodiments of the present invention have been described above, various alternatives described in the various embodiments can be combined and cross-referenced without conflict to extend the variety of possible embodiments that can be considered disclosed and disclosed in connection with the embodiments of the present invention.
The following describes an apparatus for training a position recognition model according to an embodiment of the present invention, and the apparatus for training a position recognition model described below may be considered as a functional module that is required to be configured by a training server to implement the method for training a position recognition model according to an embodiment of the present invention. The contents of the device for training the position recognition model described below may be referred to in correspondence with the contents of the method described above.
In an alternative implementation, fig. 20 is a block diagram illustrating an apparatus for training a location recognition model according to an embodiment of the present invention, and referring to fig. 20, the apparatus may include:
a sample data acquisition module 100, configured to acquire vehicle position data of an elevated area and an area near the elevated area as sample data, where the sample data is marked as being located on the elevated or located under the elevated;
a feature determination module 110, configured to determine a feature of the sample data;
and a training execution module 120, configured to train a machine learning model by using the features of the sample data and the labels of the sample data as training data to obtain a position identification model applicable to an elevated scene.
Optionally, the characteristic determining module 110 is configured to determine the characteristic of the sample data, and includes:
and determining an elevated road section corresponding to the sample data, wherein the elevated road in the elevated area is segmented into more than two elevated road sections in advance.
Optionally, the feature determining module 110 is configured to determine that the elevated road segment corresponding to the sample data includes:
determining a road matched with the sample data;
if the matched road is an elevated road, determining the elevated road section corresponding to the sample data on the elevated road;
if the matched road is a non-elevated road, projecting the sample data onto the elevated road mapped by the non-elevated road to obtain at least one projection point of the sample data, determining the nearest projection point closest to the projection distance of the sample data from the at least one projection point, and determining the elevated road section where the nearest projection point is located as the elevated road section corresponding to the sample data.
Optionally, a section of elevated road segment has an elevated road segment starting point distance and an elevated road segment end point distance, where the elevated road segment starting point distance is a distance between a starting point position of the elevated road segment and a starting point position of the elevated road, and the elevated road segment end point distance is a distance between an end point position of the elevated road segment and a starting point position of the elevated road;
the feature determining module 110, configured to determine the elevated road segment where the closest projection point is located as the elevated road segment corresponding to the sample data, includes:
and determining the distance between the closest projection point and the starting point position of the elevated road where the closest projection point is located, and determining the elevated road segment whose starting point distance and end point distance enclose that distance as the elevated road segment corresponding to the sample data.
Optionally, the elevated road segment corresponding to the sample data is represented by a road segment identifier of the elevated road segment.
Optionally, fig. 21 is another block diagram of an apparatus for training a location recognition model according to an embodiment of the present invention, and with reference to fig. 20 and 21, the apparatus may further include:
an area determination module 130, configured to cut an elevated road and a general road around the elevated road into a plurality of road segments, where the road segments include an elevated road segment and a non-elevated road segment; aiming at any elevated road segment, forming a road segment group by the elevated road segment and non-elevated road segments related around the elevated road segment; determining a map basic area corresponding to the road segment group; and carrying out area merging processing based on map basic area iteration until the area utilization rate of the map basic area in the merged area is maximized, and taking the merged area as an elevated area and an elevated nearby area.
Optionally, the area determining module 130 is configured to, for any one of the elevated road segments, form a road segment group with the elevated road segment and associated non-elevated road segments around the elevated road segment, and includes:
and aiming at any elevated road segment, determining a non-elevated road segment which is matched with the advancing direction of the elevated road segment and has a distance within a preset distance as a non-elevated road segment associated around the elevated road segment, and forming a road segment group corresponding to the elevated road segment by the elevated road segment and the non-elevated road segment associated around the elevated road segment.
Optionally, the area determining module 130 is configured to determine a map basic area corresponding to the road segment group, and includes:
and determining the minimum circumscribed rectangle of the road segment group, and determining the map basic area corresponding to the road segment group based on that minimum circumscribed rectangle.
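As a sketch of this bounding step, assuming the minimum circumscribed rectangle is axis-aligned and that the map basic area is that rectangle plus an optional padding; whether an axis-aligned or rotated rectangle is intended is not specified here, so axis-aligned is assumed for illustration.

```python
from typing import Iterable, List, Tuple

Point = Tuple[float, float]

def min_bounding_rect(segments: Iterable[List[Point]], pad: float = 0.0):
    """Axis-aligned minimum circumscribed rectangle of all segment vertices,
    expanded by `pad` metres on each side to form a map basic area."""
    xs = [x for seg in segments for x, _ in seg]
    ys = [y for seg in segments for _, y in seg]
    return (min(xs) - pad, min(ys) - pad, max(xs) + pad, max(ys) + pad)
```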
Optionally, the feature determining module 110 is configured to determine the features of the sample data, and further includes:
determining the projection distance from the sample data to an elevated road in an elevated area;
determining satellite distribution characteristics corresponding to the sample data;
and determining the historical driving speed characteristics corresponding to the sample data.
Optionally, the feature determining module 110 is configured to determine the satellite distribution feature corresponding to the sample data, and includes:
acquiring global navigation satellite system information acquired at a historical driving time point corresponding to the sample data;
and acquiring a satellite signal-to-noise ratio and a satellite position corresponding to the sample data based on the global navigation satellite system information, and determining a satellite distribution characteristic corresponding to the sample data at least according to the satellite signal-to-noise ratio and the satellite position.
Optionally, the feature determining module 110 is configured to determine, according to at least the satellite signal-to-noise ratio and the satellite position, the satellite distribution features corresponding to the sample data, and includes:
projecting the satellite position to a preset plane with a plurality of grids to obtain the projection position of the satellite position on the preset plane;
and determining, in each grid of the preset plane, the satellite features corresponding to the sample data based at least on the satellite signal-to-noise ratio and the projection position, and collecting the satellite features determined in each grid to obtain the satellite distribution features corresponding to the sample data.
Optionally, the feature determining module 110 is configured to determine, based on at least the satellite signal-to-noise ratio and the projection position, satellite features corresponding to the sample data in each grid of the preset plane respectively, and includes:
and determining satellite characteristics aiming at any grid of the preset plane at least based on the satellite signal-to-noise ratio and the projection position, wherein the satellite characteristics are used for describing the signal intensity and distance distribution relation of the satellite in the grid of the preset plane.
Optionally, the feature determining module 110 is configured to determine, for any grid of the preset plane, a satellite feature based on at least the satellite signal-to-noise ratio and the projection position, where the satellite feature is used to describe a signal strength and a distance distribution relationship of a satellite in the grid of the preset plane, and includes:
for any grid of the preset plane, determining the distance between the projection position of a first satellite, whose signal-to-noise ratio is not the preset signal-to-noise ratio, and the center of the grid, determining the position feature of the first satellite in the grid at least according to the distance, and accumulating the position features of the first satellites in the grid to obtain the first satellite feature determined for the grid;
for any grid of the preset plane, determining the distance between the projection position on the preset plane of a second satellite, whose signal-to-noise ratio is the preset signal-to-noise ratio, and the center of the grid, determining the position feature of the second satellite in the grid at least according to the distance, and accumulating the position features of the second satellites in the grid to obtain the second satellite feature determined for the grid;
and for any grid of the preset plane, combining the position feature of a first satellite in the grid with the signal-to-noise ratio of that first satellite to obtain the combined feature of the first satellite in the grid, accumulating the combined features of the first satellites in the grid to obtain an accumulation result, and dividing the accumulation result by the combination of the first satellite feature determined for the grid and the maximum satellite signal-to-noise ratio to obtain the third satellite feature determined for the grid.
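A minimal sketch of these per-grid features follows. Several points are assumptions not fixed by the text: the preset signal-to-noise ratio is taken as 0 (a blocked satellite), the position feature is a Gaussian falloff of the distance to the grid center, "combining" a position feature with a signal-to-noise ratio means multiplying them, and the maximum signal-to-noise ratio is 50.

```python
import math
from typing import Dict, List, Tuple

def position_feature(proj: Tuple[float, float],
                     center: Tuple[float, float],
                     sigma: float = 1.0) -> float:
    """Gaussian falloff of the distance from a projected satellite to the grid center."""
    d2 = (proj[0] - center[0]) ** 2 + (proj[1] - center[1]) ** 2
    return math.exp(-d2 / (2.0 * sigma ** 2))

def grid_features(sats: List[Dict], center: Tuple[float, float],
                  preset_snr: float = 0.0, max_snr: float = 50.0):
    """sats: [{'proj': (x, y), 'snr': float}, ...] already assigned to this grid."""
    # first satellite feature: satellites whose SNR is NOT the preset value
    f1 = sum(position_feature(s["proj"], center)
             for s in sats if s["snr"] != preset_snr)
    # second satellite feature: satellites whose SNR IS the preset value
    f2 = sum(position_feature(s["proj"], center)
             for s in sats if s["snr"] == preset_snr)
    # third satellite feature: accumulated (position feature x SNR),
    # divided by (first satellite feature x maximum SNR)
    num = sum(position_feature(s["proj"], center) * s["snr"]
              for s in sats if s["snr"] != preset_snr)
    f3 = num / (f1 * max_snr) if f1 > 0 else 0.0
    return f1, f2, f3
```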
Optionally, the feature determining module 110 is configured to determine the historical driving speed feature corresponding to the sample data, and includes:
collecting track points of the vehicle, and dividing the track points into unit groups in units of a set distance, starting from the most recently collected track point, to form a plurality of unit groups; the distance between the starting track point and the ending track point in one unit group is within the set distance;
determining average running speeds corresponding to all the positions in the unit group based on the historical running speeds of the track points of all the positions in the unit group;
and determining the position of the sample data in the unit group, and determining the historical driving speed characteristic corresponding to the sample data according to the average driving speed corresponding to the position of the sample data in the unit group.
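As a sketch of this speed feature, assuming a set distance of 100 m (an illustrative value) and track points stored with their distance back from the most recently collected point; the record layout and names are assumptions.

```python
from typing import Dict, List

def unit_group_index(dist_from_latest_m: float, unit_m: float = 100.0) -> int:
    """Index of the unit group, counted back from the newest track point."""
    return int(dist_from_latest_m // unit_m)

def speed_feature(track: List[Dict], sample_dist_m: float, unit_m: float = 100.0) -> float:
    """track: [{'dist': metres back from the newest point, 'speed': m/s}, ...];
    returns the mean historical speed of the unit group containing the sample."""
    target = unit_group_index(sample_dist_m, unit_m)
    speeds = [p["speed"] for p in track
              if unit_group_index(p["dist"], unit_m) == target]
    return sum(speeds) / len(speeds) if speeds else 0.0
```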
Optionally, the training executing module 120 is configured to train the machine learning model by using the features of the sample data and the labels of the sample data as training data, so as to obtain the position identification model applicable to the overhead scene, and includes:
inputting the features of the sample data labeled as on the overhead into a machine learning model and acquiring the feature vectors of the on-overhead samples of the elevated road section output by the machine learning model, and inputting the features of the sample data labeled as under the overhead into the machine learning model and acquiring the feature vectors of the under-overhead samples of the elevated road section output by the machine learning model;
and defining a loss function according to the feature vectors of the on-overhead samples and the under-overhead samples of the elevated road section, and training the machine learning model according to the loss function to obtain a position identification model suitable for the overhead scene.
Optionally, in one aspect, the loss function at least includes: maximizing the distance between the on-overhead center feature vector and the under-overhead center feature vector of the elevated road section; the on-overhead center feature vector of the elevated road section is the mean of the feature vectors of the on-overhead samples of the elevated road section, and the under-overhead center feature vector of the elevated road section is the mean of the feature vectors of the under-overhead samples of the elevated road section.
In a further optional implementation, the loss function further includes: minimizing the distance between the feature vectors of the on-overhead samples of the elevated road section and the on-overhead center feature vector, maximizing the distance between the feature vectors of the on-overhead samples and the under-overhead center feature vector, minimizing the distance between the feature vectors of the under-overhead samples of the elevated road section and the under-overhead center feature vector, and maximizing the distance between the feature vectors of the under-overhead samples and the on-overhead center feature vector.
In another aspect, the loss function includes: minimizing the distances among the feature vectors of the on-overhead samples of the elevated road section, minimizing the distances among the feature vectors of the under-overhead samples of the elevated road section, and maximizing the distances between the feature vectors of the on-overhead samples and the feature vectors of the under-overhead samples of the elevated road section.
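A minimal PyTorch sketch of the first two loss variants above: each class's sample vectors are pulled toward their own center while the two centers, and samples relative to the opposite center, are pushed apart. The margin, the equal weighting of the terms, and the squared-distance form are illustrative choices, not values from the text.

```python
import torch

def overhead_loss(on_vecs: torch.Tensor,     # [N, D] on-overhead sample feature vectors
                  under_vecs: torch.Tensor,  # [M, D] under-overhead sample feature vectors
                  margin: float = 1.0) -> torch.Tensor:
    c_on = on_vecs.mean(dim=0)       # on-overhead center feature vector
    c_under = under_vecs.mean(dim=0) # under-overhead center feature vector
    # pull each sample toward its own class center
    pull = ((on_vecs - c_on) ** 2).sum(dim=1).mean() + \
           ((under_vecs - c_under) ** 2).sum(dim=1).mean()
    # push the two centers apart (hinge with margin)
    push_centers = torch.relu(margin - torch.norm(c_on - c_under))
    # push each sample away from the opposite class center
    push_cross = torch.relu(margin - torch.norm(on_vecs - c_under, dim=1)).mean() + \
                 torch.relu(margin - torch.norm(under_vecs - c_on, dim=1)).mean()
    return pull + push_centers + push_cross
```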
The device for training a position recognition model provided above can train a position recognition model that, based on the features of a vehicle position, accurately recognizes whether the vehicle position is on the overhead or under the overhead.
The embodiment of the invention also provides a training server which can load the device for training the position recognition model in the form of a program so as to realize the method for training the position recognition model provided by the embodiment of the invention. In an alternative implementation, fig. 22 shows an alternative block diagram of a training server provided in an embodiment of the present invention, and as shown in fig. 22, the training server may include: at least one processor 1, at least one communication interface 2, at least one memory 3 and at least one communication bus 4;
in the embodiment of the present invention, the number of the processor 1, the communication interface 2, the memory 3, and the communication bus 4 is at least one, and the processor 1, the communication interface 2, and the memory 3 complete mutual communication through the communication bus 4;
optionally, the communication interface 2 may be an interface of a communication module for performing network communication;
alternatively, the processor 1 may be a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), an NPU (embedded neural-network processor), an FPGA (Field-Programmable Gate Array), a TPU (Tensor Processing Unit), an AI chip, an ASIC (Application-Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention, etc.;
the memory 3 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk memory;
the memory 3 stores one or more computer-executable instructions, and the processor 1 calls the one or more computer-executable instructions to execute the method for training the position recognition model provided by the embodiment of the invention.
Embodiments of the present invention also provide a storage medium, where the storage medium may store one or more computer-executable instructions for executing the method for training a position recognition model according to the embodiments of the present invention.
In the following, the position identification apparatus provided in the embodiment of the present invention is introduced, and the position identification apparatus described below may be considered as a functional module that is required by the vehicle-mounted terminal to implement the position identification method provided in the embodiment of the present invention. The contents of the position recognition device described below may be referred to in correspondence with the contents of the method described above.
In an alternative implementation, fig. 23 shows a block diagram of a location identification apparatus provided in an embodiment of the present invention, and as shown in fig. 23, the apparatus may include:
the vehicle position feature determination module 200 is configured to obtain a vehicle position and determine a feature corresponding to the vehicle position;
a feature vector determination module 210, configured to input a feature corresponding to the vehicle position into a position identification model in model data, so as to obtain a feature vector of the vehicle position output by the position identification model;
a recognition result obtaining module 220, configured to obtain, according to the feature vector of the vehicle position, a recognition result of whether the vehicle position is located on an overhead or under an overhead.
Optionally, the model data further includes: the on-overhead center feature vector and the under-overhead center feature vector of the elevated road section; the identification result obtaining module 220 is configured to obtain an identification result of whether the vehicle position is located on the overhead or under the overhead according to the feature vector of the vehicle position, and includes:
determining the above-overhead distance between the feature vector of the vehicle position and the on-overhead center feature vector of the elevated road section, and determining the under-overhead distance between the feature vector of the vehicle position and the under-overhead center feature vector of the elevated road section;
and obtaining the identification result of whether the vehicle position is located on the overhead or under the overhead according to the above-overhead distance and the under-overhead distance.
Optionally, the identification result obtaining module 220 is configured to obtain the identification result of whether the vehicle position is located on the overhead or under the overhead according to the above-overhead distance and the under-overhead distance, and includes:
if the above-overhead distance is lower than the under-overhead distance, obtaining the identification result that the vehicle position is on the overhead; and if the under-overhead distance is lower than the above-overhead distance, obtaining the identification result that the vehicle position is under the overhead.
Optionally, fig. 24 shows another block diagram of the location identification apparatus according to the embodiment of the present invention, and in combination with fig. 23 and fig. 24, the apparatus may further include:
a confidence determination module 230, configured to determine a confidence of the vehicle position according to the above-overhead distance and the under-overhead distance.
In a further specific implementation, the identification result obtaining module 220 being configured to obtain the identification result that the vehicle position is on the overhead if the above-overhead distance is lower than the under-overhead distance includes:
if the confidence of the vehicle position is greater than or equal to a preset on-overhead confidence threshold of the elevated road section, obtaining the identification result that the vehicle position is on the overhead;
and the identification result obtaining module 220 being configured to obtain the identification result that the vehicle position is under the overhead if the under-overhead distance is lower than the above-overhead distance includes:
if the confidence of the vehicle position is less than a preset under-overhead confidence threshold of the elevated road section, obtaining the identification result that the vehicle position is under the overhead.
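A minimal sketch of this inference-side decision: the position's feature vector is compared against the two stored center vectors, a simple confidence is derived from the two distances, and the result is gated by per-section thresholds. The confidence formula and the threshold values are assumptions for illustration, not specified by the text.

```python
import numpy as np

def recognize(feat: np.ndarray, c_on: np.ndarray, c_under: np.ndarray,
              on_thresh: float = 0.6, under_thresh: float = 0.4):
    d_on = np.linalg.norm(feat - c_on)        # above-overhead distance
    d_under = np.linalg.norm(feat - c_under)  # under-overhead distance
    # approaches 1 when the vector is much closer to the on-overhead center
    conf = d_under / (d_on + d_under + 1e-9)
    if d_on < d_under and conf >= on_thresh:
        return "on_overhead", conf
    if d_under < d_on and conf < under_thresh:
        return "under_overhead", conf
    return "uncertain", conf  # thresholds not met: leave the result undecided
```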
Alternatively, fig. 25 shows another block diagram of the location identification apparatus provided in the embodiment of the present invention, and in combination with fig. 23 and fig. 25, the apparatus may include:
and a navigation route obtaining module 240, configured to obtain a navigation route at least according to the identification result.
In an optional implementation, the navigation route obtaining module 240 is configured to obtain a navigation route according to at least the identification result, and includes:
sending a navigation request to a navigation server according to the identification result and the vehicle position;
and acquiring a navigation route sent by the navigation server, wherein the navigation route comprises a navigation starting point road, and the navigation starting point road is determined according to the identification result and the vehicle position.
In another alternative implementation, the navigation route obtaining module 240 is configured to obtain the navigation route at least according to the identification result, and includes:
if, for a plurality of consecutive vehicle positions, the recognition results of being on or under the overhead differ from what the current navigation route indicates those positions should be, determining that yaw has occurred;
requesting a navigation server to re-plan a navigation route according to the current vehicle position and the identification result of whether the current vehicle position is on an overhead or under the overhead;
and acquiring the re-planned navigation route sent by the navigation server.
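As a sketch of the yaw check: if several consecutive recognition results all disagree with what the current route expects for those positions, a re-plan is requested. The window size and the label encoding are illustrative assumptions.

```python
from typing import List

def yaw_detected(recent_results: List[str],
                 route_expectations: List[str],
                 window: int = 5) -> bool:
    """Both arguments are lists of 'on_overhead' / 'under_overhead' labels,
    newest last; yaw is flagged when the last `window` entries all differ."""
    if len(recent_results) < window or len(route_expectations) < window:
        return False
    return all(r != e for r, e in
               zip(recent_results[-window:], route_expectations[-window:]))
```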
The position identification method provided by the embodiment of the invention can accurately identify whether the position of the vehicle is positioned on the overhead or under the overhead, thereby providing possibility for providing accurate navigation service for the vehicle in an overhead scene.
The embodiment of the invention also provides a vehicle-mounted terminal, which can load the position identification device in the form of a program so as to implement the position identification method provided by the embodiment of the invention. An alternative structure of the vehicle-mounted terminal may refer to the block diagram shown in fig. 22: it includes at least one memory and at least one processor, where the memory stores one or more computer-executable instructions, and the processor calls the one or more computer-executable instructions to execute the position identification method provided in the embodiment of the present invention.
The embodiment of the invention also provides a storage medium, which can store one or more computer-executable instructions for executing the position identification method provided by the embodiment of the invention.
Although the embodiments of the present invention have been disclosed, the present invention is not limited thereto. Various changes and modifications may be effected therein by one skilled in the art without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (30)

1. A method of training a position recognition model, comprising:
acquiring vehicle position data of an elevated area and an area near the elevated as sample data, wherein the sample data is labeled as being on the overhead or under the overhead;
determining characteristics of the sample data;
and training a machine learning model by taking the characteristics of the sample data and the marks of the sample data as training data to obtain a position identification model suitable for the overhead scene.
2. The method of training a position recognition model according to claim 1, wherein said determining features of said sample data comprises:
and determining an elevated road section corresponding to the sample data, wherein the elevated road in the elevated area is segmented into more than two elevated road sections in advance.
3. The method of training a position recognition model according to claim 2, wherein said determining an elevated road segment to which the sample data corresponds comprises:
determining a road matched with the sample data;
if the matched road is an elevated road, determining the elevated road section corresponding to the sample data on the elevated road;
if the matched road is a non-elevated road, projecting the sample data onto the elevated road mapped by the non-elevated road to obtain at least one projection point of the sample data, determining, from the at least one projection point, the nearest projection point whose projection distance to the sample data is smallest, and determining the elevated road section where the nearest projection point is located as the elevated road section corresponding to the sample data.
4. The method of training a position recognition model according to claim 3, wherein a segment of an elevated road segment has an elevated road segment start point distance and an elevated road segment end point distance, the elevated road segment start point distance being a distance between a start point position of the elevated road segment and a start point position of the elevated road, the elevated road segment end point distance being a distance between an end point position of the elevated road segment and a start point position of the elevated road;
the determining the elevated road segment where the nearest projection point is located as the elevated road segment corresponding to the sample data includes:
and determining the distance between the nearest projection point and the starting point position of the elevated road where the nearest projection point is located, and determining the elevated road section whose starting point distance and end point distance enclose that distance as the elevated road section corresponding to the sample data.
5. The method of training a position recognition model according to claim 2, wherein the elevated road segment to which the sample data corresponds is represented by a road segment identifier of the elevated road segment.
6. The method of training a position recognition model according to any of claims 2-5, wherein said determining features of said sample data further comprises:
determining the projection distance from the sample data to an elevated road in an elevated area;
determining satellite distribution characteristics corresponding to the sample data;
and determining the historical driving speed characteristics corresponding to the sample data.
7. The method of training a position recognition model according to claim 6, wherein said determining satellite distribution characteristics to which said sample data corresponds comprises:
acquiring global navigation satellite system information acquired at a historical driving time point corresponding to the sample data;
and acquiring a satellite signal-to-noise ratio and a satellite position corresponding to the sample data based on the global navigation satellite system information, and determining a satellite distribution characteristic corresponding to the sample data at least according to the satellite signal-to-noise ratio and the satellite position.
8. The method for training the position recognition model according to claim 7, wherein the determining the satellite distribution characteristics corresponding to the sample data according to at least the satellite signal-to-noise ratio and the satellite position comprises:
projecting the satellite position to a preset plane with a plurality of grids to obtain the projection position of the satellite position on the preset plane;
and determining, in each grid of the preset plane, the satellite features corresponding to the sample data based at least on the satellite signal-to-noise ratio and the projection position, and collecting the satellite features determined in each grid to obtain the satellite distribution features corresponding to the sample data.
9. The method for training the position recognition model according to claim 8, wherein the determining, in each grid of the preset plane, the satellite features corresponding to the sample data based on at least the satellite signal-to-noise ratio and the projected position respectively comprises:
and determining satellite characteristics aiming at any grid of the preset plane at least based on the satellite signal-to-noise ratio and the projection position, wherein the satellite characteristics are used for describing the signal intensity and distance distribution relation of the satellite in the grid of the preset plane.
10. The method for training the position recognition model according to claim 9, wherein the determining, for any grid of the preset plane, a satellite feature based on at least the satellite signal-to-noise ratio and the projected position, the satellite feature being used for describing a signal strength and distance distribution relation of a satellite in the grid of the preset plane comprises:
for any grid of the preset plane, determining the distance between the projection position of a first satellite, whose signal-to-noise ratio is not the preset signal-to-noise ratio, and the center of the grid, determining the position feature of the first satellite in the grid at least according to the distance, and accumulating the position features of the first satellites in the grid to obtain the first satellite feature determined for the grid;
for any grid of the preset plane, determining the distance between the projection position on the preset plane of a second satellite, whose signal-to-noise ratio is the preset signal-to-noise ratio, and the center of the grid, determining the position feature of the second satellite in the grid at least according to the distance, and accumulating the position features of the second satellites in the grid to obtain the second satellite feature determined for the grid;
and for any grid of the preset plane, combining the position feature of a first satellite in the grid with the signal-to-noise ratio of that first satellite to obtain the combined feature of the first satellite in the grid, accumulating the combined features of the first satellites in the grid to obtain an accumulation result, and dividing the accumulation result by the combination of the first satellite feature determined for the grid and the maximum satellite signal-to-noise ratio to obtain the third satellite feature determined for the grid.
11. The method of training a position recognition model according to claim 6, wherein said determining historical travel speed characteristics to which the sample data corresponds comprises:
collecting track points of the vehicle, and dividing the track points into unit groups in units of a set distance, starting from the most recently collected track point, to form a plurality of unit groups; the distance between the starting track point and the ending track point in one unit group is within the set distance;
determining average running speeds corresponding to all the positions in the unit group based on the historical running speeds of the track points of all the positions in the unit group;
and determining the position of the sample data in the unit group, and determining the historical driving speed characteristic corresponding to the sample data according to the average driving speed corresponding to the position of the sample data in the unit group.
12. The method for training the position recognition model according to claim 1, wherein the training the machine learning model by using the features of the sample data and the marks of the sample data as training data to obtain the position recognition model suitable for the overhead scene comprises:
inputting the features of the sample data labeled as on the overhead into a machine learning model and acquiring the feature vectors of the on-overhead samples of the elevated road section output by the machine learning model, and inputting the features of the sample data labeled as under the overhead into the machine learning model and acquiring the feature vectors of the under-overhead samples of the elevated road section output by the machine learning model;
and defining a loss function according to the feature vectors of the on-overhead samples and the under-overhead samples of the elevated road section, and training the machine learning model according to the loss function to obtain a position identification model suitable for the overhead scene.
13. The method of training a position recognition model of claim 12, wherein the loss function comprises at least: maximizing the distance between the on-overhead center feature vector and the under-overhead center feature vector of the elevated road section; the on-overhead center feature vector of the elevated road section is the mean of the feature vectors of the on-overhead samples of the elevated road section, and the under-overhead center feature vector of the elevated road section is the mean of the feature vectors of the under-overhead samples of the elevated road section.
14. The method of training a position recognition model of claim 13, wherein the loss function further comprises: minimizing the distance between the feature vectors of the on-overhead samples of the elevated road section and the on-overhead center feature vector, maximizing the distance between the feature vectors of the on-overhead samples and the under-overhead center feature vector, minimizing the distance between the feature vectors of the under-overhead samples of the elevated road section and the under-overhead center feature vector, and maximizing the distance between the feature vectors of the under-overhead samples and the on-overhead center feature vector.
15. The method of training a position recognition model of claim 12, wherein the loss function comprises: minimizing the distances among the feature vectors of the on-overhead samples of the elevated road section, minimizing the distances among the feature vectors of the under-overhead samples of the elevated road section, and maximizing the distances between the feature vectors of the on-overhead samples and the feature vectors of the under-overhead samples of the elevated road section.
16. The method of training a position recognition model of claim 1, further comprising:
cutting an elevated road and a common road around the elevated road into a plurality of road segments, wherein the road segments comprise elevated road segments and non-elevated road segments;
aiming at any elevated road segment, forming a road segment group by the elevated road segment and non-elevated road segments related around the elevated road segment;
determining a map basic area corresponding to the road segment group;
and iteratively carrying out area merging processing based on the map basic areas until the area utilization rate of the map basic areas within the merged area is maximized, and taking the merged areas as the elevated area and the area near the elevated.
17. The method of training a position recognition model according to claim 16, wherein said forming a road segment group with an elevated road segment and associated non-elevated road segments around the elevated road segment for any one elevated road segment comprises:
and aiming at any elevated road segment, determining a non-elevated road segment which is matched with the advancing direction of the elevated road segment and has a distance within a preset distance as a non-elevated road segment associated around the elevated road segment, and forming a road segment group corresponding to the elevated road segment by the elevated road segment and the non-elevated road segment associated around the elevated road segment.
18. The method of training a position recognition model according to claim 16 or 17, wherein the determining a map base area corresponding to a set of road segments comprises:
and determining the minimum circumscribed rectangle of the road segment group, and determining the map basic area corresponding to the road segment group based on that minimum circumscribed rectangle.
19. A position recognition method, wherein whether a vehicle position is located on an overhead or under an overhead is recognized based on a position recognition model trained by the method of any one of claims 1 to 18, the position recognition method comprising:
acquiring a vehicle position, and determining a characteristic corresponding to the vehicle position;
inputting the characteristics corresponding to the vehicle position into a position identification model in model data to obtain a characteristic vector of the vehicle position output by the position identification model;
and obtaining the identification result of whether the vehicle position is positioned on the overhead or under the overhead according to the characteristic vector of the vehicle position.
20. The location identification method of claim 19, wherein the model data further comprises: the on-overhead center feature vector and the under-overhead center feature vector of the elevated road section; the obtaining of the recognition result of whether the vehicle position is located on the overhead or under the overhead according to the feature vector of the vehicle position comprises:
determining the above-overhead distance between the feature vector of the vehicle position and the on-overhead center feature vector of the elevated road section, and determining the under-overhead distance between the feature vector of the vehicle position and the under-overhead center feature vector of the elevated road section;
and obtaining the recognition result of whether the vehicle position is located on the overhead or under the overhead according to the above-overhead distance and the under-overhead distance.
21. The position recognition method according to claim 20, wherein the obtaining of the recognition result of whether the vehicle position is located on the overhead or under the overhead according to the above-overhead distance and the under-overhead distance comprises:
if the above-overhead distance is lower than the under-overhead distance, obtaining the recognition result that the vehicle position is on the overhead; and if the under-overhead distance is lower than the above-overhead distance, obtaining the recognition result that the vehicle position is under the overhead.
22. The position recognition method according to claim 21, wherein the obtaining of the recognition result that the vehicle position is on the overhead if the above-overhead distance is lower than the under-overhead distance comprises:
determining the confidence of the vehicle position according to the above-overhead distance and the under-overhead distance;
if the confidence of the vehicle position is greater than or equal to a preset on-overhead confidence threshold of the elevated road section, obtaining the recognition result that the vehicle position is on the overhead;
the obtaining of the recognition result that the vehicle position is under the overhead if the under-overhead distance is lower than the above-overhead distance comprises:
and if the confidence of the vehicle position is less than a preset under-overhead confidence threshold of the elevated road section, obtaining the recognition result that the vehicle position is under the overhead.
23. The location identification method according to any one of claims 19-22, further comprising:
and acquiring a navigation route at least according to the identification result.
24. The location identification method of claim 23, wherein the obtaining of the navigation route according to at least the identification result comprises:
sending a navigation request to a navigation server according to the identification result and the vehicle position;
and acquiring a navigation route sent by the navigation server, wherein the navigation route comprises a navigation starting point road, and the navigation starting point road is determined according to the identification result and the vehicle position.
25. The location identification method of claim 23, wherein the obtaining of the navigation route according to at least the identification result comprises:
if, for a plurality of consecutive vehicle positions, the recognition results of being on or under the overhead differ from what the current navigation route indicates those positions should be, determining that yaw has occurred;
requesting a navigation server to re-plan a navigation route according to the current vehicle position and the identification result of whether the current vehicle position is on an overhead or under the overhead;
and acquiring the re-planned navigation route sent by the navigation server.
26. An apparatus for training a position recognition model, comprising:
a sample data acquisition module, configured to acquire vehicle position data of an elevated area and an area near the elevated as sample data, the sample data being labeled as on the overhead or under the overhead;
a characteristic determination module for determining characteristics of the sample data;
and the training execution module is used for training the machine learning model by taking the characteristics of the sample data and the marks of the sample data as training data so as to obtain the position recognition model suitable for the overhead scene.
27. A training server, comprising: at least one memory storing one or more computer-executable instructions and at least one processor invoking the one or more computer-executable instructions to perform the method of training the position recognition model of any of claims 1-18.
28. A position recognition apparatus, comprising:
the vehicle position characteristic determining module is used for acquiring a vehicle position and determining a characteristic corresponding to the vehicle position;
the characteristic vector determining module is used for inputting the characteristics corresponding to the vehicle position into a position identification model in model data to obtain the characteristic vector of the vehicle position output by the position identification model;
and the identification result obtaining module is used for obtaining the identification result of whether the vehicle position is located on the overhead or under the overhead according to the feature vector of the vehicle position.
29. An in-vehicle terminal, comprising: at least one memory storing one or more computer-executable instructions and at least one processor invoking the one or more computer-executable instructions to perform the location identification method of any of claims 19-25.
30. A storage medium, wherein the storage medium stores one or more computer-executable instructions for performing the method of training a position recognition model according to any one of claims 1-18, or performing the position recognition method according to any one of claims 19-25.
CN202010884127.XA 2020-08-28 2020-08-28 Method for training position recognition model, position recognition method and related equipment Pending CN114199262A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010884127.XA CN114199262A (en) 2020-08-28 2020-08-28 Method for training position recognition model, position recognition method and related equipment

Publications (1)

Publication Number Publication Date
CN114199262A (en) 2022-03-18

Family

ID=80644155

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010884127.XA Pending CN114199262A (en) 2020-08-28 2020-08-28 Method for training position recognition model, position recognition method and related equipment

Country Status (1)

Country Link
CN (1) CN114199262A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116068604A (en) * 2023-02-13 2023-05-05 腾讯科技(深圳)有限公司 Fusion positioning method, fusion positioning device, computer equipment, storage medium and program product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02137100A (en) * 1988-11-18 1990-05-25 Nissan Motor Co Ltd Elevated express-highway travelling recognition device
DE4103378A1 (en) * 1990-02-16 1991-08-22 Volkswagen Ag Local traffic system - utilises suspension of road vehicles with catches on roofs of vehicles and provision for taking traction current from batteries under seats
JP2011053024A (en) * 2009-08-31 2011-03-17 Pioneer Electronic Corp Elevated-road identifying device, method of identifying elevated road, and elevated-road identification program
CN108802769A (en) * 2018-05-30 2018-11-13 千寻位置网络有限公司 Detection method and device of the GNSS terminal on overhead or under overhead
CN109101980A (en) * 2018-07-18 2018-12-28 上海理工大学 A kind of automobile weightlessness Detection & Controling method based on machine learning
WO2019000653A1 (en) * 2017-06-30 2019-01-03 清华大学深圳研究生院 Image target identification method and apparatus
US20200001874A1 (en) * 2018-12-29 2020-01-02 Baidu Online Network Technology (Beijing) Co., Ltd. Autonomous driving method and apparatus
CN110726414A (en) * 2019-10-25 2020-01-24 百度在线网络技术(北京)有限公司 Method and apparatus for outputting information
CN111221011A (en) * 2018-11-26 2020-06-02 千寻位置网络有限公司 GNSS positioning method and device based on machine learning
CN111310675A (en) * 2020-02-20 2020-06-19 上海赛可出行科技服务有限公司 Overhead identification auxiliary positioning method based on convolutional neural network

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination