CN115585820A - Yaw guidance method and apparatus, electronic device, and computer program product


Info

Publication number: CN115585820A
Application number: CN202211165806.7A
Authority: CN (China)
Prior art keywords: navigation, yaw, target intersection, different, sample
Legal status: Pending
Original language: Chinese (zh)
Inventors: 杨夕凯, 叶熙, 贾晓婷, 张俊, 张翔, 马同辉
Applicant and assignee: Alibaba China Co Ltd
Application filed by Alibaba China Co Ltd; priority to CN202211165806.7A; publication of CN115585820A.

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/26 — Navigation instruments specially adapted for navigation in a road network
    • G01C21/34 — Route searching; Route guidance
    • G01C21/3407 — Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 — Dynamic re-routing, e.g. recalculating the route when the user deviates from the calculated route or after detecting real-time traffic data or accidents
    • G01C21/20 — Instruments for performing navigational calculations

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

Embodiments of the disclosure provide a yaw guidance method and apparatus, an electronic device, and a computer program product. The method includes: acquiring driving data of a navigated object in front of a target intersection; extracting, based on the driving data, driving features and a navigation direction of the navigated object in front of the target intersection; predicting yaw behavior of the navigated object based on the navigation direction, the driving features, and a yaw prediction model corresponding to the target intersection; and, when yaw behavior of the navigated object is predicted, guiding the navigated object to travel in the correct direction. This technical scheme can guide the navigated object into the correct driving direction in time, reduce the probability that the navigated object yaws, and avoid detours and similar situations.

Description

Yaw guidance method and apparatus, electronic device, and computer program product
Technical Field
The present disclosure relates to the field of navigation technologies, and in particular to a yaw guidance method and apparatus, an electronic device, and a computer program product.
Background
In travel scenarios, map voice-guidance services play an important role in navigation applications. During navigation, users care most about how to drive the road ahead, for example where to turn or change lanes after how many meters. However, drivers are inevitably distracted by conversation or other events while driving, miss the intersection, and end up detouring.
Therefore, a solution is needed that prompts the user in time when a yaw tendency appears and guides the user to drive toward the correct navigation direction, reducing the probability that the user yaws.
Disclosure of Invention
The embodiments of the disclosure provide a yaw guidance method and apparatus, an electronic device, and a computer program product.
In a first aspect, an embodiment of the present disclosure provides a yaw guidance method, including:
acquiring driving data of a navigated object in front of a target intersection;
extracting, based on the driving data, driving features and a navigation direction of the navigated object in front of the target intersection;
predicting yaw behavior of the navigated object based on the navigation direction, the driving features, and a yaw prediction model corresponding to the target intersection;
when yaw behavior of the navigated object is predicted, guiding the navigated object to travel in the correct direction.
Further, the driving data include current navigation data of the navigated object in front of the target intersection and track data already generated by the navigated object under navigation of the current navigation data; extracting the driving features and the navigation direction of the navigated object in front of the target intersection based on the driving data includes:
determining the navigation direction of the navigated object in front of the target intersection based on the current navigation data;
determining a correspondence between the speed of the navigated object and its distance to the target intersection based on the generated track data.
Further, the yaw prediction model includes a speed-distance curve model obtained in advance by fitting; predicting yaw behavior of the navigated object based on the navigation direction, the driving features, and the yaw prediction model corresponding to the target intersection includes:
determining yaw behavior of the navigated object based on the degree of match between the speed-distance correspondence and the speed-distance curve model.
Further, when yaw behavior of the navigated object is predicted, guiding the navigated object to travel in the correct direction includes:
guiding the navigated object to travel in the correct navigation direction using different guidance modes according to the probability that the navigated object yaws; the different guidance modes include one or more of different sound effects, different animations, and different voice broadcast contents.
Further, acquiring the driving data of the navigated object in front of the target intersection comprises:
when the navigated object meets a yaw prediction condition, acquiring prediction-related positions in front of the target intersection, the prediction-related positions being determined in the yaw prediction model;
acquiring the driving data of the navigated object at preset time intervals based on the prediction-related positions.
In a second aspect, an embodiment of the present invention provides a yaw prediction model training method, where the method includes:
acquiring driving data of sample navigation objects in front of a target intersection;
extracting, based on the driving data, driving features of sample navigation objects traveling in different navigation directions in front of the target intersection;
training yaw prediction models for the different navigation directions based on the driving features in those directions, so that the yaw prediction models can predict yaw behavior of a navigated object in front of the target intersection.
Further, extracting driving features of sample navigation objects traveling in different navigation directions in front of the target intersection based on the driving data includes:
determining the navigation direction of each sample navigation object in front of the target intersection based on the driving data;
determining distribution information of the driving features of the sample navigation objects;
determining the driving features of the sample navigation objects in the same navigation direction based on the distribution information.
Further, the driving data include actual track data generated by the sample navigation objects in front of the target intersection under navigation of historical navigation data; determining the distribution information of the driving features of the sample navigation objects includes:
determining the speed and lateral offset of each sample navigation object when traveling at different preset distances in front of the target intersection based on the actual track data;
determining distribution information of speed and lateral offset at the different preset distances in the same navigation direction based on the speeds and lateral offsets of multiple sample navigation objects in that direction.
Further, the driving features include the speeds and lateral offsets of the sample navigation objects at different preset distances while traveling in front of the target intersection; training yaw prediction models for different navigation directions based on the driving features in those directions, so that the models can predict yaw behavior of a navigated object in front of the target intersection, includes:
fitting the speeds of the sample navigation objects at different preset distances in front of the target intersection, per navigation direction, to obtain speed-distance curve models for the different navigation directions;
fitting the lateral offsets of the sample navigation objects at different preset distances in front of the target intersection, per navigation direction, to obtain displacement curve models of lateral offset versus distance for the different navigation directions;
determining prediction-related positions in front of the target intersection for the different navigation directions based on the speed curve models and the displacement curve models; when yaw behavior of a navigated object is to be predicted, driving data of the navigated object are collected based on the prediction-related positions and the yaw behavior is predicted.
Further, determining prediction-related positions in front of the target intersection for different navigation directions based on the speed curve models and the displacement curve models includes:
determining speed similarity curves between the speed curve models of different navigation directions, and determining displacement similarity curves between the displacement curve models of different navigation directions;
determining a first predicted position based on an inflection point on the speed similarity curve, and a second predicted position based on an inflection point on the displacement similarity curve;
determining the prediction-related positions based on the first predicted position and the second predicted position.
Further, the driving characteristics further include historical yaw information of the sample navigation object; the method further comprises the following steps:
determining a speed similarity curve between the speed curve models in different navigation directions;
determining whether the target intersection is suitable for a prediction of yaw behavior based on the speed similarity curve, the displacement curve model, and the historical yaw information.
In a third aspect, an embodiment of the present disclosure provides a yaw guidance apparatus, including:
a first acquisition module configured to acquire driving data of a navigated object in front of a target intersection;
a first extraction module configured to extract driving features and a navigation direction of the navigated object in front of the target intersection based on the driving data;
a prediction module configured to predict yaw behavior of the navigated object based on the navigation direction, the driving features, and a yaw prediction model corresponding to the target intersection;
a guidance module configured to guide the navigated object to travel in the correct direction when yaw behavior of the navigated object is predicted.
In a fourth aspect, an embodiment of the present disclosure provides a yaw prediction model training apparatus, including:
a second acquisition module configured to acquire driving data of sample navigation objects in front of a target intersection;
a second extraction module configured to extract driving features of sample navigation objects traveling in different navigation directions in front of the target intersection based on the driving data;
a training module configured to train yaw prediction models for different navigation directions based on the driving features in those directions, so that the models can predict yaw behavior of a navigated object in front of the target intersection.
The above functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
In one possible design, the apparatus includes a memory configured to store one or more computer instructions that support the corresponding method, and a processor configured to run the computer instructions stored in the memory. The apparatus may also include a communication interface for the apparatus to communicate with other devices or a communication network.
In a fifth aspect, an embodiment of the present disclosure provides an electronic device, including a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any one of the above aspects.
In a sixth aspect, the disclosed embodiments provide a computer-readable storage medium for storing computer instructions for use by any of the above apparatus, the computer instructions when executed by a processor, for implementing the method of any of the above aspects.
In a seventh aspect, the disclosed embodiments provide a computer program product comprising computer instructions for implementing the method of any one of the above aspects when the computer instructions are executed by a processor.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the embodiment of the disclosure aims at a navigated object driven by a target intersection, acquires driving data of the navigated object in front of the target intersection, extracts driving characteristics and a navigation direction of the navigated object in front of the target intersection from the driving data, predicts yaw behavior of the navigated object based on the navigation direction, the driving characteristics and a yaw prediction model corresponding to the pre-trained target intersection, and guides the navigated object to drive into a correct driving direction when the yaw behavior of the navigated object is predicted. By the mode, when the navigated object is deflected due to self reasons or external influence in the navigation process, the navigated object can be guided to drive into the correct driving direction in time, the yaw probability of the navigated object is reduced, and the situations of detour and the like of the navigated object are avoided.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
Other features, objects, and advantages of the present disclosure will become more apparent from the following detailed description of non-limiting embodiments when taken in conjunction with the accompanying drawings. In the drawings:
FIG. 1 shows a flow chart of a yaw guidance method according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a yaw prediction model training method according to an embodiment of the present disclosure;
FIG. 3 illustrates a schematic view of inflection point selection on a transverse curve according to an embodiment of the present disclosure;
FIG. 4 illustrates a schematic diagram of lateral offset intervals in different navigation directions according to an embodiment of the present disclosure;
FIG. 5A illustrates a graph of a median curve of lateral shift intervals in different navigation directions according to an embodiment of the present disclosure;
FIG. 5B shows a displacement similarity curve diagram according to an embodiment of the present disclosure;
FIG. 6 shows a block diagram of a yaw guide apparatus according to an embodiment of the present disclosure;
FIG. 7 is a block diagram of a yaw prediction model training apparatus according to an embodiment of the present disclosure;
FIG. 8 is a schematic block diagram of an electronic device suitable for use in implementing a yaw guidance method and/or a yaw prediction model training method according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art can easily implement them. Also, for the sake of clarity, parts not relevant to the description of the exemplary embodiments are omitted in the drawings.
In the present disclosure, it is to be understood that terms such as "including" or "having," etc., are intended to indicate the presence of the disclosed features, numbers, steps, actions, components, parts, or combinations thereof, and do not preclude the possibility that one or more other features, numbers, steps, actions, components, parts, or combinations thereof are present or added.
It should be further noted that the embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
The details of the embodiments of the present disclosure are described in detail below with reference to specific embodiments.
Fig. 1 shows a flow chart of a yaw guidance method according to an embodiment of the present disclosure. As shown in Fig. 1, the yaw guidance method includes the following steps:
in step S101, driving data of a navigated object in front of a target intersection are acquired;
in step S102, driving features and a navigation direction of the navigated object in front of the target intersection are extracted based on the driving data;
in step S103, yaw behavior of the navigated object is predicted based on the navigation direction, the driving features, and a yaw prediction model corresponding to the target intersection;
in step S104, when yaw behavior of the navigated object is predicted, the navigated object is guided to travel in the correct direction.
In this embodiment, the yaw guidance method may be executed on a navigation terminal or a navigation server. The navigated object can be any object currently traveling in front of the target intersection, such as a user, an intelligent driving vehicle, or an intelligent robot. The target intersection can be any intersection for which a yaw prediction model has been trained in advance. Intersections can be defined beforehand in the road-network data and are usually junctions with branches.
The embodiment can perform yaw prediction for a navigated object traveling in front of a target intersection. In the yaw prediction, the driving data of the navigated object in front of the target intersection can be acquired. The driving data may include, but is not limited to, a real driving trajectory of the navigated object (acquired upon obtaining authorization of the navigated object), navigation data output by the navigation service to the navigated object, and the like.
After the driving data of the navigated object are acquired, the driving features and navigation direction of the navigated object may be extracted from them. The driving features may include, but are not limited to, the speed of the navigated object while driving in front of the target intersection, its distance to the target intersection, its lateral offset on the road in front of the target intersection, and the like. Taking the driving direction of the road as the longitudinal direction, the lateral offset can be understood as the offset of the navigated object across the road, for example relative to a road edge or the centerline. The navigation direction is the direction in which the navigated object should travel before the target intersection, that is, its correct driving direction there; it may include, but is not limited to, exiting right, exiting left, going straight, and the like.
In some embodiments, a corresponding yaw prediction model is trained in advance for the target intersection; the model can predict the yaw behavior of objects driving in different navigation directions in front of that intersection. In some embodiments, different yaw prediction models may be trained for different navigation directions; the model matching the navigation direction of the navigated object is selected, and that model then predicts from the driving features whether yaw behavior exists. For example, if the navigated object should exit to the right, but the model for exiting right determines from the driving features that the object is not actually exiting right, or is unlikely to, yaw behavior can be predicted.
In some embodiments, whether yaw behavior of the navigated object exists may be determined based on the degree of match between the driving features of the navigated object and the yaw prediction model.
In some embodiments, the driving features may also include, but are not limited to, the speed and/or lateral offset of the navigated object under different conditions (such as weather, time of day, or road congestion) and different road conditions in front of the target intersection (such as road grade, road structure, number of lanes, navigation-segment length, or how confusable the junction is). Within the driving features, quantities such as speed and/or lateral offset correspond to the distance between the navigated object and the target intersection.
In some embodiments, the lateral offset of the navigated object may be determined from the perpendicular distance between the navigated object and the road centerline at a preset distance in front of the target intersection. The edges of the road in front of the target intersection are known, the centerline can be determined from the coordinates of points on the edges, and the position of the navigated object at the preset distance can be determined from its track data, so the lateral offset can be computed from the position of the navigated object and the coordinates of the centerline.
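As a concrete illustration of this computation, the following Python sketch estimates the lateral offset as the perpendicular distance from the navigated object's position to the nearest centerline segment. It assumes planar (projected) coordinates and a polyline centerline; the function name and data layout are illustrative, not taken from the patent.

```python
import math

def lateral_offset(position, centerline):
    """Approximate the lateral offset as the perpendicular distance from the
    navigated object's position to the nearest segment of the road centerline.

    position:   (x, y) of the navigated object, in planar (projected) coordinates.
    centerline: list of (x, y) points describing the centerline of the road
                in front of the target intersection.
    """
    px, py = position
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(centerline, centerline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len_sq = dx * dx + dy * dy
        if seg_len_sq == 0.0:
            dist = math.hypot(px - x1, py - y1)
        else:
            # Project the point onto the segment and clamp to its endpoints.
            t = max(0.0, min(1.0, ((px - x1) * dx + (py - y1) * dy) / seg_len_sq))
            dist = math.hypot(px - (x1 + t * dx), py - (y1 + t * dy))
        best = min(best, dist)
    return best
```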
In some embodiments, the yaw prediction model may be a fitted curve model between speed and distance, i.e. the yaw prediction model may be a curve model based on the speed of the sample navigation object before the target intersection and the distance between the sample navigation object and the target intersection. Of course, the yaw prediction model may also be a more complex model, for example, a model trained by learning multiple characteristics of the speed, the direction angle, the lateral offset, and the like of the sample navigation object at different distances in front of the target intersection, such as a neural network model and the like. The yaw prediction models can be obtained by respectively training based on different road conditions and/or different road environments, that is, different road conditions and/or different environments can correspondingly train different yaw prediction models, and in the actual prediction process, the corresponding yaw prediction models can be selected for prediction based on the currently predicted road conditions and road environments. For example, different yaw prediction models can be obtained by training in different navigation directions for the same target intersection, and different target intersections can also correspond to different yaw prediction models; of course, multiple target intersections with similar or identical road conditions may correspond to the same yaw prediction model in the same navigation direction.
For a navigated object driving toward a target intersection, the embodiments of the disclosure acquire its driving data in front of the target intersection, extract from that data its driving features and navigation direction, predict its yaw behavior based on the navigation direction, the driving features, and the pre-trained yaw prediction model corresponding to the target intersection, and guide it into the correct driving direction when yaw behavior is predicted. In this way, when the navigated object tends to yaw during navigation, whether for its own reasons or due to external influences, it can be guided into the correct driving direction in time, which reduces its yaw probability and avoids detours.
In an optional implementation of this embodiment, the driving data include the current navigation data of the navigated object in front of the target intersection and the track data already generated by the navigated object under navigation of the current navigation data; step S102, extracting the driving features and navigation direction of the navigated object in front of the target intersection based on the driving data, further includes:
determining the navigation direction of the navigated object in front of the target intersection based on the current navigation data;
determining a correspondence between the speed of the navigated object and its distance to the target intersection based on the generated track data.
In this optional implementation, yaw prediction is performed while the navigated object is navigating, so the acquired driving data may include, but are not limited to, the current navigation data output to the navigated object by the navigation server, and may also include the track data generated by the navigated object over a period of time. The current navigation data may include, but are not limited to, the direction the navigated object is prompted to travel, such as exiting right, exiting left, or going straight.
The navigated object travels in front of the target intersection under the current navigation data and generates track data along the way. The generated track data may include, but are not limited to, the speed of the navigated object at each track point; from the position of each track point and the position of the target intersection, the distance between the navigated object and the target intersection at that point can be determined, and hence the correspondence between speed and distance.
In some embodiments, the speeds at the track points the navigated object has generated may be sampled at preset time intervals, for example one track point per second. To predict yaw behavior accurately, prediction with the yaw prediction model starts only after speeds at a sufficient number of track points, together with their corresponding distances, have been collected.
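A minimal sketch of this sampling step, assuming each track point carries a planar position and a speed and that points arrive at a fixed interval; the dictionary schema and the minimum-sample threshold are illustrative assumptions.

```python
import math

def speed_distance_pairs(track_points, intersection_pos, min_points=10):
    """Build the speed-vs-distance correspondence from already generated track data.

    track_points:     list of dicts with 'pos' (x, y) and 'speed' (m/s), assumed
                      to be sampled at a fixed interval such as one per second.
    intersection_pos: (x, y) of the target intersection.
    min_points:       minimum number of samples before prediction is attempted.
    Returns a list of (distance_to_intersection, speed) pairs, or None while
    there are not yet enough samples.
    """
    pairs = [
        (math.hypot(p["pos"][0] - intersection_pos[0],
                    p["pos"][1] - intersection_pos[1]),
         p["speed"])
        for p in track_points
    ]
    return pairs if len(pairs) >= min_points else None
```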
In an optional implementation of this embodiment, the yaw prediction model includes a speed-distance curve model obtained in advance by fitting; step S103, predicting yaw behavior of the navigated object based on the navigation direction, the driving features, and the yaw prediction model corresponding to the target intersection, further includes:
determining yaw behavior of the navigated object based on the degree of match between the speed-distance correspondence and the speed-distance curve model.
In this optional implementation, speed-distance curve models for different navigation directions can be fitted in advance from the driving features of sample navigation objects in front of the target intersection; each curve model gives the speed of sample navigation objects at each preset distance in front of the target intersection. During actual navigation, once the speed-distance correspondence of the navigated object in front of the target intersection is determined, it can be matched against the speed-distance curve model for the correct driving direction (the navigation direction indicated in the current navigation data): a high degree of match suggests a low probability of yaw, and a low degree of match a high probability. In other embodiments, the correspondence may instead be matched against the curve model for a wrong driving direction (a direction not indicated in the current navigation data): there, a high degree of match suggests a high probability of yaw, and a low degree of match a low probability. In further embodiments, a first degree of match against the correct-direction curve model and a second degree of match against the wrong-direction curve model can both be considered, for example by weighting them into a final degree of match and determining the yaw behavior of the navigated object from that.
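The patent does not specify the matching function, so the sketch below assumes an RMSE-based score against the fitted quartic speed curve (introduced later in the description) and a simple weighted combination of the correct-direction and wrong-direction matches; the names, the weight, and the tolerance are illustrative.

```python
import numpy as np

def match_degree(pairs, coeffs, scale=10.0):
    """Map the RMSE between observed speeds and a fitted speed-distance curve
    v(x) = a*x^4 + b*x^3 + c*x^2 + d*x + e into a score in (0, 1].
    'coeffs' are (a, b, c, d, e); 'scale' is an illustrative tolerance in m/s."""
    dist = np.array([d for d, _ in pairs])
    speed = np.array([v for _, v in pairs])
    rmse = float(np.sqrt(np.mean((speed - np.polyval(coeffs, dist)) ** 2)))
    return 1.0 / (1.0 + rmse / scale)

def yaw_probability(pairs, correct_coeffs, wrong_coeffs, w=0.5):
    """Weight a poor match against the correct-direction curve together with a
    good match against the wrong-direction curve (the weight is an assumption)."""
    m_correct = match_degree(pairs, correct_coeffs)
    m_wrong = match_degree(pairs, wrong_coeffs)
    return w * (1.0 - m_correct) + (1.0 - w) * m_wrong
```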
In an optional implementation of this embodiment, step S104, guiding the navigated object to travel in the correct direction when yaw behavior is predicted, further includes:
guiding the navigated object to travel in the correct driving direction using different guidance modes according to the probability that it yaws; the different guidance modes include one or more combinations of different sound effects, different animations, and different voice broadcast contents.
In this optional implementation, the yaw prediction model may output the probability of yaw behavior, and differentiated guidance can be adopted: different probability values trigger different guidance modes. A guidance mode may include, but is not limited to, a combination of one or more of different sound effects, different animations, and different voice broadcast contents.
Several sound effects, animations, and voice broadcast contents can be preset; based on the probability output by the current yaw prediction model, a combination of one or more of a sound effect, an animation, and a voice broadcast content is selected to remind the navigated object of its current yaw tendency.
The following illustrates one differentiated guidance scheme (presented as a table image in the original publication, which is not reproduced here).
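In the same spirit, a minimal Python sketch of probability-differentiated guidance; the thresholds and the sound/animation/voice labels are invented placeholders, not values from the patent's table.

```python
def choose_guidance(yaw_prob):
    """Pick a guidance mode from the predicted yaw probability. The thresholds
    and the sound/animation/voice labels are invented placeholders."""
    if yaw_prob >= 0.8:
        return {"sound": "urgent_tone", "animation": "flashing_arrow",
                "voice": "Please move right now to exit at the intersection ahead"}
    if yaw_prob >= 0.5:
        return {"sound": "gentle_tone", "animation": "highlighted_lane",
                "voice": "Prepare to exit right ahead"}
    return None  # low probability: ordinary navigation prompts suffice
```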
in an optional implementation manner of this embodiment, in step S101, the step of acquiring the driving data of the navigated object in front of the target intersection further includes the following steps:
when the navigated object meets a yaw prediction condition, acquiring the prediction-related positions in front of the target intersection determined in the yaw prediction model;
acquiring driving data of the navigated object at preset time intervals based on the prediction-related positions.
In this optional implementation, yaw prediction conditions may be preset. One condition is whether the navigation segment in front of the current target intersection is suitable for yaw prediction: in the training stage this can be judged from the similarity of the driving features of sample navigation objects in different navigation directions. If the driving features in different navigation directions in front of an intersection are similar, yaw cannot be reliably distinguished there and the segment is unsuitable for yaw prediction; if they are dissimilar, yaw behavior can be distinguished from that dissimilarity and the segment is suitable. Another condition is whether the remaining navigation segment between the navigated object and the target intersection is longer than a certain distance; if it is too short, not enough driving data can be collected for prediction.
When a yaw prediction condition is met, the predetermined prediction-related positions corresponding to the target intersection can be obtained; these positions are determined when the yaw prediction model is trained. In some embodiments, the prediction-related positions may include, but are not limited to, a start-collection position, a start-prediction position, and an end-prediction position: where collection of driving data can begin, where prediction of yaw behavior can begin, and where prediction must end, respectively. In some embodiments, a prediction-related position can be expressed as a distance to the target intersection, for example how many meters before the intersection the start-collection, start-prediction, or end-prediction position lies.
Different yaw prediction models define different prediction-related positions in front of the target intersection. When yaw prediction is required, the prediction-related positions are obtained first, and driving data of the navigated object are then collected at preset time intervals based on them. For example, once the navigated object reaches the start-collection position, collection of its driving data begins; once it reaches the start-prediction position, driving features are extracted from the data collected so far and its yaw behavior is predicted with the yaw prediction model.
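A simplified sketch of this collection-and-prediction loop, with the three prediction-related positions expressed as distances (in meters) to the target intersection; the callable parameters and the one-second interval are assumptions.

```python
import time

def run_online_prediction(get_distance, collect_point, predict,
                          start_collect, start_predict, end_predict):
    """Drive collection and prediction off the prediction-related positions,
    all expressed as distances (m) to the target intersection, so that
    start_collect > start_predict > end_predict as the object approaches."""
    points = []
    while True:
        d = get_distance()               # current distance to the intersection
        if d <= end_predict:             # past the end-prediction position: stop
            return None
        if d <= start_collect:           # inside the collection window
            points.append(collect_point())
        if d <= start_predict and points:
            return predict(points)       # enough context: run the yaw model
        time.sleep(1.0)                  # the preset sampling interval (assumed 1 s)
```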
Fig. 2 shows a flow chart of a yaw prediction model training method according to an embodiment of the present disclosure. As shown in Fig. 2, the training method includes the following steps:
in step S201, driving data of sample navigation objects in front of a target intersection are acquired;
in step S202, driving features of sample navigation objects traveling in different navigation directions in front of the target intersection are extracted based on the driving data;
in step S203, yaw prediction models for the different navigation directions are trained based on the driving features in those directions, so that the models can predict yaw behavior of a navigated object in front of the target intersection.
In this embodiment, the yaw prediction model training method may be executed on a navigation server. The sample navigation object may be any object that travels using navigation service navigation in front of a target intersection within a preset time range, and may be, for example, a user, an intelligent driving vehicle, an intelligent robot, or the like. The target intersection may be any intersection that requires training of a yaw prediction model. In some embodiments, during the course of the yaw prediction model training, the driving data may be acquired for a batch of target intersections, for example, corresponding driving data may be acquired for all intersections of a certain city or a certain area of a certain city, and then the driving data corresponding to each intersection may be grouped.
For each target intersection, driving data of multiple sample navigation objects within a preset time range can be obtained. In some embodiments, the driving data may include, but are not limited to, driving-related data generated by sample navigation objects within a preset distance in front of the target intersection during that time range, such as navigation data and track data. That is, the driving data may include the real driving tracks of the sample navigation objects within the preset distance of the target intersection (acquired with the authorization of the sample navigation objects), the navigation data output to them by the navigation service, and the like.
After the driving data of the sample navigation objects are acquired, their driving features can be extracted from the data. When doing so, the driving features of sample navigation objects traveling in different navigation directions are kept separate: the objects are grouped by navigation direction, one set of driving features is extracted per group, and different directions yield different sets. The navigation direction here refers to the direction prompted by the navigation data in front of the target intersection.
In some embodiments, the sample navigation objects may be divided into two classes, normal driving and yaw driving, according to whether the actual driving direction matches the navigation direction. If a sample navigation object's actual direction while navigating in front of the target intersection is consistent with the navigation direction, it can be regarded as a normal-driving object; otherwise it is a yaw-driving object. For example, if the navigation data prompt sample navigation object A to exit right before the target intersection but A keeps going straight, misses the exit, and does not leave at the target intersection, A is classed as yaw driving; if sample navigation object B is prompted to exit right and actually exits right at the target intersection, B is classed as normal driving. The driving features of normal-driving and yaw-driving samples can then be distinguished for subsequent model training.
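A minimal sketch of this grouping-and-labeling step; the dictionary schema ('nav_direction', 'actual_direction') is an illustrative assumption.

```python
from collections import defaultdict

def group_and_label(samples):
    """Group sample navigation objects by prompted navigation direction and
    label each one as normal driving or yaw driving."""
    groups = defaultdict(lambda: {"normal": [], "yaw": []})
    for s in samples:
        label = "normal" if s["actual_direction"] == s["nav_direction"] else "yaw"
        groups[s["nav_direction"]][label].append(s)
    return groups

# e.g. {'exit_right': {'normal': [...], 'yaw': [...]}, 'straight': {...}}
```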
In some embodiments, the driving features may include, but are not limited to, the speed of a sample navigation object while driving in front of the target intersection, its distance to the target intersection, its lateral offset on the road in front of the intersection, and the like. Taking the driving direction of the road as the longitudinal direction, the lateral offset can be understood as the offset of the sample navigation object across the road, for example relative to a road edge or the centerline. The navigation direction is the direction in which the sample navigation object should travel when navigating in front of the target intersection, that is, the correct driving direction prompted in its navigation data; it may include, but is not limited to, exiting right, exiting left, going straight, and the like.
In this embodiment, the yaw prediction model is trained on the extracted driving features of the sample navigation objects, so that it can predict the yaw behavior of navigated objects driving in different navigation directions in front of the target intersection. In some embodiments, different yaw prediction models may be trained for different navigation directions, so that during online prediction the model matching the navigation direction of the navigated object is selected, and that model predicts from the driving features whether yaw behavior exists. For example, if the navigated object should exit to the right, but the model for exiting right determines from its driving features that it is not actually exiting right, or is unlikely to, yaw behavior can be predicted.
In some embodiments, the yaw prediction model may include, but is not limited to, a speed-distance curve model, obtained by fitting a curve to the speeds in the sample navigation objects' driving features against their distances to the target intersection.
For example, the curve model may be expressed in the form of the following curve function:
v = ax^4 + bx^3 + cx^2 + dx + e
where v denotes speed, x denotes distance, and a, b, c, d and e are the model parameters to be trained.
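For illustration, such a quartic fit can be reproduced with NumPy's polynomial utilities; the sample distances and speeds below are invented placeholder data, not measurements from the patent.

```python
import numpy as np

# Invented placeholder samples: aggregated (distance, speed) pairs for one
# navigation direction in front of one target intersection.
distances = np.array([1000.0, 800.0, 600.0, 400.0, 200.0, 100.0, 50.0])  # m
speeds = np.array([22.0, 21.5, 20.0, 17.5, 13.0, 9.0, 6.0])              # m/s

coeffs = np.polyfit(distances, speeds, deg=4)   # [a, b, c, d, e], highest degree first
expected_speed = np.polyval(coeffs, 300.0)      # model speed 300 m before the intersection
```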
Of course, it is understood that the yaw prediction model may also be a more complex model, such as a model trained by learning multiple characteristics of speed, direction angle, lateral offset, etc. of the sample navigation object at different distances in front of the target intersection, such as a neural network model, etc. The yaw prediction models can be obtained by respectively training based on different road conditions and/or different road environments, that is, different road conditions and/or different environments can correspondingly train different yaw prediction models, and in the actual prediction process, the corresponding yaw prediction models can be selected for prediction based on the currently predicted road conditions and road environments. For example, different yaw prediction models can be obtained by training in different navigation directions for the same target intersection, and different target intersections can also correspond to different yaw prediction models; of course, multiple target intersections with similar or identical road conditions may correspond to the same yaw prediction model in the same navigation direction. The method is not particularly limited and may be based on practical needs.
In some embodiments, the driving features may further include, but are not limited to, the speed and/or lateral offset of the sample navigation objects under different conditions in front of the target intersection (such as weather, time of day, or road congestion) and different road conditions (such as road grade, road structure, number of lanes, navigation-segment length, or how confusable the junction is). Within the driving features, quantities such as speed and/or lateral offset correspond to the distance between the object and the target intersection. These driving features can be used to train the yaw prediction model, so that during online prediction the model can also judge from the navigated object's driving features whether yaw behavior exists. The driving features are not limited to the speeds and lateral offsets mentioned above and may also include other characteristics such as steering angle and yaw rate.
For each target intersection, the embodiments of the disclosure collect the driving data of sample navigation objects that navigated in front of that intersection within a preset time range, extract from the data the driving features of samples driving normally and abnormally in different navigation directions, and then train yaw prediction models for the different navigation directions from those features, so that the models can predict yaw behavior of a navigated object in front of the target intersection. In this way, a yaw prediction model can be trained from the driving features of sample navigation objects, so that when a navigated object tends to yaw during navigation, whether for its own reasons or due to external influences, it can be guided into the correct driving direction in time, which reduces its yaw probability and avoids detours and similar situations.
In an optional implementation of this embodiment, step S202, extracting the driving features of sample navigation objects traveling in different navigation directions in front of the target intersection based on the driving data, further includes:
determining the navigation direction of each sample navigation object in front of the target intersection based on the driving data;
determining distribution information of the driving features of the sample navigation objects, and determining the driving features in the same navigation direction based on that distribution information.
In this optional implementation, the navigation direction of each sample navigation object in front of the target intersection can be determined from the driving data, and the samples are divided into groups by navigation direction: the same direction corresponds to the same group, and different directions to different groups. As described above, the navigation direction of a sample navigation object is its correct driving direction before passing the target intersection and can be determined, for example, from the navigation data output by the navigation service.
After the sample navigation objects are grouped by navigation direction, distribution information of the driving features can be extracted for each group, and the group's driving features are then determined from that distribution. That is, the driving features for one navigation direction are an aggregate representation of the features of that group of samples.
In some embodiments, a driving feature is extracted for each object in a group, and the distribution information of the group in that navigation direction, for example hotspot distribution information, is then determined from those features. For the speed feature, the hotspot distribution of the group's speeds at several preset distances in front of the target intersection is determined, and the speed feature of the group, i.e., of that navigation direction, is derived from the hotspot distribution. In this way the driving features of all samples in one navigation direction are considered together, yielding features that more accurately represent that direction.
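One plausible reading of the hotspot distribution is a histogram mode; the sketch below picks, at one preset distance, the center of the most populated speed bin. The bin width and the fallback to the median are assumptions.

```python
import numpy as np

def representative_speed(speeds, bin_width=1.0):
    """Return the center of the most populated speed bin at one preset
    distance, as a stand-in for the 'hottest' point of the distribution."""
    speeds = np.asarray(speeds, dtype=float)
    edges = np.arange(speeds.min(), speeds.max() + bin_width, bin_width)
    if len(edges) < 2:                       # all samples (nearly) identical
        return float(np.median(speeds))
    counts, edges = np.histogram(speeds, bins=edges)
    i = int(np.argmax(counts))
    return float((edges[i] + edges[i + 1]) / 2.0)
```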
In some embodiments, the driving characteristics may include, but are not limited to, a combination of one or more of a speed, yaw information, and lateral offset of the sample navigation object in different scenarios. Different scenes may include, but are not limited to, scenes formed from combinations of one or more of different road conditions, different environments, and the like. The speed, the lateral offset, the yaw information and the like of the sample navigation object can be obtained through the trajectory data actually generated by the sample navigation object in the navigation process and the navigation data output by the navigation server. The yaw information may include information whether the sample navigation object yawed while navigating ahead of the target intersection.
In an optional implementation manner of this embodiment, the driving data includes historical navigation data of the sample navigation object before the target intersection; the step of determining a navigation direction of the sample navigation object in front of the target intersection based on the travel data further comprises the steps of:
determining a navigation direction of the sample navigation object in front of the target intersection based on the historical navigation data.
In this optional implementation, the sample navigation objects are objects that navigated through the target intersection within a preset time range, so their driving data may include, but are not limited to, the historical navigation data output to them by the navigation server within that range. The historical navigation data may include, but are not limited to, the directions the sample navigation object was prompted to travel, such as exiting right, exiting left, or going straight. The navigation direction of the sample navigation object in front of the target intersection can thus be obtained from the historical navigation data.
In an optional implementation of this embodiment, the driving data include the actual track data generated by the sample navigation objects in front of the target intersection under navigation of historical navigation data; determining the distribution information of the driving features further includes:
determining the speed and lateral offset of each sample navigation object when traveling at different preset distances in front of the target intersection based on the actual track data;
determining distribution information of speed and lateral offset at the different preset distances in the same navigation direction based on the speeds and lateral offsets of multiple sample navigation objects in that direction.
In this optional implementation, as described above, there are multiple sample navigation objects, and each object's actual track data can be used to determine its driving features, such as speed and lateral offset, in front of the target intersection. The actual track data may include, but are not limited to, the position and speed of the sample navigation object at each track point; from the position of each track point and the position of the target intersection, the distance between the sample navigation object and the intersection at that point can be determined, and hence the correspondence between its speed and its distance to the target intersection.
For a plurality of sample navigation objects in the same navigation direction, determining distribution information of the sample navigation objects in the navigation direction based on respective corresponding speed, lateral offset and other driving characteristics; the distribution information may be, for example, hotspot distribution information.
In an optional implementation manner of this embodiment, the driving characteristics include speeds and lateral offsets of the sample navigation object at different preset distances during driving before the target intersection; the step of training yaw prediction models in different navigation directions based on the driving characteristics in different navigation directions so that the yaw prediction models can predict the yaw behavior of the navigated object in front of the target intersection further comprises the steps of:
fitting based on the speeds of the sample navigation object at different preset distances in front of the target intersection in different navigation directions to obtain speed curve models of the speeds and the distances in different navigation directions;
fitting based on the lateral deviation of the sample navigation object in different navigation directions at different preset distances in front of the target intersection to obtain displacement curve models of the lateral deviation and the distance in different navigation directions;
determining predicted relevant positions in front of the target intersection in different navigation directions based on the speed curve model and the displacement curve model; when predicting the yaw behavior of the navigated object, the driving data of the navigated object is collected based on the predicted relevant positions and the yaw behavior is predicted from it.
In this optional implementation manner, the yaw prediction models may be trained respectively for different navigation directions in front of the target intersection, the same navigation direction of the same target intersection may correspond to the same yaw prediction model, and different navigation directions of the same target intersection may correspond to different yaw prediction models.
In some embodiments, during the yaw prediction model training process, driving characteristics can be extracted from the driving data of the sample navigation objects; these may include, but are not limited to, the speed and lateral offset of the sample navigation object at different preset distances while navigating in front of the target intersection. The different preset distances refer to the distances of the sample navigation object relative to the target intersection. In some embodiments, the driving data generated within a preset distance range of the target intersection, such as within 1 kilometer, may be collected over a period such as a day, and the preset distances may be positions taken at intervals within that range, for example one every 20 meters starting from 1 kilometer. In this way, speeds and lateral offset information at a plurality of preset distances can be extracted from the driving data.
For the speeds of the plurality of sample navigation objects in the same navigation direction at each preset distance, a final speed at that distance may be determined from the speed distribution information; for example, the speed that best represents the majority of the sample navigation objects at that distance can be found from the distribution of their speeds there. The speed and lateral offset at each preset distance in every navigation direction can be obtained in the same way.
A speed curve model in a navigation direction can then be fitted from the speeds at each preset distance in that direction, and a displacement curve model in that direction can be fitted from the lateral offsets at each preset distance in the same direction.
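As a sketch of this fitting step, assuming NumPy, a histogram mode as the statistic that "most represents the majority" of sample objects (the embodiment does not fix a particular estimator), and a polynomial as the curve form:

```python
import numpy as np

def representative_value(values, bins=20):
    # Center of the most populated histogram bin: the value that "most
    # represents the majority" of sample objects at one preset distance.
    counts, edges = np.histogram(values, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

def fit_curve_model(preset_distances, values_per_distance, degree=3):
    """Fit one curve model (speed vs. distance, or lateral offset vs. distance)
    for one navigation direction; values_per_distance[i] holds all sample
    values observed at preset_distances[i]."""
    y = [representative_value(v) for v in values_per_distance]
    return np.polyfit(preset_distances, y, degree)  # polynomial coefficients

# Usage (preset distances every 20 meters from 1000 m toward the intersection):
# distances = np.arange(1000, 0, -20)
# speed_model = fit_curve_model(distances, speeds_by_distance)
```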
Further, when the yaw behavior is predicted online, it is possible to start collecting the travel data from a prediction-related position set in advance and start predicting the yaw behavior. The predicted relevant positions can be determined based on a speed curve model and a displacement curve model when a yaw prediction model is trained, and different navigation directions can correspond to different predicted relevant positions.
In some embodiments, the predicted relevant locations may include, but are not limited to, a start collection location, a start prediction location, and an end prediction location. The start collection position may be a position where collection of travel data can start, the start prediction position may be a position where prediction of yaw behavior can start, and the end prediction position may be a position where prediction needs to be ended. In some embodiments, the predicted relevant location can be expressed in terms of the distance of the sample navigation object relative to the target intersection, such as how many meters from the target intersection are the start collection location, the start prediction location, or the end prediction location, etc.
In some embodiments, the predicted relevant positions ahead of the target intersection may be defined in the yaw prediction models of the different navigation directions. In predicting yaw behavior, the travel data of the navigated object may be acquired based on the predicted relevant positions. For example, when the navigated object reaches the start collection position, collection of its travel data is started; when it reaches the start prediction position, extraction of the travel characteristics from the travel data collected so far is started, and the yaw behavior of the navigated object is predicted based on the yaw prediction model.
The lateral offset of the sample navigation object may be determined based on a vertical distance of the sample navigation object from a center line of a road ahead of the target intersection at the preset distance. The sidelines of the road ahead of the target intersection are known, the center line can be determined based on the coordinates of each point on the sidelines, and the position of the sample navigation object at the preset distance can also be determined based on the trajectory data, so that the lateral offset can be determined based on the position of the sample navigation object at the preset distance and the position coordinates of the center line of the road ahead of the target intersection.
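A minimal sketch, assuming projected planar coordinates in meters, of computing the lateral offset as the vertical distance from a position to the road centerline polyline:

```python
import math

def point_segment_distance(point, seg_a, seg_b):
    """Vertical (perpendicular) distance from a projected (x, y) point, in
    meters, to the centerline segment seg_a -> seg_b."""
    ax, ay = seg_a
    bx, by = seg_b
    px, py = point
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project the point onto the segment, clamping to the endpoints.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def lateral_offset(position, centerline):
    """centerline: list of (x, y) vertices derived from the known road
    sidelines; the lateral offset is the distance to the nearest segment."""
    return min(point_segment_distance(position, a, b)
               for a, b in zip(centerline, centerline[1:]))
```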
In an optional implementation manner of this embodiment, the step of determining the predicted relevant position in front of the target intersection in different navigation directions based on the speed curve model and the displacement curve model further includes the following steps:
determining speed similarity curves between the speed curve models in different navigation directions and determining displacement similarity curves between the displacement curve models in different navigation directions;
determining a first predicted position based on an inflection point on the velocity similarity curve and a second predicted position based on an inflection point on the displacement similarity curve;
determining a predicted relevant location based on the first predicted location and the second predicted location.
In this optional implementation, corresponding speed curve models and displacement curve models can be trained for the target intersection in different navigation directions. In order to compare the characteristics of sample navigation objects traveling in different navigation directions in both the speed and lateral offset dimensions, the similarity between the speed curve models in the different navigation directions and the similarity between the displacement curve models in the different navigation directions may be compared.
For example, if the different navigation directions include straight traveling and right-driving-out, a speed similarity curve between a speed curve model corresponding to the straight traveling and a speed curve model corresponding to the right-driving-out may be determined, and then the difference in speed between the straight traveling and the right-driving-out navigation objects may be determined based on the speed similarity curve.
The inflection points on the speed similarity curve and the displacement similarity curve can be understood as position nodes at which the driving characteristics of navigation objects in different navigation directions change from similar to dissimilar. An inflection point appears on the curve as a maximum point or a minimum point. Therefore, the predicted relevant position can be determined based on the inflection points on the speed similarity curve and the displacement similarity curve.
An implementation of determining the predicted relevant position based on the inflection points is illustrated below, taking straight travel and driving out to the right as examples; it is understood that the different navigation directions are not limited to these two.
An inflection point set can be found from the speed similarity curve of straight travel versus driving out to the right, and an inflection point is recorded as the first predicted position when it satisfies the following conditions:
1. The inflection point is a downward inflection point (a vertex) within 200 to 1000 meters of the target intersection, and its slope is greater than a preset slope, for example 0.3. The slope is computed with 100 meters on the abscissa as one unit value, which is equivalent to requiring the similarity to change by more than 0.3 within 100 meters, i.e. (y2 - y1)/((x2 - x1)/100) > 0.3, where (x1, y1) and (x2, y2) are two adjacent points on the speed similarity curve.
2. The number of inflection points in the inflection point set is greater than 1 and less than 3 (if there are too many inflection points, the speed fluctuates too much for the position to be suitable as a predicted position).
3. The inflection point with the largest slope (i.e. the largest similarity change) in the inflection point set is taken as the final inflection point, and the abscissa of the final inflection point is the first predicted position; for example, the inflection point at 400 meters shown in fig. 3 is selected as the final inflection point.
The second predicted position is selected on the same principle as the first predicted position, except that it is based on the displacement similarity curve, which is not described herein again. A sketch of this inflection point filtering is given below.
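For illustration, a sketch of the inflection point detection and filtering described by conditions 1-3 above, applied to a sampled speed similarity curve; the choice of the previous neighbor as the "adjacent point" in the slope formula is an assumption:

```python
def first_predicted_position(xs, ys):
    """xs: distances (meters) from the target intersection; ys: speed
    similarity values at those distances. Returns the abscissa of the chosen
    inflection point, or None if the conditions above are not met."""
    candidates = []
    for i in range(1, len(xs) - 1):
        # Condition 1a: a downward inflection point (vertex, local maximum).
        if not (ys[i] > ys[i - 1] and ys[i] > ys[i + 1]):
            continue
        # Condition 1b: within 200-1000 meters of the target intersection.
        if not (200 <= xs[i] <= 1000):
            continue
        # Condition 1c: slope with 100 m on the abscissa as one unit value,
        # i.e. (y2 - y1) / ((x2 - x1) / 100) > 0.3 for adjacent points.
        slope = abs(ys[i] - ys[i - 1]) / (abs(xs[i] - xs[i - 1]) / 100.0)
        if slope > 0.3:
            candidates.append((slope, xs[i]))
    # Condition 2: more than 1 and fewer than 3 inflection points in the set.
    if not (1 < len(candidates) < 3):
        return None
    # Condition 3: take the inflection point with the largest slope.
    return max(candidates)[1]
```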
An implementation for determining the lateral offset and the inflection points on the displacement curve models is illustrated below:
In this example, the determination of the lateral offset is still described taking straight travel and driving out to the right as examples. The lateral offsets in the different navigation directions may each be represented as an interval; as shown in fig. 4, the two directions, straight travel and driving out to the right, correspond to two intervals, i.e. 4 curves.
The lateral offset interval is obtained from the position coordinate data of the sample navigation objects traveling in the same direction at the target intersection; the abscissa may be the distance relative to the target intersection, and the ordinate may be the difference between a position coordinate point of the sample navigation object and the corresponding position coordinate point of the road centerline in front of the target intersection.
A median curve within each direction's interval is then calculated from the two sideline curves of that interval, yielding the two curves for straight travel and driving out to the right shown in fig. 5A, i.e. the displacement curve models in those two directions.
The difference between the two displacement curve models, for the straight direction and the right-exit direction, is then calculated to obtain the displacement similarity curve shown in fig. 5B.
For the inflection point calculation and filtering shown in fig. 5B, refer to the description of the speed similarity curve above, except that for the displacement curve models an upward inflection point (i.e. a bottom point) is taken instead of a downward one.
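A sketch of the interval-to-similarity computation of figs. 4 to 5B, assuming each direction's lateral offset interval is given as two sideline curves sampled at the same preset distances and that NumPy is available:

```python
import numpy as np

def displacement_curve_model(upper_sideline, lower_sideline):
    # Median curve of one direction's lateral offset interval (fig. 5A).
    return (np.asarray(upper_sideline) + np.asarray(lower_sideline)) / 2.0

def displacement_similarity(interval_a, interval_b):
    """interval_a, interval_b: (upper, lower) sideline curves for two
    navigation directions, sampled at the same preset distances. Returns the
    displacement similarity curve (fig. 5B) as the difference between the two
    displacement curve models."""
    return np.abs(displacement_curve_model(*interval_a)
                  - displacement_curve_model(*interval_b))
```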
From the first predicted position and the second predicted position, a prediction reference position can be calculated; for example, whichever of the first predicted position and the second predicted position is farther from the target intersection may be selected as the prediction reference position.
In some embodiments, the predicted relevant location may include, but is not limited to, a start predicted location, a start collection location, and/or a stop predicted location.
After the prediction reference position is determined, the prediction-related position may be determined based on the prediction reference position.
For example, the start prediction position may be the prediction reference position increased by a certain distance, i.e. a certain distance farther from the target intersection than the prediction reference position, so that prediction starts a few seconds earlier. The start collection position may in turn be a certain distance farther than the start prediction position, so that by the time the navigated object reaches the start prediction position enough driving data has been collected to extract the driving characteristics, after which the yaw prediction model is used for prediction. The end prediction position may be set at a preset distance from the target intersection, such as 200 meters.
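A minimal sketch of deriving the prediction-related positions from the prediction reference position; the lead distances are illustrative placeholders, as only the 200-meter end-prediction example is given in the text:

```python
def prediction_related_positions(reference_m, predict_lead_m=100.0,
                                 collect_lead_m=100.0, end_m=200.0):
    """All values are distances (meters) from the target intersection.
    predict_lead_m and collect_lead_m are illustrative assumptions."""
    start_prediction = reference_m + predict_lead_m       # predict a few seconds earlier
    start_collection = start_prediction + collect_lead_m  # buffer enough driving data first
    end_prediction = end_m
    return start_collection, start_prediction, end_prediction
```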
In an optional implementation manner of this embodiment, the driving characteristics further include historical yaw information of the sample navigation object; the method further comprises the steps of:
determining a speed similarity curve between the speed curve models in different navigation directions;
determining whether the target intersection is suitable for a prediction of yaw behavior based on the speed similarity curve, the displacement curve model, and the historical yaw information.
In this optional implementation, after the speed curve models are obtained during model training, the similarity between the speed curve models in different navigation directions can be calculated. If the speed curve models are similar to each other, they alone may be unable to distinguish the actual driving direction of a navigated object in front of the target intersection. In that case the displacement curve models and the historical yaw information can be used for further confirmation: if the displacement curve models in different navigation directions are also similar and/or the historical yaw rate in front of the target intersection is low, the target intersection may be considered unsuitable for yaw behavior prediction and marked as an intersection without yaw prediction, so that no yaw prediction is performed when a navigated object actually passes the intersection while using the navigation service. A target intersection that does not meet these conditions can be marked as suitable for yaw behavior prediction, and yaw behavior is then predicted when a navigated object passes it. The historical yaw information of the sample navigation objects may be a comprehensive expression of the historical yaw information of the plurality of sample navigation objects before the target intersection, for example their average yaw rate.
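For illustration, a sketch of this suitability check; the similarity flags and the yaw rate threshold are assumptions standing in for whatever similarity measure and threshold an implementation chooses:

```python
def supports_yaw_prediction(speed_models_similar, displacement_models_similar,
                            historical_yaw_rate, min_yaw_rate=0.05):
    """Decide whether a target intersection should be marked as suitable for
    yaw behavior prediction. min_yaw_rate is an illustrative threshold."""
    if not speed_models_similar:
        # Speed curve models already separate the navigation directions.
        return True
    if displacement_models_similar or historical_yaw_rate < min_yaw_rate:
        # Mark as an intersection without yaw prediction.
        return False
    return True
```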
The following are embodiments of the disclosed apparatus that may be used to perform embodiments of the disclosed methods.
Fig. 6 shows a block diagram of a yaw guide apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of both. As shown in fig. 6, the yaw guide apparatus includes:
a first acquisition module 601 configured to acquire driving data of a navigated object in front of a target intersection;
a first extraction module 602 configured to extract a driving feature and a navigation direction of the navigated object in front of the target intersection based on the driving data;
a prediction module 603 configured to predict a yaw behavior of the navigated object based on the navigation direction, the driving characteristics, and a yaw prediction model corresponding to the target intersection;
a guiding module 604 configured to guide the navigated object to travel in a correct direction when yaw behavior of the navigated object is predicted to occur.
In this embodiment, the yaw guiding device may be implemented on a navigation terminal or a navigation server. The navigated object may be any object currently traveling in front of the target intersection, such as a user, a smart driving vehicle, a smart robot, and the like. The target intersection can be any intersection which is trained in advance to obtain a yaw prediction model. Intersections can be defined in the road network data in advance, and usually refer to intersections with branches.
The embodiment can perform yaw prediction for a navigated object traveling in front of a target intersection. In the yaw prediction, the driving data of the navigated object in front of the target intersection can be acquired. The driving data may include, but is not limited to, a real driving trajectory of the navigated object (acquired upon obtaining authorization of the navigated object), navigation data output by the navigation service to the navigated object, and the like.
After the travel data of the navigated object is acquired, the travel characteristics and navigation directions of the navigated object may be extracted based on the travel data. The driving characteristics may include, but are not limited to, the speed of the navigated object during driving in front of the target intersection, the distance from the target intersection, the lateral offset on the road in front of the target intersection, and the like. In the case of a road driving direction being understood as vertical, a lateral offset may be understood as an offset of the navigated object in the lateral direction of the road, e.g. an offset of the navigated object with respect to a certain edge or center line of the road. The navigation direction may be a direction in which the navigated object should travel before the target intersection, that is, a correct travel direction of the navigated object before the target intersection, and the navigation direction may include, but is not limited to, driving out to the right, driving out to the left, going straight, and the like.
In some embodiments, a corresponding yaw prediction model is obtained by training for the target intersection in advance, and the yaw prediction model can predict the yaw behavior of objects driving in different navigation directions in front of the target intersection. In some embodiments, different yaw prediction models may be trained for different navigation directions; a corresponding yaw prediction model can be selected based on the navigation direction of the navigated object, and that model then predicts, based on the driving characteristics of the navigated object, whether yaw behavior exists. For example, the navigation direction of the navigated object should be driving out to the right, but the yaw prediction model corresponding to driving out to the right determines from the driving characteristics that the navigated object is not actually going to drive out to the right, or has a low probability of doing so; at this time, it can be predicted that yaw behavior exists for the navigated object.
In some embodiments, it may be determined whether yaw behavior of the navigated object exists by matching a degree of match between the driving characteristics of the navigated object and the yaw prediction model.
In some embodiments, the driving characteristics may also include, but are not limited to, characteristics of speed and/or lateral offset of the navigated object under different circumstances (such as weather conditions, time of day, road congestion conditions, etc.) and different road conditions in front of the target intersection (such as road grade, road structure, number of lanes, navigation segment length, confusability, etc.). Within the driving characteristics, features such as the speed and/or lateral offset of the navigated object correspond to the relative distance between the navigated object and the target intersection.
In some embodiments, the lateral offset of the navigated object can be determined based on the vertical distance of the navigated object from the road centerline ahead of the target intersection at a preset distance ahead of the target intersection. The side line of the road before the target intersection is known, the center line can be determined based on the coordinates of each point on the side line, and the position of the navigated object at the preset distance can also be determined based on the trajectory data, so the lateral offset of the navigated object can be determined based on the position of the navigated object at the preset distance and the position coordinates of the center line of the road before the target intersection.
In some embodiments, the yaw prediction model may be a fitted curve model between speed and distance, i.e. the yaw prediction model may be a curve model based on the speed of the sample navigation object before the target intersection and the distance between the sample navigation object and the target intersection. Of course, the yaw prediction model may also be a more complex model, for example, a model trained by learning a plurality of characteristics of the speed, the direction angle, the lateral offset, and the like of the sample navigation object at different distances in front of the target intersection, such as a neural network model and the like. The yaw prediction models can be obtained by respectively training based on different road conditions and/or different road environments, namely different yaw prediction models can be correspondingly trained under different road conditions and/or different environments, and corresponding yaw prediction models can be selected for prediction based on the currently predicted road conditions and road environments in the actual prediction process. For example, different yaw prediction models can be obtained by training in different navigation directions for the same target intersection, and different target intersections can also correspond to different yaw prediction models; of course, multiple target intersections with similar or identical road conditions may correspond to the same yaw prediction model in the same navigation direction.
For a navigated object driving toward a target intersection, the embodiment of the disclosure acquires the driving data of the navigated object in front of the target intersection, extracts the driving characteristics and the navigation direction of the navigated object in front of the target intersection from the driving data, then predicts the yaw behavior of the navigated object based on the navigation direction, the driving characteristics, and the pre-trained yaw prediction model corresponding to the target intersection, and guides the navigated object into the correct driving direction when yaw behavior is predicted. In this way, when the navigated object is about to yaw during navigation, whether for its own reasons or due to external influences, it can be guided into the correct driving direction in time, reducing its yaw probability and avoiding detours and similar situations.
In an optional implementation manner of this embodiment, the driving data includes current navigation data of the navigated object in front of the target intersection and generated trajectory data of the navigated object under navigation of the current navigation data; the first extraction module comprises:
a first determination sub-module configured to determine a navigation direction of the navigated object in front of the target intersection based on the current navigation data;
a second determination sub-module configured to determine a correspondence between a speed of the navigated object and a distance of the navigated object relative to the target intersection based on the generated trajectory data.
In this alternative implementation, the yaw prediction is performed during the navigation process of the navigated object, and thus the acquired driving data of the navigated object may include, but is not limited to, the current navigation data output to the navigated object by the navigation server, and may also include trajectory data that has been generated by the navigated object over a period of time. The current navigation data may include, but is not limited to, directions indicating that the navigated object should travel, such as driving out to the right, driving out to the left, or going straight, etc.
The navigated object travels in front of the target intersection based on the current navigation data, and trajectory data can be generated during that travel. The generated trajectory data may include, but is not limited to, the speed of the navigated object at each trajectory point; based on the position information of each trajectory point and the position information of the target intersection, the distance between the navigated object and the target intersection at each trajectory point can be determined, and the correspondence between the speed of the navigated object and its distance relative to the target intersection can then be determined.
In some embodiments, the velocities at the trajectory points already generated by the navigated object may be obtained at preset time intervals; for example, the velocity at one trajectory point may be obtained every second. To predict the yaw behavior of the navigated object accurately, once velocities at a sufficient number of trajectory points have been obtained, the yaw behavior may be predicted using the yaw prediction model based on those velocities and the corresponding distances.
In an optional implementation manner of this embodiment, the yaw prediction model includes a speed curve model of speed and distance obtained by pre-fitting training; the prediction module comprises:
a third determination submodule configured to determine the yaw behavior of the navigated object based on a degree of match between the speed-distance correspondence and the speed curve models of speed versus distance.
In this optional implementation, speed curve models of speed versus distance in different navigation directions can be fitted in advance from the driving characteristics of sample navigation objects in front of the target intersection; each curve model describes the speed of the sample navigation objects at each preset distance in front of the target intersection. In the actual navigation process, after the correspondence between speed and distance of the navigated object in front of the target intersection is determined, it can be matched against the speed curve model corresponding to the correct driving direction of the navigated object (i.e. the navigation direction indicated in the current navigation data): if the degree of matching is high, the possibility that the navigated object yaws can be considered low, and if the degree of matching is low, the possibility can be considered high. In other embodiments, the correspondence may instead be matched against the speed curve model corresponding to a wrong driving direction (i.e. a navigation direction not indicated in the current navigation data): a high degree of matching then indicates a high possibility of yawing, and a low degree a low possibility. In still other embodiments, a first degree of matching against the correct-direction speed curve model and a second degree of matching against the wrong-direction speed curve model may both be considered, for example by weighting the first and second degrees of matching to obtain a final degree of matching and determining the yaw behavior of the navigated object based on it.
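A minimal sketch of such a matching computation, assuming polynomial speed curve models (as fitted earlier) and an RMSE-based degree of match; the weighting scheme is illustrative:

```python
import numpy as np

def match_degree(observed, model_coeffs):
    """observed: (distance_m, speed) pairs from the generated trajectory data;
    model_coeffs: polynomial speed curve model for one navigation direction.
    Returns a value in (0, 1]; higher means the observed speeds fit better."""
    d = np.array([p[0] for p in observed], dtype=float)
    v = np.array([p[1] for p in observed], dtype=float)
    rmse = np.sqrt(np.mean((v - np.polyval(model_coeffs, d)) ** 2))
    return 1.0 / (1.0 + rmse)

def yaw_probability(observed, correct_model, wrong_model, w=0.5):
    # Weighted combination of (mis)match with the correct-direction model and
    # match with the wrong-direction model; the weight w is illustrative.
    return (w * (1.0 - match_degree(observed, correct_model))
            + (1.0 - w) * match_degree(observed, wrong_model))
```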
In an optional implementation manner of this embodiment, the guidance module includes:
the guiding sub-module is configured to guide the navigated object to drive in a correct navigation direction in different guiding modes based on the probability of yaw behavior of the navigated object; the different guiding modes comprise one or more combinations of different sound effects, different animations and different voice playing contents.
In this optional implementation manner, the yaw prediction model may predict the probability of yaw behavior of the navigated object, and a differentiated guidance approach may be adopted: different guidance modes are used for different probability values. The guidance mode may include, but is not limited to, a combination of one or more of different sound effects, different animations, and different voice playing contents.
A plurality of different sound effects, animations, and voice playing contents can be preset; based on the probability predicted by the current yaw prediction model, a combination of one or more of a sound effect, an animation, and a voice playing content is selected to alert the navigated object to its current yaw behavior.
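For illustration, a sketch of mapping the predicted probability to a differentiated guidance mode; the thresholds and asset names are hypothetical placeholders:

```python
def pick_guidance(probability):
    """Map the predicted yaw probability to a guidance mode; the thresholds
    and the sound/animation/speech assets are illustrative assumptions."""
    if probability >= 0.8:
        return {"sound": "urgent.wav", "animation": "arrow_flash",
                "speech": "Please keep right to exit ahead"}
    if probability >= 0.5:
        return {"sound": "notice.wav", "animation": "arrow_steady",
                "speech": "Prepare to exit on the right"}
    return {"speech": "Continue along the current route"}
```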
In an optional implementation manner of this embodiment, the first obtaining module includes:
a first obtaining sub-module configured to obtain a predicted relevant position before the target intersection included in the yaw prediction model when the navigated object satisfies a yaw prediction condition;
an acquisition sub-module configured to acquire travel data of the navigated object at preset time intervals based on the predicted relevant locations.
In this optional implementation, a yaw prediction condition may be preset. One condition is whether the navigation segment in front of the current target intersection is suitable for yaw prediction: during training, this may be determined from the similarity of the driving characteristics of sample navigation objects in different navigation directions. If the driving characteristics in different navigation directions in front of a target intersection are similar, whether a driving object in front of that intersection is yawing cannot be accurately distinguished, and the navigation segment is unsuitable for yaw prediction; if the driving characteristics in different navigation directions are dissimilar, yaw behavior can be accurately distinguished based on those differences, and the segment is suitable for yaw prediction. Another condition is whether the navigated object is still far enough from the target intersection: if the remaining distance is too short, not enough driving data can be collected for prediction.
Under the condition that a yaw prediction condition is met, a predetermined prediction related position corresponding to the target intersection can be obtained, and the prediction related position is determined when a yaw prediction model is trained. In some embodiments, the predicted relevant locations may include, but are not limited to, a start collection location, a start prediction location, and an end prediction location. The start collection position may be a position where collection of travel data can start, the start prediction position may be a position where prediction of yaw behavior can start, and the end prediction position may be a position where prediction needs to be ended. In some embodiments, the predicted relevant location can be expressed in terms of the distance of the navigated object relative to the target intersection, such as how many meters from the target intersection are the start collection location, the start prediction location, or the end prediction location, among others.
Different predicted relevant positions in front of the target intersection are defined in different yaw prediction models. When it is determined that yaw prediction is needed, the predicted relevant positions can first be obtained, and the driving data of the navigated object is then collected at preset time intervals based on those positions. For example, after the navigated object reaches the start collection position, collection of its driving data begins; after it reaches the start prediction position, driving characteristics are extracted from the previously collected driving data and the yaw behavior of the navigated object is predicted based on the yaw prediction model.
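A minimal sketch of driving the collection and prediction flow from the predicted relevant positions; the sampling and callback structure is an assumption about one possible implementation:

```python
def on_position_update(distance_m, positions, buffer, predict_fn):
    """distance_m: current distance from the navigated object to the target
    intersection; positions: (start_collection, start_prediction,
    end_prediction), all as distances in meters; buffer: accumulated driving
    data samples (e.g. one per second); predict_fn: runs the yaw prediction
    model on features extracted from the buffer."""
    start_collection, start_prediction, end_prediction = positions
    if distance_m <= end_prediction:
        return "stop"                      # the prediction window has ended
    if distance_m <= start_collection:
        buffer.append(distance_m)          # in practice: append speed, offset, ...
    if distance_m <= start_prediction and buffer:
        predict_fn(buffer)                 # extract features and predict yaw
    return "continue"
```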
Fig. 7 shows a block diagram of a yaw prediction model training apparatus according to an embodiment of the present disclosure. The apparatus may be implemented as part or all of an electronic device through software, hardware, or a combination of both. As shown in fig. 7, the yaw prediction model training apparatus includes:
a second obtaining module 701 configured to obtain driving data of a sample navigation object before a target intersection;
a second extraction module 702 configured to extract travel features of sample navigation objects traveling in different navigation directions in front of the target intersection based on the travel data;
a training module 703 configured to train a yaw prediction model in different navigation directions based on the driving characteristics in different navigation directions, so that the yaw prediction model can predict the yaw behavior of the navigated object in front of the target intersection.
In this embodiment, the yaw prediction model training apparatus may be executed on a navigation server. The sample navigation object may be any object that travels using navigation service navigation in front of a target intersection within a preset time range, and may be, for example, a user, an intelligent driving vehicle, an intelligent robot, or the like. The target intersection may be any intersection that requires training of a yaw prediction model. In some embodiments, in the course of the yaw prediction model training, the driving data may be obtained for a batch of target intersections, for example, the corresponding driving data may be obtained for all intersections of a certain city or a certain area of a certain city, and then the driving data corresponding to each intersection may be grouped.
Each target intersection can obtain the running data of a plurality of sample navigation objects within a preset time range. In some embodiments, the driving data may include, but is not limited to, driving-related data, such as navigation data, trajectory data, and the like, generated by the sample navigation object driving within a preset distance before the target intersection within the preset time range. That is, the driving data may include, but is not limited to, a real driving track of the sample navigation object within a preset distance range before the target intersection by a preset distance (acquired on the premise of obtaining the authorization of the sample navigation object), navigation data output by the navigation service to the sample navigation object, and the like.
After the travel data of the sample navigation object is acquired, the travel characteristics of the sample navigation object may be extracted based on the travel data. When the travel characteristics of the sample navigation object are extracted, the travel characteristics of the sample navigation object traveling in different navigation directions may be distinguished based on the different navigation directions. That is, the sample navigation objects may be grouped according to navigation directions, and then the sample navigation objects in the same navigation direction extract a set of driving features in the navigation direction, and the sample navigation objects in different navigation directions extract different sets of driving features. It is understood that the navigation direction mentioned herein refers to the navigation direction of the navigation data prompt in front of the target intersection.
In some embodiments, the sample navigation objects may be classified into two categories, normal driving and yaw driving, based on whether the actual driving direction and the navigation direction are the same. If the actual driving direction of a sample navigation object is consistent with its navigation direction while navigating in front of the target intersection, it can be considered a normally driving object; if the two are inconsistent, it can be considered a yawing object. For example, during the navigation of sample navigation object A, the navigation data prompts it to drive out to the right before passing the target intersection, but sample navigation object A does not actually drive out to the right and keeps going straight, so that it does not exit at the target intersection but misses it; sample navigation object A can then be classified as a yaw-driving sample navigation object. During the navigation of sample navigation object B, the navigation data prompts it to drive out to the right before passing the target intersection, and sample navigation object B actually drives out to the right from the target intersection, so it can be classified as a normally driving sample navigation object. The driving characteristics of normally driving and yaw-driving sample navigation objects can be distinguished for subsequent model training.
In some embodiments, the driving characteristics may include, but are not limited to, speed of the sample navigation object during driving in front of the target intersection, distance from the target intersection, lateral offset on the road in front of the target intersection, and the like. The navigation direction may be a direction in which the sample navigation object should travel when navigating in front of the target intersection, that is, a correct travel direction prompted by the sample navigation object in the navigation data before the target intersection, and the navigation direction may include, but is not limited to, right-side exit, left-side exit, straight travel, and the like.
In this embodiment, the yaw prediction model is trained on the extracted driving characteristics of the sample navigation objects, so that it can predict the yaw behavior of a navigated object driving in different navigation directions in front of the target intersection. In some embodiments, different yaw prediction models may be trained for different navigation directions, so that during online prediction a corresponding yaw prediction model can be selected based on the navigation direction of the navigated object, and that model then predicts whether yaw behavior exists based on the driving characteristics of the navigated object. For example, the navigation direction of the navigated object should be driving out to the right, but the yaw prediction model corresponding to driving out to the right determines from the driving characteristics that the navigated object is not actually going to drive out to the right, or has a low probability of doing so; at this time, the existence of yaw behavior of the navigated object can be predicted.
In some embodiments, the yaw prediction model may include, but is not limited to, a curve model between speed and distance, which may be a curve fit to the speed and distance relative to the target intersection in the driving characteristics of the sample navigation object.
Of course, it is understood that the yaw prediction model may also be a more complex model, such as a model trained by learning multiple characteristics of the speed, direction angle, lateral offset, etc. of the sample navigation object at different distances in front of the target intersection, such as a neural network model and the like. The yaw prediction models can be obtained by respectively training based on different road conditions and/or different road environments, that is, different road conditions and/or different environments can correspond to differently trained yaw prediction models, and in the actual prediction process, the corresponding yaw prediction model can be selected for prediction based on the current road conditions and road environment. For example, different yaw prediction models can be obtained by training in different navigation directions for the same target intersection, and different target intersections can also correspond to different yaw prediction models; of course, multiple target intersections with similar or identical road conditions may correspond to the same yaw prediction model in the same navigation direction. This is not particularly limited here and may be set according to practical needs.
In some embodiments, the driving characteristics may also include, but are not limited to, characteristics of speed and/or lateral offset of the navigated object under different circumstances (such as weather conditions, time of day, road congestion conditions, etc.) and different road conditions in front of the target intersection (such as road grade, road structure, number of lanes, navigation segment length, confusability, etc.). Within the driving characteristics, features such as the speed and/or lateral offset of the navigated object correspond to the relative distance between the navigated object and the target intersection. The yaw prediction model can be trained with these driving characteristics, so that during online prediction the model can predict whether the navigated object exhibits yaw behavior from its driving characteristics. It will be appreciated that the driving characteristics are not limited to the speeds, lateral offsets, etc. mentioned above, and may also include other characteristics such as steering angle and yaw rate.
For each target intersection, the embodiment of the disclosure collects the driving data of the sample navigation objects navigated in front of the target intersection within a preset time range, extracts from the driving data the driving characteristics of sample navigation objects driving normally and abnormally in different navigation directions in front of the target intersection, and then trains yaw prediction models in the different navigation directions based on those driving characteristics, so that the yaw prediction models can predict the yaw behavior of a navigated object in front of the target intersection. In this way, the yaw prediction model can be trained on the driving characteristics of the sample navigation objects, so that when a navigated object is about to yaw during navigation, whether for its own reasons or due to external influences, it can be guided into the correct driving direction in time, reducing its yaw probability and avoiding detours and similar situations.
In an optional implementation manner of this embodiment, the second extracting module includes:
a fourth determination sub-module configured to determine a navigation direction of the sample navigation object before the target intersection based on the travel data;
a fifth determination submodule configured to determine distribution information of the sample navigation object driving characteristics;
a sixth determination submodule configured to determine a travel characteristic of the sample navigation object in the same navigation direction based on the distribution information.
In this optional implementation, the navigation direction of the sample navigation object in front of the target intersection may be determined first through the driving data, and the sample navigation objects are divided into different groups based on the navigation direction, where the same navigation direction corresponds to the same group of sample navigation objects, and different navigation directions correspond to different groups of sample navigation objects. As described above, the navigation direction of the sample navigation object is the correct driving direction of the sample navigation object before navigating through the target intersection, and can be determined by the navigation data output by the navigation service, for example.
After the sample navigation objects are grouped according to the navigation direction, the distribution information of the driving characteristics can be extracted for each group of sample navigation objects, and then the driving characteristics of the group of sample navigation objects are determined based on the distribution information. That is, the driving characteristics of the sample navigation objects in the same navigation direction are a comprehensive expression of the driving characteristics of the set of sample navigation objects.
In some embodiments, a corresponding driving feature may be extracted for each object in each group of sample navigation objects, and the distribution information of the group in its navigation direction may then be determined from those driving features; the distribution information may be, for example, hotspot distribution information. If the driving characteristic is a speed characteristic, the hotspot distribution of the speeds of the sample navigation objects in the same navigation direction at a plurality of preset distances in front of the target intersection can be determined from the hotspot distribution information, and the speed characteristic of the group, i.e. the speed characteristic in that navigation direction, can be determined based on that distribution. In this way, the driving characteristics of the sample navigation objects in the same navigation direction can be considered comprehensively, so that more accurate driving characteristics representative of that navigation direction can be found.
In some embodiments, the driving characteristics may include, but are not limited to, a combination of one or more of a speed, yaw information, and lateral offset of the sample navigation object under different scenarios. Different scenes may include, but are not limited to, scenes formed from combinations of one or more of different road conditions, different environments, and the like. The speed, the lateral offset, the yaw information and the like of the sample navigation object can be obtained through the trajectory data actually generated by the sample navigation object in the navigation process and the navigation data output by the navigation server. The yaw information may include information whether the sample navigation object is yawing while navigating ahead of the target intersection.
In an optional implementation manner of this embodiment, the driving data includes historical navigation data of the sample navigation object before the target intersection; the fourth determination submodule is configured to:
determining a navigation direction of the sample navigation object in front of the target intersection based on the historical navigation data.
In this alternative implementation, the sample navigation object is an object that navigates through the target intersection within a preset time range, and thus the driving data of the sample navigation object may include, but is not limited to, historical navigation data that the navigation server outputs to the sample navigation object within the preset time range. The historical navigation data may include, but is not limited to, directions indicating that the sample navigation object should travel, such as right-hand drive-out, left-hand drive-out, straight travel, or the like. Therefore, the navigation direction of the sample navigation object before the target intersection can be obtained from the historical navigation data.
In an optional implementation manner of this embodiment, the driving data includes actual track data generated by the sample navigation object before the target intersection under navigation of historical navigation data; the fifth determination submodule includes:
a seventh determination submodule configured to determine, based on the actual trajectory data, the speed and lateral offset at which the sample navigation object travels at different preset distances before the target intersection;
an eighth determining sub-module configured to determine distribution information of the velocities and the lateral offsets at the different preset distances in the same navigation direction based on the velocities and the lateral offsets corresponding to a plurality of the sample navigation objects in the same navigation direction.
In this alternative implementation, as described above, the sample navigation objects include a plurality of objects, and the actual trajectory data of each sample navigation object may be used to determine its driving characteristics, such as speed and lateral offset, before the target intersection. The actual trajectory data may include, but is not limited to, the position information and speed of the sample navigation object at each trajectory point; based on the position information of each trajectory point and the position information of the target intersection, the distance between the sample navigation object and the target intersection at each trajectory point can be determined, and the correspondence between the speed of the sample navigation object and its distance relative to the target intersection can then be determined.
For a plurality of sample navigation objects in the same navigation direction, the distribution information in that navigation direction may be determined based on the driving characteristics, such as speed and lateral offset, corresponding to each object; the distribution information may be, for example, hotspot distribution information.
In an optional implementation manner of this embodiment, the driving characteristics include speeds and lateral offsets of the sample navigation object at different preset distances during the driving process before the target intersection; the training module comprises:
a first fitting sub-module configured to obtain speed curve models of speed and distance in different navigation directions based on the speed fitting of the sample navigation object in different navigation directions at different preset distances in front of the target intersection;
a second fitting submodule configured to fit to obtain displacement curve models of lateral offset and distance in different navigation directions based on the lateral offset of the sample navigation object in different navigation directions at different preset distances in front of the target intersection;
a ninth determination submodule configured to determine predicted relevant positions ahead of the target intersection in different navigation directions based on the speed curve model and the displacement curve model; when predicting the yaw behavior of the navigated object, the driving data of the navigated object is collected based on the predicted relevant positions and the yaw behavior is predicted from it.
In this optional implementation manner, the yaw prediction models can be trained respectively for different navigation directions in front of the target intersection, the same navigation direction of the same target intersection can correspond to the same yaw prediction model, and different navigation directions of the same target intersection can correspond to different yaw prediction models.
In some embodiments, during the yaw prediction model training process, driving characteristics can be extracted from the driving data of the sample navigation objects; these may include, but are not limited to, the speed and lateral offset of the sample navigation object at different preset distances while navigating in front of the target intersection. The different preset distances refer to the distances of the sample navigation object relative to the target intersection. In some embodiments, the driving data generated within a preset distance range of the target intersection, such as within 1 kilometer, may be collected over a period such as a day, and the preset distances may be positions taken at intervals within that range, for example one every 20 meters starting from 1 kilometer. In this way, speeds and lateral offset information at a plurality of preset distances can be extracted from the driving data.
For the speeds of the plurality of sample navigation objects in the same navigation direction at each preset distance, a final speed at that distance may be determined from the speed distribution information; for example, the speed that best represents the majority of the sample navigation objects at that distance can be found from the distribution of their speeds there. The speed and lateral offset at each preset distance in every navigation direction can be obtained in the same way.
A speed curve model in a navigation direction can then be fitted from the speeds at each preset distance in that direction, and a displacement curve model in that direction can be fitted from the lateral offsets at each preset distance in the same direction.
Further, when the yaw behavior is predicted online, it is possible to start collecting travel data from a prediction-related position set in advance and start predicting the yaw behavior. The predicted relevant positions can be determined based on a speed curve model and a displacement curve model when a yaw prediction model is trained, and different navigation directions can correspond to different predicted relevant positions.
In some embodiments, the predicted relevant location may include, but is not limited to, a start collection location, a start prediction location, and an end prediction location. The start collection position may be a position where collection of travel data can start, the start prediction position may be a position where prediction of yaw behavior can start, and the end prediction position may be a position where prediction needs to be ended. In some embodiments, the predicted relevant location may be expressed in terms of the distance of the sample navigation object relative to the target intersection, e.g., how many meters from the target intersection are the start collection location, the start predicted location, or the end predicted location, etc.
In some embodiments, the predicted relevant positions ahead of the target intersection may be defined in the yaw prediction models of the different navigation directions. In predicting yaw behavior, the travel data of the navigated object can be acquired based on the predicted relevant positions. For example, after the navigated object reaches the start collection position, collection of its travel data is started; after it reaches the start prediction position, extraction of the travel characteristics from the travel data collected so far is started, and the yaw behavior of the navigated object is predicted based on the yaw prediction model.
The lateral offset of the sample navigation object may be determined as the perpendicular distance, at the preset distance, between the sample navigation object and the center line of the road in front of the target intersection. The edge lines of the road in front of the target intersection are known, so the center line can be determined from the coordinates of points on the edge lines; the position of the sample navigation object at the preset distance can likewise be determined from its trajectory data. The lateral offset can then be computed from that position and the coordinates of the center line.
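A sketch of this computation as a point-to-polyline perpendicular distance, assuming planar x/y coordinates for the trajectory point and the center line:

    import math

    def lateral_offset(point, centerline):
        # Perpendicular distance from point (x, y) to the polyline
        # centerline [(x, y), ...] approximating the road center line.
        px, py = point
        best = float("inf")
        for (ax, ay), (bx, by) in zip(centerline, centerline[1:]):
            dx, dy = bx - ax, by - ay
            seg_len2 = dx * dx + dy * dy
            if seg_len2 == 0:
                d = math.hypot(px - ax, py - ay)  # degenerate segment
            else:
                # Project the point onto the segment, clamped to [0, 1].
                t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
                d = math.hypot(px - (ax + t * dx), py - (ay + t * dy))
            best = min(best, d)
        return best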
In an optional implementation manner of this embodiment, the ninth determining sub-module includes:
a tenth determination submodule configured to determine speed similarity curves between the speed curve models in different navigation directions and to determine displacement similarity curves between the displacement curve models in different navigation directions;
an eleventh determination submodule configured to determine a first predicted position based on an inflection point on the speed similarity curve and a second predicted position based on an inflection point on the displacement similarity curve;
a twelfth determination submodule configured to determine a prediction-related position based on the first predicted position and the second predicted position.
In this optional implementation, corresponding speed curve models and displacement curve models can be trained for the different navigation directions at the target intersection. To compare how sample navigation objects traveling in different navigation directions differ in both the speed dimension and the lateral offset dimension, the similarity between the speed curve models in different navigation directions and the similarity between the displacement curve models in different navigation directions can be determined.
For example, if the different navigation directions include driving straight and exiting to the right, a speed similarity curve between the speed curve model for driving straight and the speed curve model for exiting right can be determined; the difference in speed between the two types of navigation objects can then be read from this speed similarity curve. A displacement similarity curve can be obtained in the same way, and the difference in lateral offset between objects driving straight and objects exiting right can be compared on it.
The inflection points on the speed similarity curve and the displacement similarity curve can be understood as the position nodes at which the driving characteristics of navigation objects in different navigation directions change from similar to dissimilar. The prediction-related positions can therefore be determined based on these inflection points.
A prediction reference position can then be computed from the first predicted position and the second predicted position; for example, whichever of the first predicted position and the second predicted position is farther from the target intersection may be selected as the prediction reference position.
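A sketch of how the similarity curves and their inflection points might be computed; the absolute-difference metric and the second-difference inflection test are assumptions, since this disclosure fixes neither:

    import numpy as np

    def similarity_curve(model_a, model_b, dists):
        # One possible similarity curve: the absolute difference between two
        # fitted curve models evaluated at each distance (smaller = more similar).
        return np.abs(model_a(dists) - model_b(dists))

    def inflection_distance(dists, curve):
        # Approximate inflection point: where the second difference of the
        # curve changes sign, i.e. where the directions start to diverge.
        d2 = np.diff(curve, 2)
        changes = np.where(np.diff(np.sign(d2)) != 0)[0]
        return float(dists[changes[0] + 1]) if len(changes) else None

    # Prediction reference position: the farther (larger distance from the
    # intersection) of the speed-based and displacement-based positions.
    # ref_pos_m = max(first_predicted_pos_m, second_predicted_pos_m)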
In some embodiments, the prediction-related positions may include, but are not limited to, a start prediction position, a start collection position, and/or an end prediction position.
After the prediction reference position is determined, the prediction-related position may be determined based on the prediction reference position.
For example, the start prediction position may be the prediction reference position moved a certain distance farther from the target intersection, so that prediction begins a few seconds earlier. The start collection position may in turn be a certain distance farther out than the start prediction position, so that by the time the navigated object reaches the start prediction position, enough travel data has already been collected to extract the driving features and run the yaw prediction model. The end prediction position may be set at a preset distance from the target intersection, such as 200 meters.
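Deriving the three prediction-related positions from the prediction reference position might then be a matter of fixed offsets, reusing the PredictionPositions sketch above (the offset values below are purely illustrative):

    def derive_positions(ref_pos_m, pred_lead_m=100.0,
                         collect_lead_m=200.0, end_pred_m=200.0):
        # Prediction starts a lead distance beyond the reference position,
        # collection a further lead beyond that; prediction ends at a fixed
        # distance (e.g. 200 m) from the intersection.
        start_prediction = ref_pos_m + pred_lead_m
        start_collection = start_prediction + collect_lead_m
        return PredictionPositions(start_collection, start_prediction, end_pred_m)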
In an optional implementation manner of this embodiment, the driving characteristics further include historical yaw information of the sample navigation object; the device further comprises:
a first determination module configured to determine a speed similarity curve between the speed curve models in different navigation directions;
a second determination module configured to determine whether the target intersection is suitable for prediction of yaw behavior based on the speed similarity curve, the displacement curve model, and the historical yaw information.
In this optional implementation, after the speed curve models are obtained during model training, the similarity between the speed curve models in different navigation directions can be calculated. If the speed curve models are similar to one another, it may be difficult to distinguish the actual driving direction of a navigated object in front of the target intersection using the speed curve models alone. In that case, the displacement curve models and the historical yaw information can be used for further confirmation: if the displacement curve models in different navigation directions are also similar and/or the historical yaw rate in front of the target intersection is low, the target intersection can be considered unsuitable for prediction of yaw behavior and marked as an intersection without yaw prediction, so that when a navigated object actually uses the navigation service, no yaw behavior is predicted as it passes the target intersection. A target intersection that does not meet these conditions can be marked as suitable for prediction of yaw behavior, and yaw behavior is then predicted when a navigated object passes it. The historical yaw information of the sample navigation object may be an aggregate of the historical yaw information of the plurality of sample navigation objects in front of the target intersection, for example their average yaw rate.
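A sketch of this suitability check, with an assumed mean-absolute-difference similarity test and purely illustrative thresholds:

    import numpy as np

    def intersection_suitable(speed_models, disp_models, avg_yaw_rate, dists,
                              sim_threshold=0.5, yaw_rate_floor=0.01):
        # speed_models / disp_models: fitted curve models, one per direction.
        def all_similar(models):
            pairs = [(a, b) for i, a in enumerate(models) for b in models[i + 1:]]
            return all(np.mean(np.abs(a(dists) - b(dists))) < sim_threshold
                       for a, b in pairs)

        if not all_similar(speed_models):
            return True   # speed alone already separates the directions
        if all_similar(disp_models) or avg_yaw_rate < yaw_rate_floor:
            return False  # mark as an intersection without yaw prediction
        return True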
FIG. 8 is a schematic block diagram of an electronic device suitable for use in implementing a yaw guidance method and/or a yaw prediction model training method according to an embodiment of the present disclosure.
As shown in FIG. 8, the electronic device 800 includes a processing unit 801, which may be implemented as a CPU, GPU, FPGA, NPU, or similar processing unit. The processing unit 801 may execute the various processes of any of the method embodiments described above according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the electronic device 800. The processing unit 801, the ROM 802, and the RAM 803 are connected to one another by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input portion 806 including a keyboard, a mouse, and the like; an output section 807 including components such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and a speaker; a storage portion 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card, a modem, or the like. The communication section 809 performs communication processing via a network such as the internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive 810 as necessary, so that the computer program read out therefrom is mounted on the storage section 808 as necessary.
In particular, according to embodiments of the present disclosure, any of the methods described above with reference to embodiments of the present disclosure may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product comprising a computer program tangibly embodied on a machine-readable medium, the computer program comprising program code for performing any of the methods of the embodiments of the present disclosure. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809 and/or installed from the removable medium 811.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units or modules described in the embodiments of the present disclosure may be implemented by software or hardware. The units or modules described may also be provided in a processor, and the names of the units or modules do not in some cases constitute a limitation of the units or modules themselves.
As another aspect, the present disclosure also provides a computer-readable storage medium, which may be the computer-readable storage medium included in the apparatus in the above-described embodiment; or it may be a separate computer readable storage medium not incorporated into the device. The computer readable storage medium stores one or more programs for use by one or more processors in performing the methods described in the present disclosure.
The foregoing description covers only the preferred embodiments of the present disclosure and illustrates the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to technical solutions formed by the specific combination of the above features, and also encompasses other technical solutions formed by any combination of the above features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.

Claims (15)

1. A yaw guidance method, comprising:
acquiring running data of a navigated object in front of a target intersection;
extracting a driving feature and a navigation direction of the navigated object in front of the target intersection based on the driving data;
predicting the yaw behavior of the navigated object based on the navigation direction, the driving characteristics and a yaw prediction model corresponding to the target intersection;
and guiding the navigated object to drive in the correct direction when the yaw behavior of the navigated object is predicted.
2. The method of claim 1, wherein the travel data includes current navigation data of the navigated object ahead of the target intersection and generated trajectory data of the navigated object under navigation of the current navigation data; extracting the driving characteristics and the navigation direction of the navigated object in front of the target intersection based on the driving data, including:
determining a navigation direction of the navigated object in front of the target intersection based on the current navigation data;
determining a correspondence between a speed of the navigated object and a distance of the navigated object relative to the target intersection based on the generated trajectory data.
3. The method of claim 2, wherein the yaw prediction model comprises a speed curve model of speed versus distance obtained by fitting in advance; predicting the yaw behavior of the navigated object based on the navigation direction, the driving characteristics and a yaw prediction model corresponding to the target intersection, including:
determining the yaw behavior of the navigated object based on a degree of match between the correspondence of speed to distance and the speed curve model of speed versus distance.
4. The method according to any one of claims 1-3, wherein guiding the navigated object to travel in a correct direction when yaw behavior of the navigated object is predicted comprises:
guiding the navigated object to run in a correct navigation direction by adopting different guiding modes based on the probability of the navigated object to generate a yaw behavior; the different guide modes comprise one or more of different sound effects, different animations and different voice playing contents.
5. The method according to any one of claims 1-3, wherein acquiring the travel data of the navigated object in front of the target intersection comprises:
when the navigated object meets a yaw prediction condition, acquiring a prediction-related position in front of the target intersection, wherein the prediction-related position is included in the yaw prediction model;
acquiring driving data of the navigated object at preset time intervals based on the prediction-related position.
6. A yaw prediction model training method, comprising:
acquiring running data of a sample navigation object in front of a target intersection;
extracting, based on the travel data, travel characteristics of a sample navigation object traveling in different navigation directions in front of the target intersection;
training yaw prediction models in different navigation directions based on the driving characteristics in the different navigation directions so that the yaw prediction models can predict yaw behaviors of the navigated object in front of the target intersection.
7. The method of claim 6, wherein extracting travel characteristics of sample navigation objects traveling in different navigation directions ahead of the target intersection based on the travel data comprises:
determining a navigation direction of the sample navigation object in front of the target intersection based on the driving data;
determining distribution information of the sample navigation object driving characteristics;
determining a driving characteristic of the sample navigation object in the same navigation direction based on the distribution information.
8. The method of claim 7, wherein said travel data comprises actual trajectory data generated by said sample navigation object in front of said target intersection under navigation by historical navigation data; determining distribution information of the sample navigation object driving characteristics, comprising:
determining a speed and a lateral offset of the sample navigation object when the sample navigation object travels at different preset distances in front of the target intersection based on the actual trajectory data;
determining distribution information of the speeds and the lateral offsets at the different preset distances in the same navigation direction based on the speeds and the lateral offsets corresponding to the plurality of sample navigation objects in the same navigation direction.
9. The method of any of claims 6-8, wherein the driving characteristics include a speed and a lateral offset of the sample navigation object at different preset distances during driving in front of the target intersection; training yaw prediction models in different navigation directions based on the driving characteristics in the different navigation directions so that the yaw prediction models can predict yaw behaviors of the navigated object in front of the target intersection, comprising:
fitting based on the speeds of the sample navigation object at different preset distances in front of the target intersection in different navigation directions to obtain speed curve models of the speeds and the distances in different navigation directions;
fitting based on the lateral deviation of the sample navigation object in different navigation directions at different preset distances in front of the target intersection to obtain displacement curve models of the lateral deviation and the distance in different navigation directions;
determining prediction-related positions in front of the target intersection in different navigation directions based on the speed curve model and the displacement curve model; when the yaw behavior of the navigated object is predicted, the driving data of the navigated object is collected based on the prediction-related positions, and the yaw behavior is predicted.
10. The method of claim 9, wherein determining the prediction-related positions in front of the target intersection in different navigation directions based on the speed curve model and the displacement curve model comprises:
determining speed similarity curves between the speed curve models in different navigation directions and determining displacement similarity curves between the displacement curve models in different navigation directions;
determining a first predicted position based on an inflection point on the speed similarity curve and a second predicted position based on an inflection point on the displacement similarity curve;
determining a prediction-related position based on the first predicted position and the second predicted position.
11. The method according to any one of claims 6-8, 10, wherein the driving characteristics further include historical yaw information of the sample navigation object; the method further comprises the following steps:
determining a speed similarity curve between the speed curve models in different navigation directions;
determining whether the target intersection is suitable for prediction of yaw behavior based on the speed similarity curve, the displacement curve model, and the historical yaw information.
12. A yaw guide apparatus, comprising:
the navigation device comprises a first acquisition module, a second acquisition module and a navigation module, wherein the first acquisition module is configured to acquire the driving data of a navigated object in front of a target intersection;
a first extraction module configured to extract a travel feature and a navigation direction of the navigated object in front of the target intersection based on the travel data;
a prediction module configured to predict yaw behavior of the navigated object based on the navigation direction, the driving characteristics, and a yaw prediction model corresponding to the target intersection;
and the guiding module is configured to guide the navigated object to drive in a correct direction when the yaw behavior of the navigated object is predicted.
13. A yaw prediction model training apparatus, comprising:
the second acquisition module is configured to acquire the driving data of the sample navigation object before the target intersection;
a second extraction module configured to extract travel features of sample navigation objects traveling in different navigation directions ahead of the target intersection based on the travel data;
a training module configured to train yaw prediction models in different navigation directions based on the driving characteristics in the different navigation directions so that the yaw prediction models can predict yaw behavior of the navigated object in front of the target intersection.
14. An electronic device comprising a memory, a processor, and a computer program stored on the memory, wherein the processor executes the computer program to implement the method of any of claims 1-11.
15. A computer program product comprising computer instructions, wherein the computer instructions, when executed by a processor, implement the method of any one of claims 1-11.
CN202211165806.7A 2022-09-23 2022-09-23 Yaw guiding method, device, electronic equipment and computer program product Pending CN115585820A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211165806.7A CN115585820A (en) 2022-09-23 2022-09-23 Yaw guiding method, device, electronic equipment and computer program product

Publications (1)

Publication Number Publication Date
CN115585820A true CN115585820A (en) 2023-01-10

Family

ID=84778224

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211165806.7A Pending CN115585820A (en) 2022-09-23 2022-09-23 Yaw guiding method, device, electronic equipment and computer program product

Country Status (1)

Country Link
CN (1) CN115585820A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117705141A (en) * 2024-02-06 2024-03-15 腾讯科技(深圳)有限公司 Yaw recognition method, yaw recognition device, computer readable medium and electronic equipment
CN117705141B (en) * 2024-02-06 2024-05-07 腾讯科技(深圳)有限公司 Yaw recognition method, yaw recognition device, computer readable medium and electronic equipment

Similar Documents

Publication Publication Date Title
CN106114507B (en) Local path planning method and device for intelligent vehicle
CN101097152B (en) Navigation apparatuses
CN101097153B (en) Navigation apparatuses
CN101334287B (en) Vehicle-position-recognition apparatus and vehicle-position-recognition method
US20170343374A1 (en) Vehicle navigation method and apparatus
CN102208035B (en) Image processing system and position measuring system
CN106767873A (en) A kind of map-matching method based on space-time
CN113340318A (en) Vehicle navigation method, device, electronic equipment and storage medium
CN109345015B (en) Method and device for selecting route
CN111401255A (en) Method and device for identifying divergent intersection
CN115585820A (en) Yaw guiding method, device, electronic equipment and computer program product
CN108398701A (en) Vehicle positioning method and device
CN114475656B (en) Travel track prediction method, apparatus, electronic device and storage medium
CN108286973B (en) Running data verification method and device and hybrid navigation system
CN116295496A (en) Automatic driving vehicle path planning method, device, equipment and medium
CN112644487B (en) Automatic driving method and device
JP4923692B2 (en) Route guidance device
CN116088538B (en) Vehicle track information generation method, device, equipment and computer readable medium
CN114906154B (en) Method, system, electronic device and storage medium for judging vehicle driving road category
CN113720341B (en) Vehicle travel route generation method, system, computer device, and storage medium
CN114056337B (en) Method, device and computer program product for predicting vehicle running behavior
CN116524454A (en) Object tracking device, object tracking method, and storage medium
CN114822050A (en) Road condition identification method, electronic equipment and computer program product
CN114179805A (en) Driving direction determining method, device, equipment and storage medium
CN113762030A (en) Data processing method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination