CN113034958A - Route selection method and device - Google Patents

Route selection method and device

Info

Publication number
CN113034958A
Authority
CN
China
Prior art keywords
route
target
data
determining
state information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110231278.XA
Other languages
Chinese (zh)
Other versions
CN113034958B (en)
Inventor
马虎
王雪初
和卫民
雷琴辉
刘俊峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
iFlytek Co Ltd
Original Assignee
iFlytek Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by iFlytek Co Ltd filed Critical iFlytek Co Ltd
Priority to CN202110231278.XA
Publication of CN113034958A
Application granted
Publication of CN113034958B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/09Arrangements for giving variable traffic instructions
    • G08G1/0962Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
    • G08G1/0968Systems involving transmission of navigation instructions to the vehicle
    • G08G1/096833Systems involving transmission of navigation instructions to the vehicle where different aspects are considered when computing the route

Landscapes

  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention provides a route selection method and device. The method comprises the following steps: acquiring state information of persons in the vehicle and parameter information corresponding to candidate routes; fusing the state information and the parameter information to obtain a fusion result corresponding to each candidate route; and determining a target route based on the fusion results. By fusing the state information of the persons in the vehicle with the parameter information of the candidate routes and determining the target route from the candidates based on the fusion results, the planned route better fits the physical and mental states of the persons in the vehicle, improving driving comfort and safety.

Description

Route selection method and device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a route selection method and a route selection device.
Background
With the development of artificial intelligence, automatic driving and driver-assistance technologies are gradually becoming part of daily life. The core link of both is the planning of the vehicle's driving route, and how to obtain a suitable route is an important proposition for these technologies.
At present, route-planning schemes typically give several alternative routes according to the start and end points input by the user, combined with physical parameters such as vehicle type, road restrictions and weather conditions, and the user selects a preferred route from them. However, the routes planned by current schemes offer poor comfort and a low safety factor.
Disclosure of Invention
The invention provides a route selection method and a route selection device to overcome the defects of poor comfort and low safety factor of routes planned in the prior art, so that the planned route better fits the physical and mental states of the persons in the vehicle and driving comfort and safety are improved.
The invention provides a route selection method, which comprises the following steps: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
According to the route selection method provided by the invention, the step of acquiring the state information of people in the vehicle comprises the following steps: acquiring at least one of audio data and image data of people in the vehicle; determining the status information based on at least one of the audio data and the image data.
According to the route selection method provided by the invention, the step of acquiring the state information of people in the vehicle further comprises the following steps: acquiring target limit data; the determining the state information based on at least one of the audio data and the image data includes: determining the status information based on the target limit data and at least one of the audio data and the image data.
According to a route selection method provided by the present invention, the determining the status information based on the target restriction data and at least one of the audio data and the image data includes: performing frequency domain vectorization processing on the audio data to obtain audio characteristics, and/or performing time sequence vectorization processing on the image data to obtain image characteristics; performing coding vectorization processing on the target limiting data to obtain target limiting characteristics; determining the state information based on the target limiting feature and at least one of the audio feature and the image feature.
According to a routing method provided by the present invention, the determining the state information based on the target restriction feature and at least one of the audio feature and the image feature includes: fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features; obtaining coded data based on the initial marking characteristic and the target fusion characteristic; and taking the first node information of the coded data as the state information.
According to a route selection method provided by the present invention, the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: and splicing the state information and the parameter information to obtain the fusion result.
According to a route selection method provided by the invention, the determining a target route based on the fusion result comprises the following steps: determining scoring information for the candidate route based on the fusion result; determining a target route from the candidate routes based on the scoring information.
The present invention also provides a route selection device, comprising: the acquisition module is used for acquiring the state information of people in the vehicle and the parameter information corresponding to the candidate route; the fusion module is used for fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and the determining module is used for determining a target route based on the fusion result.
The invention also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, wherein the processor implements the steps of any of the routing methods described above when executing the computer program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the routing methods described above.
Drawings
To more clearly illustrate the technical solutions of the present invention and of the prior art, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a routing method provided by the present invention;
FIG. 2 is a flow chart illustrating step 110 of the routing method provided by the present invention;
FIG. 3 is a schematic flow chart of step 130 of the routing method provided by the present invention;
FIG. 4 is one of the schematic algorithmic diagrams of the routing method provided by the present invention;
FIG. 5 is a second schematic diagram of the algorithm of the route selection method provided by the present invention;
FIG. 6 is a third schematic diagram of the algorithm of the route selection method provided by the present invention;
FIG. 7 is a schematic diagram of a routing device according to the present invention;
FIG. 8 is a schematic diagram of an acquisition module of the routing device according to the present invention;
FIG. 9 is a schematic diagram of a determining module of the routing device provided by the present invention;
fig. 10 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The routing method and apparatus of the present invention are described below with reference to fig. 1-10.
As shown in fig. 1, the present invention provides a route selection method, including the following steps 110 to 130.
And step 110, acquiring state information of people in the vehicle and parameter information corresponding to the candidate route.
It can be understood that route planning is required while the vehicle is driving. At present, routes are mainly planned directly from the parameter information of the route, and the user chooses among the planned candidate routes by himself or herself. For example, planning rules such as shortest distance, shortest time, congestion avoidance and highway priority may be offered as options for the user, which determine the path parameter information used for planning. This parameter information may include real-time parameters of the vehicle or road, such as road-condition information, traffic-flow information, time information, distance information and traffic-light count information.
For example, the candidate routes may be determined as follows: obtain the start and end positions; determine a navigation area from them; acquire the traffic-light information for each intersection in the navigation area; and determine the shortest route from start to end according to the traffic-light information at each intersection and the current driving speed. By combining the real-time traffic-light information of every intersection in the navigation area into the planning of the navigation route, the user's actual driving conditions are simulated as closely as possible, so the shortest route through each intersection can be calculated accurately. Candidate routes are generated from the shortest routes from start to end and the user is notified of them, so a navigation route can be recommended to the user with the route's travel time as a reference.
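The traffic-light-aware planning described above can be sketched as follows. This is a minimal illustration with hypothetical per-route data; the patent does not specify the exact computation, so the travel-time model (driving time plus per-light waits) is an assumption:

```python
def estimated_travel_time(distance_m, speed_mps, light_waits_s):
    """Estimate route travel time: driving time at the current speed
    plus the expected wait at each traffic light on the route."""
    return distance_m / speed_mps + sum(light_waits_s)

def shortest_route(routes):
    """Pick the candidate with the smallest estimated travel time."""
    return min(routes, key=lambda r: estimated_travel_time(
        r["distance_m"], r["speed_mps"], r["light_waits_s"]))

# Two hypothetical candidates between the same start and end points.
routes = [
    {"name": "A", "distance_m": 5000, "speed_mps": 10, "light_waits_s": [30, 45, 20]},
    {"name": "B", "distance_m": 6000, "speed_mps": 12, "light_waits_s": [15]},
]
```

Here route A costs 595 s against route B's 515 s, so B would be recommended even though it is longer in distance.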
According to the starting point and the end point input by the user, a plurality of candidate routes can be planned by combining the pre-stored map data, and each candidate route corresponds to the parameter information.
However, current route-planning methods do not consider the actual condition of the persons in the vehicle. For example, persons with special physical conditions, such as pregnant women, the elderly, children or patients, may be riding in the vehicle. If the planned route is congested or the road surface is uneven, the vehicle may repeatedly stop and start, or jolt, during driving, making the ride extremely uncomfortable and harmful to the health of such passengers. This is a technical defect of the prior art.
According to the embodiment of the invention, the state information of the personnel in the vehicle can be acquired from the interactive information input by the personnel in the vehicle, can also be acquired from the human body parameters monitored by the sensor in the vehicle, and can also be extracted from the voice or video monitored by the electronic equipment in the vehicle, and the state information of the personnel in the vehicle can be used for reflecting the human body states of the personnel in the vehicle, such as the physical condition, the age condition, the emotional condition and the like.
And step 120, fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route.
It can be understood that there may be multiple candidate routes. The parameter information corresponding to each candidate route is fused with the state information of the persons in the vehicle acquired in the preceding step: corresponding, matched features may be extracted from the state information according to the parameter information and fused at the feature level, yielding the fusion results, each of which corresponds to one candidate route.
That is to say, each candidate route corresponds to one fusion result, which can reflect both the real-time parameters of the vehicle or road (such as road-condition, traffic-flow, time, distance and traffic-light-count information) and the physical and mental condition of the persons in the vehicle (such as physical condition, age and emotional state).
And step 130, determining a target route based on the fusion result.
It is to be understood that, since each candidate route corresponds to a fusion result, the candidate routes may be evaluated according to the fusion results. For example, the level of each candidate route may be determined from its fusion result: the candidates may be graded as high, medium or low according to comfort, or ranked as first level, second level, third level, and so on. Alternatively, the score of each candidate route may be determined from its fusion result, scoring the candidates by comfort, for example from 0 to 100; a given candidate route might score 85.
A high-level route among the candidates can be taken as the target route. If there are multiple high-level candidates, these target routes can be presented to the user, who selects among them; alternatively, the highest-level candidate can be used directly as the target route.
The highest scoring route of the candidate routes may also be used as the target route.
The unique target route may be presented directly to the user or the vehicle may be controlled to drive automatically directly according to the unique target route.
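The level-based selection described above can be sketched as follows. The grading thresholds are assumptions for illustration (the patent does not fix them), and the rule of returning every top-level candidate mirrors the case where several high-level routes are shown to the user:

```python
def grade(comfort_score):
    """Map a 0-100 comfort score to a level; the thresholds (80, 50)
    are assumed for illustration only."""
    if comfort_score >= 80:
        return "high"
    if comfort_score >= 50:
        return "medium"
    return "low"

def target_routes(candidates):
    """Return every route at the best level present; if several candidates
    share the top level, all can be shown to the user for a final choice."""
    order = {"high": 0, "medium": 1, "low": 2}
    best = min((grade(c["score"]) for c in candidates), key=order.__getitem__)
    return [c for c in candidates if grade(c["score"]) == best]

sample_candidates = [
    {"name": "r1", "score": 85},  # hypothetical comfort scores
    {"name": "r2", "score": 90},
    {"name": "r3", "score": 40},
]
```

With these sample scores, r1 and r2 are both "high" level, so both would be offered; a single-route policy could then pick the higher-scoring of the two.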
According to the route selection method provided by the invention, the state information of the personnel in the vehicle and the parameter information corresponding to the candidate route are fused, the target route is determined from the candidate route based on the fusion result, the planned route is more suitable for the physical and mental states of the personnel in the vehicle, and the driving comfort and the safety can be improved.
As shown in fig. 2, in some embodiments, step 110 of acquiring the state information of persons in the vehicle includes the following steps 111 to 112.
And step 111, acquiring at least one of audio data and image data of people in the vehicle.
It is understood that a microphone may be installed in the vehicle to collect sound signals, such as the conversation of persons in the vehicle. The sound signals collected by the microphone may be transmitted to a controller of the vehicle, where the audio data of the persons in the vehicle is formed.
A camera can also be installed in the vehicle to capture images or video (video being images captured at a certain frame rate). The camera can capture the posture, expression and movements of persons in the vehicle, and the controller can acquire the image data of the persons in the vehicle from the camera.
Here, the controller may acquire both the audio data and the image data, or may acquire only one of the audio data and the image data.
Step 112, determining status information based on at least one of the audio data and the image data.
Here, the state information of the persons in the vehicle may be extracted from the audio data. For example, the audio data may include a remark such as "I feel sleepy and want to sleep"; through voice recognition it can be confirmed that a person in the vehicle is sleepy and wants to rest, and the person can be determined to be in a drowsy state.
Certainly, the state information of the people in the vehicle can also be extracted from the image data, for example, the image data can include a face image of the people in the vehicle, and the age of the people in the vehicle can be obtained through face recognition, for example, a person in the vehicle has an old person who is 70 years old, and here, it can be determined that the person in the vehicle has the old person.
Of course, if the controller acquires both the audio data and the image data, the audio data and the image data can be fused to obtain audio-video fusion data, and state information of more people in the vehicle can be extracted from the audio-video fusion data.
The state information of the persons in the vehicle can be obtained through the audio data and/or the video data, so that the fusion result contains more characteristics related to the persons in the vehicle, the obtained target route can better accord with the physical and mental states of the persons in the vehicle, and the accuracy of determining the target route can be improved.
In some embodiments, the step 110 of acquiring the status information of the vehicle interior personnel further includes: target limit data is acquired.
The target limit data is used to correct the audio data and image data. It can be text data, or key information extracted from the conversation of persons in the vehicle before the vehicle starts driving or while it is driving. The target limit data may be typed in manually by the user or spoken; the controller obtains the text data from the manual or voice input.
For example, before the vehicle starts driving, the driver manually inputs "go through the avenue," which is a defined condition for route planning.
For example, when a person in the vehicle speaks to "refuel the vehicle when passing through a gas station a" during the driving of the vehicle, this is necessary information of the passing point.
The step 112 of determining the state information based on at least one of the audio data and the image data includes: the state information is determined based on the target limit data and at least one of the audio data and the image data.
It is to be understood that the status information may be derived from the audio data and the target limit data, may be derived from the image data and the target limit data, or may be derived collectively from the audio data, the image data, and the target limit data.
The target limit data can correct the audio data or video data; that is, the target limit data takes priority over them. The audio and video data are a record of conditions in the vehicle during driving, and their degree of association with the driving task is weaker than that of the target limit data.
The audio data and the video data are corrected by adopting the target limiting data, so that the planned target route can be ensured to better meet the actual requirements of users, and key driving tasks are not missed.
As shown in fig. 4, in some embodiments, determining the status information based on the target limit data and at least one of the audio data and the image data includes:
Frequency-domain vectorization processing is carried out on the audio data to obtain audio features, and/or time-sequence vectorization processing is carried out on the image data to obtain image features.
Frequency-domain vectorization can be applied to the audio data audio_i, that is, vectorization using frequency-domain features, modeled with a Long Short-Term Memory (LSTM) network: audio_i is input into the LSTM, which outputs the audio feature A_i.
Time-series vectorization can be applied to the image data video_i: video_i is first encoded and dimension-reduced by a CNN (convolutional neural network), then modeled as a time series by an LSTM; the last node of the LSTM output layer is taken, outputting the image feature V_i.
Coding vectorization processing is carried out on the target limit data to obtain the target limit feature.
The target limit data may be text data L_i; the audio data audio_i may correspond to L_i, and the image data video_i may also correspond to L_i.
The text data L_i can be encoded directly through a BERT Input Embedding layer to obtain the target limit feature E_i, which can serve as the text feature. This encoding-vectorization process can be realized as a lookup of the text against embedding parameters obtained by model training.
The state information is determined based on the target limit feature and at least one of the audio feature and the image feature.
The audio feature and the target restriction feature may be combined to obtain the state information, the image feature and the target restriction feature may be combined to obtain the state information, and the audio feature, the image feature and the target restriction feature may be combined to obtain the state information.
In some embodiments, determining the state information based on the target limiting feature and at least one of the audio feature and the image feature comprises:
fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features;
the target may be restricted to a characteristic EiAudio feature AiAnd image feature ViAs input of the Multi-Modal Merge Gate, the target fusion feature is output
Figure BDA0002958309790000091
As shown in FIG. 5, in some embodiments, the target limit feature E_i, the audio feature A_i and the image feature V_i may first be fused by an Attention Gate algorithm to obtain a reference fusion feature H̃_i.
The Attention Gate algorithm can be of the form:

g_v = σ(W_v [E_i; V_i] + b_v)
g_a = σ(W_a [E_i; A_i] + b_a)
H̃_i = g_v ⊙ V_i + g_a ⊙ A_i

where E_i is the target limit feature, A_i the audio feature and V_i the image feature, and the remaining parameters are trainable; g_v is the image-feature gating function and g_a the audio-feature gating function, whose role is the selection and filtering of the image and audio features.
The reference fusion feature H̃_i and the target limit feature E_i are then fused by a Merge algorithm to obtain the target fusion feature H_i. The formula of the Merge algorithm may be:

H_i = α · H̃_i + β · E_i

where α and β are both hyperparameters.
Obtaining coded data based on the initial marking characteristic and the target fusion characteristic; the first node information of the encoded data is used as the state information.
It will be appreciated that the initial mark feature E_CLS can be obtained from an artificially constructed special initial mark CLS, which can be a constant; CLS can be encoded directly through the BERT Input Embedding layer to obtain E_CLS.
The initial mark feature E_CLS and the target fusion features can be fed to a BERT Encoder for feature encoding, obtaining the encoded data T_1 … T_m. The first node information C of the encoded data, i.e. the first node of the BERT Encoder model's output layer, is then taken. Because the BERT Encoder has a bidirectional sequence-modeling characteristic, C can capture global information, that is, the overall characteristics of the text, image and audio; C, as the global feature vector obtained by modeling the whole multi-modal feature sequence, is used as the state information of the persons in the vehicle.
It should be noted that BERT may be replaced by other sequence-modeling models such as LSTM and GPT (Generative Pre-Training); this embodiment is not limited in this respect.
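The idea of prepending a CLS mark and keeping the first encoded node as a global summary can be illustrated with a stand-in encoder. The toy bidirectional pass below is not a real BERT; it merely shows why the first node can carry sequence-wide information:

```python
def toy_bidirectional_encode(features):
    """Stand-in for a bidirectional sequence encoder: each output position
    mixes its own feature with the sequence mean, so every node (including
    the first) sees the whole sequence."""
    mean = [sum(col) / len(features) for col in zip(*features)]
    return [[(f + m) / 2 for f, m in zip(feat, mean)] for feat in features]

def state_information(cls_feature, fusion_features):
    """Prepend the initial mark feature, encode, and keep the first node C."""
    encoded = toy_bidirectional_encode([cls_feature] + fusion_features)
    return encoded[0]
```

Even with a zero CLS vector, the first output node ends up nonzero because it has absorbed the other positions, which is the property the method relies on.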
In some embodiments, the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: and splicing the state information and the parameter information to obtain a fusion result.
It can be understood that the road-condition data R_i of a candidate route contains the feature information of that route, such as real-time information: road conditions, traffic flow, time, distance and the number of traffic lights. The multiple features in R_i are feature-encoded by a Route Feature Encoder to obtain the parameter information F_i. The Route Feature Encoder mainly quantizes and vectorizes information such as the road conditions and traffic flow of the candidate road, then performs feature-engineering construction through a neural network.
The parameter information F_i of each candidate route can be concatenated with the state information C and passed through a fully connected layer to obtain the fusion result. The finally obtained fusion results are thus in one-to-one correspondence with the candidate routes.
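The splice-then-project fusion can be sketched as follows. This is a minimal stand-in in which the fully connected layer is reduced to a fixed weight vector producing one scalar per route; the real layer and its trained weights are not specified by the patent:

```python
def fuse(state_C, route_F, weights=None):
    """Concatenate the in-vehicle state vector C with one route's parameter
    feature F (the splicing step), then apply a toy fully connected
    projection to obtain a scalar fusion result for that route."""
    x = list(state_C) + list(route_F)  # splicing step
    if weights is None:
        weights = [1.0] * len(x)       # stand-in for trained weights
    return sum(w * v for w, v in zip(weights, x))

def fuse_all(state_C, route_features):
    """One fusion result per candidate route, as in the description."""
    return [fuse(state_C, F) for F in route_features]
```

The same state vector C is reused for every route, so only the route features differentiate the fusion results.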
As shown in fig. 3, in some embodiments, step 130 of determining the target route based on the fusion result includes the following steps 131 to 132.
In step 131, based on the fusion result, scoring information of the candidate route is determined.
It can be understood that the candidate routes may be scored according to the fusion result corresponding to each candidate route, obtaining the scoring information Y_i. Y_i may be presented as a score value, for example from 0 to 100; the score of a given candidate route might be 85.
Step 132, based on the scoring information, determines a target route from the candidate routes.
It is understood that the candidate route may be selected according to the scoring information, such as the candidate route with the highest scoring as the target route.
Determining the target route through scoring information is relatively objective: the candidate routes are comprehensively evaluated in a quantitative manner, which improves the objectivity of target-route determination.
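Steps 131 to 132 can be sketched as a score-then-argmax selection. The scoring head below is a toy clamp (the patent does not detail the real model's scoring head), and the route names and fusion values are hypothetical:

```python
def score(fusion_result):
    """Toy scoring head: clamp the fusion result into a 0-100 comfort score."""
    return max(0.0, min(100.0, fusion_result))

def pick_target(candidates):
    """Score each candidate from its fusion result (step 131) and take the
    highest-scoring candidate as the target route (step 132)."""
    return max(candidates, key=lambda c: score(c["fusion"]))

scored_candidates = [
    {"name": "riverside", "fusion": 85.0},  # hypothetical fusion results
    {"name": "highway", "fusion": 72.0},
]
```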
Of course, real driving-scene data of users can be collected as training data: when a user uses the navigation product and selects a destination, the recommended routes given by the system are R_i; the user's request text L_i is acquired, along with the audio data audio_i and image data video_i obtained during the trip; the route selected by the user is Y_i. The correct route together with the multi-modal input forms a positive training pair with label 1, and each route not selected by the user together with the multi-modal input forms a negative training pair with label 0. The loss function can be:

Loss = −Σ_i [ Y_i · log Ŷ_i + (1 − Y_i) · log(1 − Ŷ_i) ]

where Y_i is the labeling result corresponding to the user's selection and Ŷ_i is the model's predicted result.
It should be noted that, as shown in fig. 6, a Route Feature Encoder model may be adopted when acquiring the candidate routes and their corresponding parameter information, where R_i is a candidate route and Light_i, Time_i and Dist_i are the number of traffic lights, the travel time and the distance of each route. Other road condition information, such as traffic flow, whether an accident has occurred and whether there is construction, may also be used as internal features of R_i.

Quantization: Light_q = Light_i / Light_max

where Light_max, the maximum number of traffic lights, serves as the quantization factor for the traffic-light count feature, yielding the quantized traffic-light score Light_q; the other features are quantized in the same way.

Feature Embedding: FL_i = Feature_Embedding_lookup(Light_q). A set of embeddings is provided for each feature dimension, and these parameters are trained jointly with the model; the other features are embedded in the same way, yielding FT_i and FD_i.

After splicing, FL_i; FT_i; FD_i are passed through a fully connected layer to obtain the final route information feature:

FR_i = FC([FL_i; FT_i; FD_i])
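The quantization, embedding lookup, and splicing steps of the Route Feature Encoder can be sketched as follows; the bucket count, embedding size, and the use of plain concatenation in place of the final fully connected layer are simplifying assumptions, not details from the patent:

```python
import random

random.seed(0)

NUM_BUCKETS = 10   # quantized levels per feature (assumption)
EMB_DIM = 4        # embedding size per feature (assumption)

# One embedding table per feature dimension (light, time, distance);
# in the real system these parameters are trained jointly with the model.
embedding_tables = {
    feat: [[random.gauss(0, 0.1) for _ in range(EMB_DIM)]
           for _ in range(NUM_BUCKETS + 1)]
    for feat in ("light", "time", "dist")
}

def quantize(value, max_value):
    """Light_q = Light_i / Light_max, mapped to an integer bucket."""
    return min(int(value / max_value * NUM_BUCKETS), NUM_BUCKETS)

def encode_route(light, time, dist, light_max, time_max, dist_max):
    """Quantize each feature, look up its embedding, and splice the three
    embeddings; a real encoder would add a fully connected layer here."""
    fl = embedding_tables["light"][quantize(light, light_max)]
    ft = embedding_tables["time"][quantize(time, time_max)]
    fd = embedding_tables["dist"][quantize(dist, dist_max)]
    return fl + ft + fd  # concatenation stands in for FC([FL; FT; FD])

vec = encode_route(light=8, time=25, dist=12.5,
                   light_max=20, time_max=60, dist_max=30)
print(len(vec))  # 12
```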
The route selection device provided by the present invention is described below; the route selection device described below and the route selection method described above may be referred to in correspondence with each other.
As shown in fig. 7, the present invention also provides a route selection device, including: an acquisition module 710, a fusion module 720, and a determination module 730.
The obtaining module 710 is configured to obtain state information of people in the vehicle and parameter information corresponding to the candidate route.
And the fusion module 720 is configured to fuse the state information and the parameter information to obtain a fusion result corresponding to the candidate route.
A determining module 730, configured to determine the target route based on the fusion result.
As shown in fig. 8, in some embodiments, the obtaining module 710 includes: a first acquisition submodule 810 and a second acquisition submodule 820.
The first obtaining sub-module 810 is configured to obtain at least one of audio data and image data of a person in the vehicle.
A second obtaining sub-module 820 for determining the status information based on at least one of the audio data and the image data.
In some embodiments, the obtaining module 710 is further configured to: acquiring target limit data; the second obtaining sub-module is further configured to: determining the status information based on the target limit data and at least one of the audio data and the image data.
In some embodiments, the second acquisition sub-module further comprises a first processing sub-unit, a second processing sub-unit, and a third processing sub-unit.
The first processing subunit is configured to perform frequency domain vectorization processing on the audio data to obtain an audio feature, and/or perform timing sequence vectorization processing on the image data to obtain an image feature.
And the second processing subunit is used for carrying out coding vectorization processing on the target limiting data to obtain a target limiting characteristic.
A third processing subunit for determining the state information based on the target limiting feature and at least one of the audio feature and the image feature.
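The patent does not specify how the frequency domain vectorization of the audio data is realized; one common realization, sketched here purely as an assumption, is a framewise DFT magnitude spectrum:

```python
import cmath

def frequency_vectorize(audio_frame):
    """Naive DFT magnitude spectrum of one audio frame: a minimal
    stand-in for frequency domain vectorization of audio data."""
    n = len(audio_frame)
    spectrum = []
    for k in range(n // 2):  # keep the non-redundant half of the spectrum
        coeff = sum(audio_frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        spectrum.append(abs(coeff) / n)
    return spectrum

# A pure tone at bin 1 of an 8-sample frame.
frame = [cmath.cos(2 * cmath.pi * t / 8).real for t in range(8)]
feat = frequency_vectorize(frame)
print(max(range(len(feat)), key=lambda k: feat[k]))  # 1
```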
In some embodiments, the third processing subunit is further to: fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features; obtaining coded data based on the initial marking characteristic and the target fusion characteristic; and taking the first node information of the coded data as the state information.
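Taking "the first node information of the coded data as the state information" after prepending an initial marking feature resembles the classification-token pattern: a marker is placed at the front of the fused sequence and the encoder output at that position is read out. A minimal sketch under that assumption (the averaging "encoder" is a toy stand-in, not the patent's model):

```python
def encode_with_initial_token(target_fusion_features, initial_token):
    """Prepend an initial marking feature to the fused feature sequence,
    run a toy encoder, and take the first node's output as the state
    information, mirroring the classification-token pattern."""
    sequence = [initial_token] + target_fusion_features
    # Toy encoder: each output position averages itself with its
    # neighbours, so the first node aggregates sequence context.
    dim = len(sequence[0])
    encoded = []
    for i in range(len(sequence)):
        window = sequence[max(0, i - 1):i + 2]
        encoded.append([sum(v[d] for v in window) / len(window)
                        for d in range(dim)])
    return encoded[0]  # first node information -> state information

state = encode_with_initial_token([[1.0, 2.0], [3.0, 4.0]], [0.0, 0.0])
print(state)  # [0.5, 1.0]
```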
In some embodiments, the fusion module 720 is further configured to perform splicing processing on the state information and the parameter information to obtain the fusion result.
As shown in fig. 9, in some embodiments, the determining module 730 includes: a first determination submodule 910 and a second determination submodule 920.
A first determining sub-module 910, configured to determine scoring information of the candidate route based on the fusion result.
A second determining submodule 920, configured to determine a target route from the candidate routes based on the scoring information.
Fig. 10 illustrates a physical structure diagram of an electronic device. As shown in fig. 10, the electronic device may include: a processor (processor) 1010, a communication interface (Communications Interface) 1020, a memory (memory) 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. The processor 1010 may invoke logic instructions in the memory 1030 to perform a route selection method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
Furthermore, the logic instructions in the memory 1030 may be implemented as software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the route selection method provided above, the method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
In yet another aspect, the present invention also provides a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the route selection method provided above, the method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of route selection, comprising:
acquiring state information of people in the vehicle and parameter information corresponding to the candidate route;
fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route;
and determining a target route based on the fusion result.
2. The routing method according to claim 1, wherein the acquiring of the state information of the vehicle interior personnel comprises:
acquiring at least one of audio data and image data of people in the vehicle;
determining the status information based on at least one of the audio data and the image data.
3. The routing method according to claim 2, wherein the acquiring of the state information of the person in the vehicle further comprises: acquiring target limit data;
the determining the state information based on at least one of the audio data and the image data includes:
determining the status information based on the target limit data and at least one of the audio data and the image data.
4. The routing method according to claim 3, wherein the determining the status information based on the target limitation data and at least one of the audio data and the image data comprises:
performing frequency domain vectorization processing on the audio data to obtain audio characteristics, and/or performing time sequence vectorization processing on the image data to obtain image characteristics;
performing coding vectorization processing on the target limiting data to obtain target limiting characteristics;
determining the state information based on the target limiting feature and at least one of the audio feature and the image feature.
5. The routing method of claim 4, wherein the determining the status information based on the target limiting feature and at least one of the audio feature and the image feature comprises:
fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features;
obtaining coded data based on the initial marking characteristic and the target fusion characteristic;
and taking the first node information of the coded data as the state information.
6. The method according to any one of claims 1 to 5, wherein the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes:
and splicing the state information and the parameter information to obtain the fusion result.
7. The route selection method according to any one of claims 1 to 5, wherein the determining a target route based on the fusion result includes:
determining scoring information for the candidate route based on the fusion result;
determining a target route from the candidate routes based on the scoring information.
8. A routing device, comprising:
the acquisition module is used for acquiring the state information of people in the vehicle and the parameter information corresponding to the candidate route;
the fusion module is used for fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route;
and the determining module is used for determining a target route based on the fusion result.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the routing method according to any of claims 1 to 7 are implemented by the processor when executing the program.
10. A non-transitory computer readable storage medium, having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the routing method according to any one of claims 1 to 7.
CN202110231278.XA 2021-03-02 2021-03-02 Route selection method and device Active CN113034958B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110231278.XA CN113034958B (en) 2021-03-02 2021-03-02 Route selection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110231278.XA CN113034958B (en) 2021-03-02 2021-03-02 Route selection method and device

Publications (2)

Publication Number Publication Date
CN113034958A true CN113034958A (en) 2021-06-25
CN113034958B CN113034958B (en) 2023-01-17

Family

ID=76465473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110231278.XA Active CN113034958B (en) 2021-03-02 2021-03-02 Route selection method and device

Country Status (1)

Country Link
CN (1) CN113034958B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538810A (en) * 2010-12-14 2012-07-04 国际商业机器公司 Human emotion metrics for navigation plans and maps
KR20140123281A (en) * 2013-04-12 2014-10-22 한국전자통신연구원 Route guidance apparatus based on emotion and method thereof
CN104848869A (en) * 2015-05-29 2015-08-19 李涛 Map navigation method, map navigation device and map navigation system
US20170030726A1 (en) * 2015-07-31 2017-02-02 International Business Machines Corporation Selective human emotion metrics for navigation plans and maps
US20180174457A1 (en) * 2016-12-16 2018-06-21 Wheego Electric Cars, Inc. Method and system using machine learning to determine an automotive driver's emotional state
CN108534795A (en) * 2018-06-26 2018-09-14 百度在线网络技术(北京)有限公司 Selection method, device, navigation equipment and the computer storage media of navigation routine
CN108801282A (en) * 2018-06-13 2018-11-13 新华网股份有限公司 Air navigation aid, device and the computing device of vehicle traveling
CN109211254A (en) * 2017-07-06 2019-01-15 新华网股份有限公司 information display method and device
CN110370275A (en) * 2019-07-01 2019-10-25 夏博洋 Mood chat robots based on Expression Recognition
DE102020003188A1 (en) * 2020-05-28 2020-08-13 Daimler Ag Method for the protection of personal data of a vehicle occupant

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102538810A (en) * 2010-12-14 2012-07-04 国际商业机器公司 Human emotion metrics for navigation plans and maps
US20120191338A1 (en) * 2010-12-14 2012-07-26 International Business Machines Corporation Human Emotion Metrics for Navigation Plans and Maps
KR20140123281A (en) * 2013-04-12 2014-10-22 한국전자통신연구원 Route guidance apparatus based on emotion and method thereof
CN104848869A (en) * 2015-05-29 2015-08-19 李涛 Map navigation method, map navigation device and map navigation system
US20170030726A1 (en) * 2015-07-31 2017-02-02 International Business Machines Corporation Selective human emotion metrics for navigation plans and maps
US20180174457A1 (en) * 2016-12-16 2018-06-21 Wheego Electric Cars, Inc. Method and system using machine learning to determine an automotive driver's emotional state
CN109211254A (en) * 2017-07-06 2019-01-15 新华网股份有限公司 information display method and device
CN108801282A (en) * 2018-06-13 2018-11-13 新华网股份有限公司 Air navigation aid, device and the computing device of vehicle traveling
CN108534795A (en) * 2018-06-26 2018-09-14 百度在线网络技术(北京)有限公司 Selection method, device, navigation equipment and the computer storage media of navigation routine
CN110370275A (en) * 2019-07-01 2019-10-25 夏博洋 Mood chat robots based on Expression Recognition
DE102020003188A1 (en) * 2020-05-28 2020-08-13 Daimler Ag Method for the protection of personal data of a vehicle occupant

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAN Jingjie et al.: "Bimodal Emotion Recognition Based on Facial Expression and Speech", Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) *

Also Published As

Publication number Publication date
CN113034958B (en) 2023-01-17

Similar Documents

Publication Publication Date Title
US11302311B2 (en) Artificial intelligence apparatus for recognizing speech of user using personalized language model and method for the same
US11200467B2 (en) Artificial intelligence apparatus and method for recognizing object included in image data
CN110214107B (en) Autonomous vehicle providing driver education
US11282522B2 (en) Artificial intelligence apparatus and method for recognizing speech of user
US20190087734A1 (en) Information processing apparatus and information processing method
CN111402925B (en) Voice adjustment method, device, electronic equipment, vehicle-mounted system and readable medium
US10901503B2 (en) Agent apparatus, agent control method, and storage medium
US11195528B2 (en) Artificial intelligence device for performing speech recognition
US20210092086A1 (en) System and method for facilitating driver communication via an audio centric network
US11727451B2 (en) Implementing and optimizing safety interventions
US20190371297A1 (en) Artificial intelligence apparatus and method for recognizing speech of user in consideration of user's application usage log
US20200111489A1 (en) Agent device, agent presenting method, and storage medium
CN115205729A (en) Behavior recognition method and system based on multi-mode feature fusion
KR102403355B1 (en) Vehicle, mobile for communicate with the vehicle and method for controlling the vehicle
DE112020003033T5 (en) Method and apparatus for improving a geolocation database
JP2019121048A (en) Mobile body, mobile body system, destination narrowing method and program for narrowing down destination through conversation with passenger
JP7092656B2 (en) Simulation evaluation device, train operation simulation system, train operation simulation evaluation method, and program
CN113034958B (en) Route selection method and device
WO2021258671A1 (en) Assisted driving interaction method and apparatus based on vehicle-mounted digital human, and storage medium
JP2021167823A (en) Route search device, control method, program, and storage medium
JP2024019213A (en) Driving evaluation model adaptation device, terminal equipment, control method, program, and storage medium
JP7264804B2 (en) Recommendation system, recommendation method and program
JP7148296B2 (en) In-vehicle robot
JP7359208B2 (en) Information processing device, information processing method and program
KR20200000621A (en) Dialogue processing apparatus, vehicle having the same and dialogue processing method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant