CN113034958B - Route selection method and device - Google Patents
- Publication number: CN113034958B (application CN202110231278.XA)
- Authority: CN (China)
- Legal status: Active (an assumption by Google, not a legal conclusion)
Classifications
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/09—Arrangements for giving variable traffic instructions
- G08G1/0962—Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
- G08G1/0968—Systems involving transmission of navigation instructions to the vehicle
- G08G1/096833—Systems involving transmission of navigation instructions to the vehicle where different aspects are considered when computing the route
Abstract
The invention provides a route selection method and device. The method comprises: acquiring state information of persons in the vehicle and parameter information corresponding to candidate routes; fusing the state information and the parameter information to obtain a fusion result corresponding to each candidate route; and determining a target route based on the fusion result. Because the state information of the persons in the vehicle is fused with the parameter information of the candidate routes, and the target route is determined from the candidate routes based on the fusion result, the planned route better suits the physical and mental states of the persons in the vehicle, improving driving comfort and safety.
Description
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to a route selection method and a route selection device.
Background
With the development of artificial intelligence, automatic driving and driver assistance technologies are gradually entering people's daily lives. Planning a vehicle's driving route is a core link of both technologies, and how to obtain a suitable route is therefore an important problem for automatic driving and driver assistance.
In current route-planning schemes, several alternative routes are typically generated from the start point and end point entered by the user, combined with physical parameters such as vehicle type, road restrictions and weather conditions, and the user selects a preferred route among them. However, the routes planned by such schemes offer poor comfort and a low safety factor.
Disclosure of Invention
The invention provides a route selection method and device to overcome the defects of the prior art, in which the planned route offers poor comfort and a low safety factor, so that the planned route better suits the physical and mental states of the persons in the vehicle and driving comfort and safety are improved.
The invention provides a route selection method, which comprises the following steps: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
According to the route selection method provided by the invention, the step of acquiring the state information of people in the vehicle comprises the following steps: acquiring at least one of audio data and image data of people in the vehicle; determining the status information based on at least one of the audio data and the image data.
According to the route selection method provided by the invention, the step of acquiring the state information of people in the vehicle further comprises the following steps: acquiring target limit data; the determining the state information based on at least one of the audio data and the image data includes: determining the status information based on the target limit data and at least one of the audio data and the image data.
According to a route selection method provided by the present invention, the determining the status information based on the target restriction data and at least one of the audio data and the image data, includes: carrying out frequency domain vectorization processing on the audio data to obtain audio characteristics, and/or carrying out time sequence vectorization processing on the image data to obtain image characteristics; performing coding vectorization processing on the target limiting data to obtain target limiting characteristics; determining the state information based on the target limiting feature and at least one of the audio feature and the image feature.
According to a routing method provided by the present invention, the determining the status information based on the target restriction feature and at least one of the audio feature and the image feature includes: fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features; obtaining coded data based on the initial marking characteristic and the target fusion characteristic; and taking the first node information of the coded data as the state information.
According to a route selection method provided by the present invention, the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: and splicing the state information and the parameter information to obtain the fusion result.
According to a route selection method provided by the present invention, the determining a target route based on the fusion result includes: determining scoring information for the candidate routes based on the fusion results; determining a target route from the candidate routes based on the scoring information.
The present invention also provides a route selection device, including: the acquisition module is used for acquiring the state information of people in the vehicle and the parameter information corresponding to the candidate route; the fusion module is used for fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and the determining module is used for determining a target route based on the fusion result.
The invention also provides an electronic device, comprising a memory, a processor and a computer program stored on the memory and operable on the processor, wherein the processor implements the steps of any of the routing methods described above when executing the computer program.
The invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of any of the routing methods described above.
Drawings
To illustrate the technical solutions of the present invention or the prior art more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a routing method provided by the present invention;
FIG. 2 is a flow chart illustrating step 110 of the routing method provided by the present invention;
FIG. 3 is a flow chart illustrating step 130 of the routing method provided by the present invention;
FIG. 4 is a schematic diagram of an algorithm of a routing method provided by the present invention;
FIG. 5 is a second schematic diagram of the algorithm of the route selection method provided by the present invention;
FIG. 6 is a third schematic diagram of the algorithm of the route selection method provided by the present invention;
FIG. 7 is a schematic diagram of a routing device according to the present invention;
FIG. 8 is a schematic diagram of an acquisition module of the routing device according to the present invention;
FIG. 9 is a schematic diagram of a determining module of the routing device provided in the present invention;
fig. 10 is a schematic structural diagram of an electronic device provided by the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The routing method and apparatus of the present invention are described below with reference to fig. 1-10.
As shown in fig. 1, the present invention provides a route selection method comprising steps 110 to 130 as follows.
And step 110, acquiring state information of people in the vehicle and parameter information corresponding to the candidate route.
It can be understood that route planning is required while the vehicle is driving. At present, routes are mainly planned directly from route parameter information, and the user chooses among the planned candidate routes, for example under planning rules such as shortest distance, shortest time, congestion avoidance or highway priority. The path parameter information used for planning may include real-time parameters of the vehicle or the road, such as road condition information, traffic flow information, time information, distance information and the number of traffic lights.
For example, candidate routes may be determined as follows: the start position and end position are acquired, a navigation area is determined from them, the traffic light information of each intersection in the navigation area is acquired, and the route with the shortest travel time from start to end is determined from the traffic light information of each intersection and the current driving speed. By combining the real-time traffic light information of each intersection in the navigation area, the planning process simulates the user's actual driving conditions as closely as possible and ensures that the shortest path to each intersection is computed accurately. Candidate routes are generated from the shortest paths from the start position to the end position, the user is notified of the candidate routes, and a navigation route can be recommended to the user with each route's travel time as a reference.
According to the starting point and the end point input by the user, a plurality of candidate routes can be planned by combining the pre-stored map data, and each candidate route corresponds to the parameter information.
However, current route-planning methods do not consider the actual condition of the persons in the vehicle. For example, the vehicle may carry passengers with special physical conditions, such as pregnant women, elderly people, children or patients. If the planned route is congested or the road surface is uneven, the vehicle may repeatedly stop and start or jolt while driving, making the ride extremely uncomfortable and potentially harmful to the health of such passengers. This is a technical defect of the prior art.
According to the embodiment of the invention, the state information of the people in the vehicle can be obtained from the interactive information input by the people in the vehicle, can also be obtained from the human body parameters monitored by a sensor in the vehicle, and can also be extracted from voice or video monitored by electronic equipment in the vehicle, and the state information of the people in the vehicle can be used for reflecting the human body states such as the physical condition, the age condition, the emotional condition and the like of the people in the vehicle.
And step 120, fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route.
It can be understood that there may be multiple candidate routes. The parameter information of each candidate route is fused with the state information of the persons in the vehicle acquired in the preceding step; matching features may be extracted from the state information according to the parameter information and fused at the feature level to obtain the fusion results, each of which corresponds to one candidate route.
That is to say, each candidate route corresponds to one fusion result, and the fusion result can reflect real-time parameters of vehicles or roads, such as road condition information, traffic flow information, time information, distance information, traffic light number information and the like, and physical and mental conditions of people in the vehicle, such as physical conditions, age conditions, emotional conditions and the like.
And step 130, determining a target route based on the fusion result.
It is to be understood that, since each candidate route corresponds to a fusion result, the candidate routes may be evaluated according to the fusion results. For example, a grade may be assigned to each candidate route according to the fusion result: the candidate routes may be graded as high, medium and low according to comfort level, or ranked as first, second, third, and so on. A score may also be determined for each candidate route according to the fusion result, scoring the candidate routes by comfort level, for example on a scale of 0-100, so that a given candidate route might score 85.
A high-grade route among the candidate routes can be used as the target route. If there are several high-grade candidate routes, they can all be presented as target routes for the user to choose from; alternatively, the highest-grade candidate route can be used directly as the target route.
The highest scoring route of the candidate routes may also be used as the target route.
The unique target route may be presented directly to the user or the vehicle may be controlled to drive automatically directly according to the unique target route.
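The grade/score selection described above can be sketched minimally; the function name and the tie-handling policy (first highest wins) are this sketch's assumptions, not the patent's specification:

```python
def pick_target_route(scores):
    """Return the index of the highest-scoring candidate route.

    scores: one score per candidate route, e.g. comfort scores on a 0-100 scale.
    """
    return max(range(len(scores)), key=lambda i: scores[i])

candidate_scores = [72, 85, 64]   # one score per candidate route
target = pick_target_route(candidate_scores)
print(target)  # → 1
```

With a single highest-scoring route, the result can be shown to the user directly or used for automatic driving, as described above.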
According to the route selection method provided by the invention, the state information of the personnel in the vehicle and the parameter information corresponding to the candidate route are fused, the target route is determined from the candidate route based on the fusion result, the planned route is more suitable for the physical and mental states of the personnel in the vehicle, and the driving comfort and the safety can be improved.
As shown in fig. 2, in some embodiments, the step 110 of acquiring the status information of the vehicle occupant includes: step 111 to step 112 as follows.
And step 111, acquiring at least one of audio data and image data of people in the vehicle.
It is understood that a microphone may be installed in the vehicle to collect sound signals inside it, such as the conversation of vehicle occupants. The collected sound signal may be transmitted to the vehicle's controller, where the audio data of the vehicle occupants is formed.
A camera may also be installed in the vehicle to capture images or video (video being images captured at a certain frame rate). The camera can photograph the posture, expression and movements of persons in the vehicle, and the controller can acquire the image data of the persons in the vehicle from the camera.
Here, the controller may acquire both the audio data and the image data, or may acquire only one of the audio data and the image data.
Here, the status information of the vehicle occupants may be extracted from the audio data. For example, the audio data may contain an occupant saying, "I feel sleepy and want to sleep"; through voice recognition it can be confirmed that the occupant is sleepy and wants to rest, so the occupant can be determined to be in a drowsy state.

Of course, the state information can also be extracted from the image data. For example, the image data may include a face image of an occupant, whose age can be obtained through face recognition; if one occupant is a 70-year-old, it can be determined that the vehicle carries an elderly passenger.
Of course, if the controller acquires both audio data and image data, the two can be fused to obtain audio-video fusion data, from which richer state information of the persons in the vehicle can be extracted.
The state information of the persons in the vehicle can be obtained through the audio data and/or the video data, so that the fusion result contains more characteristics related to the persons in the vehicle, the obtained target route can better accord with the physical and mental states of the persons in the vehicle, and the accuracy of determining the target route can be improved.
In some embodiments, the step 110 of acquiring the status information of the vehicle interior personnel further includes: target limit data is acquired.
The target limit data is used to correct the audio data and the image data. It may be text data: key information extracted from the conversation of persons in the vehicle before the vehicle starts driving or while it is driving. The target limit data may be input manually or by voice, and the controller obtains the text data from the user's manual or voice input.
For example, before the vehicle starts driving, the driver manually inputs "go through the avenue," which is a defined condition for route planning.
For example, when a person in the vehicle speaks to "refuel the vehicle when passing through a gas station a" during the driving of the vehicle, this is necessary passing point information.
The step 112 of determining the state information based on at least one of the audio data and the image data includes: the status information is determined based on the target limit data and at least one of the audio data and the image data.
It is to be understood that the status information may be derived from the audio data and the target limit data, the status information may be derived from the image data and the target limit data, or the status information may be derived from the audio data, the image data, and the target limit data collectively.
The target limit data may correct the audio data or video data. That is, the target limit data takes priority over the audio and video data, which merely record the conditions inside the vehicle during driving; the audio and video data are less strongly associated with the driving task than the target limit data.
The audio data and the video data are corrected by adopting the target limiting data, so that the planned target route can be ensured to better meet the actual requirements of users, and key driving tasks are not missed.
As shown in fig. 4, in some embodiments, determining the status information based on the target limit data and at least one of the audio data and the image data includes: performing frequency-domain vectorization on the audio data to obtain audio features, and/or performing time-series vectorization on the image data to obtain image features.
Frequency-domain vectorization may be applied to the audio data audio_i, that is, vectorization using frequency-domain characteristics. Modeling may use a Long Short-Term Memory network (LSTM): the audio data audio_i is input into the LSTM, which outputs the audio feature A_i.

The image data video_i may undergo time-series vectorization: video_i is first encoded and reduced in dimension by a CNN (convolutional neural network), then modeled as a time series by an LSTM, and the last node of the LSTM output layer is taken as the image feature V_i.
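As a hedged illustration of the frequency-domain vectorization step (the patent gives no concrete code; the frame size, feature layout and function name below are assumptions), one frame of audio can be turned into a magnitude-spectrum feature vector before being fed to a sequence model:

```python
import cmath
import math

def magnitude_spectrum(frame):
    """Naive DFT magnitude of one audio frame (illustrative, O(n^2))."""
    n = len(frame)
    return [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]  # keep the non-redundant half of the spectrum

# A sine at DFT bin 1, sampled over 8 points: energy concentrates in bin 1.
frame = [math.sin(2 * math.pi * t / 8) for t in range(8)]
feats = magnitude_spectrum(frame)
print(max(range(len(feats)), key=lambda k: feats[k]))  # → 1
```

In practice a fast FFT over many overlapping frames would be used, with the per-frame vectors forming the sequence that the LSTM consumes.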
And carrying out coding vectorization processing on the target limiting data to obtain target limiting characteristics.
The target limit data may be text data L_i; the audio data audio_i may correspond to the text data L_i, and the image data video_i may likewise correspond to the text data L_i.

The text data L_i can be encoded directly by a BERT Input Embedding layer to obtain the target limit feature E_i, which may be a text feature. This encoding-vectorization process can be realized by looking up the text in an embedding table whose parameters are obtained during model training.
The state information is determined based on the target limit feature and at least one of the audio feature and the image feature.
The audio feature and the target restriction feature may be combined to obtain the state information, the image feature and the target restriction feature may be combined to obtain the state information, and the audio feature, the image feature and the target restriction feature may be combined to obtain the state information.
In some embodiments, determining the status information based on the target limiting feature and at least one of the audio feature and the image feature comprises:
fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features;
the target may be restricted to a characteristic E i Audio feature A i And image feature V i As input of the Multi-Modal Merge Gate, the target fusion characteristics are output
As shown in FIG. 5, in some embodiments, the target limit feature E_i, the audio feature A_i and the image feature V_i may first be fused by an Attention Gate algorithm to obtain a reference fusion feature.

In the Attention Gate formula (shown as an image in the original publication), E_i is the target limit feature, A_i the audio feature and V_i the image feature, and the other parameters are trainable. One gate function is computed for the image features and another for the audio features; the role of these gate functions is to select and filter the image and audio features.
The reference fusion feature is then fused with the target limit feature E_i by a Merge algorithm to obtain the target fusion feature. In the Merge formula (likewise shown as an image in the original publication), α and β are both hyperparameters.
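The gating-and-merge idea can be sketched as follows; the sigmoid gates, the element-wise weighting and the α/β combination are this sketch's assumptions about the unreproduced formulas, not the patent's exact equations:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def attention_gate(E, A, V, w_a, w_v):
    """Gate the audio/image features against the target limit feature, then sum."""
    g_a = [sigmoid(w_a * (e + a)) for e, a in zip(E, A)]  # audio gate values
    g_v = [sigmoid(w_v * (e + v)) for e, v in zip(E, V)]  # image gate values
    return [ga * a + gv * v for ga, a, gv, v in zip(g_a, A, g_v, V)]

def merge(ref, E, alpha=0.6, beta=0.4):
    """Weighted merge of the reference fusion feature with E_i (α, β hyperparameters)."""
    return [alpha * r + beta * e for r, e in zip(ref, E)]

E_i, A_i, V_i = [0.5, -0.2], [1.0, 0.3], [-0.4, 0.8]
ref = attention_gate(E_i, A_i, V_i, w_a=1.0, w_v=1.0)
F_i = merge(ref, E_i)
print(len(F_i))  # → 2
```

In the trained model the gate parameters (here the scalars w_a, w_v) would be learned jointly with the rest of the network.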
Encoded data is obtained based on the initial mark feature and the target fusion feature, and the first node information of the encoded data is used as the state information.
It will be appreciated that the initial mark feature E_CLS can be obtained from an artificially constructed special initial mark CLS, which may be a constant; CLS is encoded directly by the BERT Input Embedding layer to obtain E_CLS.

The initial mark feature E_CLS and the target fusion features may be feature-encoded by a BERT Encoder to obtain the encoded data T_1 ... T_m, from which the first node information C (the first node of the BERT Encoder output layer) is taken. Because the BERT Encoder models the sequence bidirectionally, C captures global information, that is, the overall characteristics of the text, image and audio. C, the global feature vector obtained by modeling the whole multi-modal feature sequence, is used as the state information of the persons in the vehicle.
It should be noted that BERT may be replaced by other sequence-modeling models such as LSTM or GPT (Generative Pre-Training); the embodiment is not limited in this respect.
In some embodiments, the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: and splicing the state information and the parameter information to obtain a fusion result.
It can be understood that the road condition data R_i of a candidate route contains the feature information of that route, such as real-time route information: road condition information, traffic flow information, time information, distance information and the number of traffic lights. The multiple features in R_i are feature-encoded by a Route Feature Encoder to obtain the parameter information. This encoding mainly quantizes and vectorizes information such as the road condition and traffic flow of the candidate route, and then constructs features through a neural network.

The parameter information of each candidate route can be spliced with the state information C and passed through a fully connected layer to obtain the fusion result, so that the finally obtained fusion results remain in one-to-one correspondence with the candidate routes.
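A hedged sketch of this splicing step (the dimensions, weights and names are illustrative, since the patent does not specify them): the state vector C is concatenated with each route's parameter vector and projected through one fully connected layer.

```python
def fully_connected(x, weights, bias):
    """One dense layer: y = W·x + b, with W given row by row."""
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def fuse(C, route_params, weights, bias):
    """Concatenate the state info C with route parameters, then apply the FC layer."""
    return fully_connected(C + route_params, weights, bias)

C = [0.2, 0.7]                                # in-vehicle state vector
routes = [[0.1, 0.9, 0.3], [0.8, 0.2, 0.5]]   # per-route parameter vectors
W = [[0.5] * 5, [-0.3] * 5]                   # toy 2x5 weight matrix
b = [0.0, 0.1]
fusions = [fuse(C, r, W, b) for r in routes]
print(len(fusions), len(fusions[0]))  # → 2 2
```

One fusion vector per candidate route comes out, matching the one-to-one correspondence described above.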
As shown in fig. 3, in some embodiments, the step 130 of determining the target route based on the fusion result includes: as follows from step 131 to step 132.
In step 131, based on the fusion result, scoring information of the candidate route is determined.
It can be understood that the candidate routes may be scored according to the fusion result corresponding to each candidate route, obtaining the scoring information Y_i. The scoring information Y_i may be presented as a score value, for example on a scale of 0 to 100, so that the score of a given candidate route might be 85.
It is understood that the candidate route may be selected according to the scoring information, such as the candidate route with the highest scoring as the target route.
Determining the target route through scoring information is relatively objective: the candidate routes are comprehensively evaluated in a quantitative manner, which can improve the objectivity of determining the target route.
Of course, real driving-scene data of users may be collected as training data. When a user uses the navigation product and has selected a destination, the recommended routes given by the system are R_i; the user's query text is L_i, the audio data acquired during the trip is audio_i, the image data is video_i, and the route selected by the user is Y_i. The correct route together with the multi-modal input forms a positive training pair with label 1, and the routes not selected by the user together with the multi-modal input form negative training pairs with label 0. The loss function can take the standard binary cross-entropy form:

Loss = −Σ_i [ Y_i · log(Ŷ_i) + (1 − Y_i) · log(1 − Ŷ_i) ]

where Y_i is the label corresponding to the user's selection and Ŷ_i is the model's prediction.
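A minimal numeric sketch of such a binary cross-entropy loss over positive/negative route pairs (the patent's loss appears only as an image, so this standard form is an assumption):

```python
import math

def bce_loss(labels, preds, eps=1e-12):
    """Binary cross-entropy over user-selected (1) / unselected (0) routes."""
    total = 0.0
    for y, p in zip(labels, preds):
        p = min(max(p, eps), 1 - eps)  # clamp predictions for numerical stability
        total -= y * math.log(p) + (1 - y) * math.log(1 - p)
    return total

labels = [1, 0, 0]        # the user picked route 0 out of three candidates
preds = [0.9, 0.2, 0.1]   # model scores per candidate route
loss = bce_loss(labels, preds)
print(round(loss, 3))  # → 0.434
```

Training drives the predicted score of the user-selected route toward 1 and those of the rejected routes toward 0.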
It should be noted that, as shown in fig. 6, a Route Feature Encoder model may be adopted when acquiring the candidate routes and their corresponding parameter information, where R_i is a candidate route and Light_i, Time_i and Dist_i are the number of traffic lights, the time consumption and the distance of each route. Other road condition information (the traffic flow, whether an accident has occurred, whether there is construction, and so on) may also serve as features inside R_i.
With Light_max the maximum number of traffic lights, a quantization factor for the traffic-light-count feature is computed, finally yielding the quantized traffic-light grade Light_q; the other features are quantized to grades in the same way. (The quantization formula appears as an image in the original publication.)

Feature Embedding: FT_i = Feature_Embedding_lookup(Light_q). A set of embeddings is kept for each feature dimension; these parameters are trained synchronously with the model, and the other features are computed in the same way.

The features FL_i, FT_i and FD_i are spliced and passed through a fully connected layer to obtain the final route information feature.
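To illustrate the quantize-then-embed pipeline above (the quantization levels, table size and names are assumptions made for this sketch):

```python
import random

random.seed(1)
LEVELS, DIM = 10, 3

def quantize(value, max_value, levels=LEVELS):
    """Map a raw feature (e.g. traffic-light count) to a discrete grade."""
    return min(levels - 1, int(value / max_value * levels))

# One trainable embedding vector per quantized grade (toy values here).
light_table = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(LEVELS)]

def route_feature(light_count, light_max):
    light_q = quantize(light_count, light_max)   # Light_q
    ft = light_table[light_q]                    # FT_i = lookup(Light_q)
    return ft  # time/distance features would be embedded and spliced likewise

feat = route_feature(light_count=7, light_max=20)
print(len(feat))  # → 3
```

In the full encoder the spliced per-feature embeddings would pass through the fully connected layer to give the route information feature.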
The following describes the route selection device provided by the present invention, and the route selection device described below and the route selection method described above can be referred to correspondingly.
As shown in fig. 7, the present invention also provides a route selection device, including: an acquisition module 710, a fusion module 720, and a determination module 730.
The obtaining module 710 is configured to obtain state information of people inside the vehicle and parameter information corresponding to the candidate route.
And the fusion module 720 is configured to fuse the state information and the parameter information to obtain a fusion result corresponding to the candidate route.
A determining module 730, configured to determine the target route based on the fusion result.
As shown in fig. 8, in some embodiments, the obtaining module 710 includes: a first fetch submodule 810 and a second fetch submodule 820.
The first obtaining sub-module 810 is configured to obtain at least one of audio data and image data of a person in the vehicle.
A second obtaining sub-module 820 for determining the status information based on at least one of the audio data and the image data.
In some embodiments, the obtaining module 710 is further configured to: acquiring target limit data; the second obtaining sub-module is further configured to: determining the status information based on the target limit data and at least one of the audio data and the image data.
In some embodiments, the second acquisition sub-module further comprises a first processing sub-unit, a second processing sub-unit, and a third processing sub-unit.
The first processing subunit is configured to perform frequency domain vectorization processing on the audio data to obtain an audio feature, and/or perform time sequence vectorization processing on the image data to obtain an image feature.
And the second processing subunit is used for performing coding vectorization processing on the target limiting data to obtain a target limiting characteristic.
A third processing subunit for determining the status information based on the target limiting feature and at least one of the audio feature and the image feature.
In some embodiments, the third processing subunit is further to: fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features; obtaining coded data based on the initial marking characteristic and the target fusion characteristic; and taking the first node information of the coded data as the state information.
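The "initial marking feature" and "first node information" of the third processing subunit read like a trainable CLS-style token prepended to the target fusion features, with the state information taken from the encoder output at that position. The following PyTorch sketch illustrates that reading; it is speculative, not the patent's disclosed implementation, and all names and dimensions are invented.

```python
import torch
import torch.nn as nn


class StateEncoder(nn.Module):
    """Prepends a trainable initial marking token to the target fusion
    features, encodes the sequence, and reads the state information from
    the first output position (CLS-style pooling)."""

    def __init__(self, dim: int = 64, heads: int = 4, layers: int = 2):
        super().__init__()
        # The "initial marking feature": one trainable token vector.
        self.mark = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, fused: torch.Tensor) -> torch.Tensor:
        # fused: (batch, seq, dim) target fusion features
        mark = self.mark.expand(fused.size(0), -1, -1)
        encoded = self.encoder(torch.cat([mark, fused], dim=1))
        # "First node information" of the encoded data = state information.
        return encoded[:, 0]
```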
In some embodiments, the fusion module 720 is further configured to perform splicing processing on the state information and the parameter information to obtain the fusion result.
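The splicing described for the fusion module 720 is plain feature-level concatenation; a minimal sketch follows, with the function name and dimensions invented for illustration.

```python
import numpy as np


def fuse(state_info: np.ndarray, param_info: np.ndarray) -> np.ndarray:
    """Feature-level fusion by splicing: concatenate the occupant-state
    vector and a candidate route's parameter vector along the last axis."""
    return np.concatenate([state_info, param_info], axis=-1)
```

For example, splicing a 64-dimensional state vector with a 16-dimensional route-parameter vector yields an 80-dimensional fusion result per candidate route.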
As shown in fig. 9, in some embodiments, the determining module 730 includes: a first determination submodule 910 and a second determination submodule 920.
A first determining sub-module 910, configured to determine scoring information of the candidate route based on the fusion result.
A second determining submodule 920, configured to determine a target route from the candidate routes based on the scoring information.
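The score-then-select behavior of the two determining submodules can be sketched as below. The function and parameter names are invented; the patent does not specify how scoring information is computed, so the scoring function is passed in as a parameter.

```python
def select_target_route(fusion_results, score_fn):
    """Scores each candidate route's fusion result with score_fn and
    returns the index of the highest-scoring candidate route."""
    scores = [score_fn(result) for result in fusion_results]
    return max(range(len(scores)), key=scores.__getitem__)
```

For instance, with a scoring function that penalizes traffic-light count, the candidate with the fewest lights is selected.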
Fig. 10 illustrates a schematic physical structure diagram of an electronic device. As shown in fig. 10, the electronic device may include: a processor (processor) 1010, a communication interface (Communications Interface) 1020, a memory (memory) 1030, and a communication bus 1040, wherein the processor 1010, the communication interface 1020, and the memory 1030 communicate with each other via the communication bus 1040. The processor 1010 may invoke logic instructions in the memory 1030 to perform a route selection method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
Furthermore, the logic instructions in the memory 1030 may be implemented in the form of software functional units and, when sold or used as an independent product, stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present invention that substantially contributes to the prior art may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
In another aspect, the present invention also provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the route selection method provided by the above methods, the method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
In yet another aspect, the present invention also provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the route selection method provided above, the method comprising: acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; and determining a target route based on the fusion result.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. A method of route selection, comprising:
acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; the acquiring of the state information of the personnel in the vehicle comprises the following steps: acquiring at least one of audio data and image data of people in the vehicle; determining the status information based on at least one of the audio data and the image data; the acquiring of the state information of the personnel in the vehicle further comprises: acquiring target limit data; the determining the status information based on at least one of the audio data and the image data comprises: determining the status information based on the target limit data and at least one of the audio data and the image data;
fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: extracting corresponding matched features from the state information based on the parameter information, and fusing from a feature level to obtain a fusion result;
and determining a target route based on the fusion result.
2. The route selection method according to claim 1, wherein the determining the status information based on the target limitation data and at least one of the audio data and the image data comprises:
performing frequency domain vectorization processing on the audio data to obtain audio characteristics, and/or performing time sequence vectorization processing on the image data to obtain image characteristics;
performing coding vectorization processing on the target limiting data to obtain target limiting characteristics;
determining the status information based on the target limiting feature and at least one of the audio feature and the image feature.
3. The route selection method according to claim 2, wherein the determining the status information based on the target limiting feature and at least one of the audio feature and the image feature comprises:
fusing at least one of the audio features and the image features with the target limiting features to obtain target fusion features;
obtaining coded data based on the initial marking characteristic and the target fusion characteristic;
and taking the first node information of the coded data as the state information.
4. The method according to any one of claims 1 to 3, wherein the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes:
and splicing the state information and the parameter information to obtain the fusion result.
5. The route selection method according to any one of claims 1 to 3, wherein the determining a target route based on the fusion result includes:
determining scoring information for the candidate routes based on the fusion results;
determining a target route from the candidate routes based on the scoring information.
6. A route selection device, comprising:
the acquisition module is used for acquiring state information of people in the vehicle and parameter information corresponding to the candidate route; the acquiring of the state information of the personnel in the vehicle comprises the following steps: acquiring at least one of audio data and image data of people in the vehicle; determining the status information based on at least one of the audio data and the image data; the acquiring of the state information of the personnel in the vehicle further comprises: acquiring target limit data; the determining the state information based on at least one of the audio data and the image data includes: determining the status information based on the target limit data and at least one of the audio data and the image data;
the fusion module is used for fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route; the fusing the state information and the parameter information to obtain a fusion result corresponding to the candidate route includes: extracting corresponding matched features from the state information based on the parameter information, and fusing from a feature level to obtain a fusion result;
and the determining module is used for determining a target route based on the fusion result.
7. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the steps of the routing method according to any of claims 1 to 5 are implemented by the processor when executing the program.
8. A non-transitory computer readable storage medium, having stored thereon a computer program, wherein the computer program, when executed by a processor, implements the steps of the routing method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110231278.XA CN113034958B (en) | 2021-03-02 | 2021-03-02 | Route selection method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113034958A (en) | 2021-06-25 |
CN113034958B (en) | 2023-01-17 |
Family
ID=76465473
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110231278.XA Active CN113034958B (en) | 2021-03-02 | 2021-03-02 | Route selection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113034958B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113902209A (en) * | 2021-10-20 | 2022-01-07 | 中国联合网络通信集团有限公司 | Travel route recommendation method, edge server, cloud server, equipment and medium |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102020003188A1 (en) * | 2020-05-28 | 2020-08-13 | Daimler Ag | Method for the protection of personal data of a vehicle occupant |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8364395B2 (en) * | 2010-12-14 | 2013-01-29 | International Business Machines Corporation | Human emotion metrics for navigation plans and maps |
KR102043637B1 (en) * | 2013-04-12 | 2019-11-12 | 한국전자통신연구원 | Route guidance apparatus based on emotion and method thereof |
CN104848869B (en) * | 2015-05-29 | 2017-12-19 | 李涛 | A kind of digital map navigation method, apparatus and system |
US20170030726A1 (en) * | 2015-07-31 | 2017-02-02 | International Business Machines Corporation | Selective human emotion metrics for navigation plans and maps |
US10192171B2 (en) * | 2016-12-16 | 2019-01-29 | Autonomous Fusion, Inc. | Method and system using machine learning to determine an automotive driver's emotional state |
CN109211254A (en) * | 2017-07-06 | 2019-01-15 | 新华网股份有限公司 | information display method and device |
CN108801282A (en) * | 2018-06-13 | 2018-11-13 | 新华网股份有限公司 | Air navigation aid, device and the computing device of vehicle traveling |
CN108534795A (en) * | 2018-06-26 | 2018-09-14 | 百度在线网络技术(北京)有限公司 | Selection method, device, navigation equipment and the computer storage media of navigation routine |
CN110370275A (en) * | 2019-07-01 | 2019-10-25 | 夏博洋 | Mood chat robots based on Expression Recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |