CN116399347A - Vehicle navigation method, system, device and storage medium based on wearable equipment - Google Patents


Info

Publication number
CN116399347A
CN116399347A (application number CN202310395563.4A)
Authority
CN
China
Prior art keywords
target
positioning information
vehicle
navigation
navigation route
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310395563.4A
Other languages
Chinese (zh)
Inventor
曾丽吟
庞丽君
杨凯军
梁翠燕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GAC Honda Automobile Co Ltd
Guangqi Honda Automobile Research and Development Co Ltd
Original Assignee
GAC Honda Automobile Co Ltd
Guangqi Honda Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GAC Honda Automobile Co Ltd, Guangqi Honda Automobile Research and Development Co Ltd filed Critical GAC Honda Automobile Co Ltd
Priority to CN202310395563.4A
Publication of CN116399347A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20: Instruments for performing navigational calculations
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Navigation (AREA)

Abstract

The invention discloses a vehicle navigation method, system, device and storage medium based on a wearable device, comprising the following steps: acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle; acquiring environment image information shot by the target wearable device, and determining environment image time sequence data according to the environment image information; determining a target area according to the first positioning information, acquiring a pre-trained action intention recognition model of the target area, and inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person; and adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route, and sending the second navigation route to the target vehicle. The invention improves the driving efficiency and navigation accuracy of the vehicle and can be widely applied to the technical field of vehicle navigation.

Description

Vehicle navigation method, system, device and storage medium based on wearable equipment
Technical Field
The invention relates to the technical field of vehicle navigation, in particular to a vehicle navigation method, system and device based on wearable equipment and a storage medium.
Background
Automobiles are a common means of transport and are indispensable in daily life. A driver often needs to drive to a destination to pick up a child or an elderly family member, and typically has to communicate repeatedly with the other party to determine the specific position before entering it into the vehicle navigation system. This is inefficient, oral descriptions are error-prone, and the accuracy of vehicle navigation suffers as a result.
A wearable device is a portable device that can be worn directly on a user or integrated into a user's clothing or accessories. In the prior art, a wearable device such as a smart watch or an electronic bracelet is usually provided for a child or an elderly person in the family, and the positioning information of the child or elderly person is acquired in real time through the wearable device, so that vehicle navigation can be performed according to the positioning information. Such methods suffer from the following drawbacks: 1) They rely excessively on positioning accuracy; even a small positioning deviation (such as the actual position being on the left side of the road while the reported position is on the right side) introduces a large navigation error, affecting the driving efficiency of the vehicle and the accuracy of navigation. 2) They do not consider that the other party may move while the vehicle is on its way, so the navigation deviates, the driving efficiency of the vehicle and the navigation accuracy are affected, and the person carrying the wearable device cannot be found quickly and accurately.
Disclosure of Invention
The present invention aims to solve at least one of the technical problems existing in the prior art to a certain extent.
Therefore, an object of the embodiments of the present invention is to provide a vehicle navigation method based on a wearable device, which improves the driving efficiency of the vehicle and the accuracy of navigation, so that a vehicle owner can quickly and accurately find a person carrying the wearable device.
Another object of an embodiment of the present invention is to provide a vehicle navigation system based on a wearable device.
In order to achieve the technical purpose, the technical scheme adopted by the embodiment of the invention comprises the following steps:
in a first aspect, an embodiment of the present invention provides a vehicle navigation method based on a wearable device, including the following steps:
acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle;
acquiring environment image information shot by the target wearable device, and determining environment image time sequence data according to the environment image information;
determining a target area according to the first positioning information, acquiring a pre-trained action intention recognition model of the target area, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person;
Adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route, and sending the second navigation route to the target vehicle;
the target person carries the target wearable device, and the target wearable device and the target vehicle are mutually bound.
Further, in one embodiment of the present invention, the step of acquiring the first positioning information of the target wearable device and the second positioning information of the target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and transmitting the first navigation route to the target vehicle specifically includes:
acquiring first positioning information of the target wearable device, second positioning information of the target vehicle and real-time road condition information through a cloud platform;
determining a navigation starting point position according to the second positioning information, and determining a navigation end point position according to the first positioning information;
and planning a path according to the navigation starting point position, the navigation end point position and the real-time road condition information to obtain the first navigation route, and sending the first navigation route to the target vehicle through the cloud platform.
Further, in one embodiment of the present invention, the step of acquiring the environmental image information captured by the target wearable device and determining the environmental image time sequence data according to the environmental image information specifically includes:
invoking a camera of the target wearable device through a cloud platform, and acquiring the environment image information shot by the camera at a preset sampling frequency;
and sequencing the environmental image information according to the sampling time to obtain the environmental image time sequence data.
Further, in one embodiment of the present invention, the vehicle navigation method further includes a step of pre-training an action intention recognition model, which specifically includes:
respectively acquiring a plurality of environmental sample time sequence data shot by wearable equipment carried by a tester when the tester moves in a plurality of preset areas, and determining action intention labels of the environmental sample time sequence data according to the movement direction of the tester;
constructing a training sample set according to the environment sample time sequence data and the corresponding action intention labels;
inputting the training sample set into a pre-constructed bidirectional recurrent neural network for training, and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model;
determining the region labels of the action intention recognition models according to the preset regions to obtain action intention recognition models with a plurality of different region labels;
the bidirectional recurrent neural network comprises an input layer, a forward hidden layer, a reverse hidden layer and an output layer.
Further, in an embodiment of the present invention, the step of inputting the training sample set into a pre-constructed bidirectional recurrent neural network to perform training, and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model specifically includes:
inputting the environmental sample time sequence data to the input layer, and calculating to obtain a first hidden state vector through the forward hidden layer;
inputting the time sequence data of the environmental samples into the input layer in a reverse order, and calculating to obtain a second hidden state vector through the reverse hidden layer;
performing reverse order processing on the second hidden state vector to obtain a third hidden state vector, and performing splicing processing on the first hidden state vector and the third hidden state vector to obtain a fourth hidden state vector;
inputting the fourth hidden state vector to the output layer, and outputting to obtain an action intention prediction result;
Determining a loss value of the bidirectional recurrent neural network according to the action intention prediction result and the action intention label;
updating model parameters of the bidirectional recurrent neural network according to the loss value, and returning to the step of inputting the environment sample time sequence data to the input layer;
and stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining the trained action intention recognition model.
Further, in one embodiment of the present invention, the step of determining a target area according to the first positioning information and acquiring a pre-trained action intention recognition model of the target area specifically includes:
and determining a corresponding target area according to the first positioning information, matching the target area with the area tag, and determining the action intention recognition model corresponding to the target area.
Further, in one embodiment of the present invention, the step of adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route specifically includes:
predicting a movement route of the target person according to the movement intention recognition result and the first positioning information;
Predicting a first running duration of the target vehicle according to the first navigation route, and predicting a target position of the target person according to the first running duration and the action route;
and adjusting the navigation end point of the first navigation route according to the target position to obtain the second navigation route.
In a second aspect, an embodiment of the present invention provides a vehicle navigation system based on a wearable device, including:
the navigation route generation module is used for acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle;
the environment image acquisition module is used for acquiring environment image information shot by the target wearable device and determining environment image time sequence data according to the environment image information;
the action intention recognition module is used for determining a target area according to the first positioning information, acquiring an action intention recognition model of the target area trained in advance, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person;
The navigation route adjustment module is used for adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route and sending the second navigation route to the target vehicle;
the target person carries the target wearable device, and the target wearable device and the target vehicle are mutually bound.
In a third aspect, an embodiment of the present invention provides a vehicle navigation apparatus based on a wearable device, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a wearable device-based vehicle navigation method as described above.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, in which a processor executable program is stored, the processor executable program being configured to perform the above-described vehicle navigation method based on a wearable device when executed by a processor.
The advantages and benefits of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
According to the embodiment of the invention, first positioning information of the target wearable device and second positioning information of the target vehicle are acquired, a first navigation route is generated from them and sent to the target vehicle; environment image information shot by the target wearable device is then acquired in real time, and environment image time sequence data are determined from it; a target area is determined according to the first positioning information, the pre-trained action intention recognition model of that area is acquired, and the environment image time sequence data are input into the model to obtain an action intention recognition result for the target person; finally, the first navigation route is adjusted according to the recognition result to obtain a second navigation route, which is sent to the target vehicle. By acquiring the environment images shot by the target wearable device in real time after the initial navigation route is generated, forming time sequence data from them, and recognizing the target person's action intention, the embodiment can adjust the initial navigation route in real time. Even if the initial positioning information of the target person deviates or the target person keeps moving, the target vehicle can dynamically adjust its driving route en route, which improves the driving efficiency and navigation accuracy of the vehicle and allows the vehicle owner to find the person carrying the wearable device more conveniently and quickly.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the following description will refer to the drawings that are needed in the embodiments of the present invention, and it should be understood that the drawings in the following description are only for convenience and clarity to describe some embodiments in the technical solutions of the present invention, and other drawings may be obtained according to these drawings without any inventive effort for those skilled in the art.
Fig. 1 is a step flowchart of a vehicle navigation method based on a wearable device according to an embodiment of the present invention;
fig. 2 is a block diagram of a vehicle navigation system based on a wearable device according to an embodiment of the present invention;
fig. 3 is a block diagram of a vehicle navigation device based on a wearable device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention. The step numbers in the following embodiments are set for convenience of illustration only, and the order between the steps is not limited in any way, and the execution order of the steps in the embodiments may be adaptively adjusted according to the understanding of those skilled in the art.
In the description of the present invention, "a plurality" means two or more. If "first" and "second" are used to distinguish technical features, this should not be construed as indicating or implying relative importance, implicitly indicating the number of the indicated technical features, or implicitly indicating their precedence. Furthermore, unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art.
Referring to fig. 1, an embodiment of the present invention provides a vehicle navigation method based on a wearable device, which specifically includes the following steps:
s101, acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle.
In the embodiment of the invention, the target wearable device can be a smart watch, an electronic bracelet, or a similar device, and is carried by the target person. The target wearable device and the target vehicle establish a binding relationship in advance with the knowledge and authorization of the relevant personnel; when the owner of the target vehicle needs to find the target person through the target wearable device and drive to the corresponding place, the method provided by the embodiment of the invention can be adopted.
Specifically, positioning information of the target wearable device and the target vehicle is obtained, and an initial first navigation route is generated from this positioning information combined with real-time road conditions and fed back to the target vehicle. At this point the owner of the target vehicle can drive according to the first navigation route; in subsequent steps, the embodiment of the invention adjusts the first navigation route in real time. Step S101 specifically includes the following steps:
s1011, acquiring first positioning information of a target wearable device, second positioning information of a target vehicle and real-time road condition information through a cloud platform;
s1012, determining a navigation starting point position according to the second positioning information, and determining a navigation end point position according to the first positioning information;
and S1013, carrying out path planning according to the navigation starting point position, the navigation end point position and the real-time road condition information to obtain a first navigation route, and sending the first navigation route to the target vehicle through the cloud platform.
Specifically, in the embodiment of the invention, the target wearable device and the target vehicle are connected through the cloud platform. After acquiring the positioning information of the target wearable device and the target vehicle, the cloud platform performs path planning in combination with real-time road condition information to obtain a first navigation route that takes the current position of the target vehicle as the starting point and the current position of the target wearable device (i.e. the current position of the target person) as the end point. The first navigation route is displayed on the vehicle-mounted terminal of the target vehicle, so that the owner of the target vehicle can drive according to it.
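Steps S1011-S1013 can be sketched as follows. This is a hypothetical, minimal illustration only: real path planning would query a road network and live traffic through the cloud platform, so the great-circle distance here merely stands in for a routing engine, and all function names are assumptions rather than part of the disclosure.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two WGS-84 points, in kilometres.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def plan_first_route(vehicle_pos, wearable_pos):
    # Start = vehicle (second positioning information),
    # end = wearable device (first positioning information).
    return {
        "start": vehicle_pos,
        "end": wearable_pos,
        "distance_km": haversine_km(*vehicle_pos, *wearable_pos),
    }
```

The route dictionary is what a cloud platform might push to the vehicle-mounted terminal; a production system would attach the actual road-segment polyline instead of a straight-line distance.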
S102, acquiring environment image information shot by the target wearable device, and determining environment image time sequence data according to the environment image information.
Specifically, in the embodiment of the invention, the target wearable device is provided with a camera that can be remotely invoked by the cloud platform and is used to capture the environment around the target wearable device. It should be understood that the cloud platform must be authorized in advance to invoke the camera remotely; the captured environment images are stored only on the cloud platform and, in general, are used only for identifying the action intention of the target person (in special cases they may be obtained through legal channels), so that the privacy of the target person is protected to the greatest extent. Step S102 specifically includes the following steps:
s1021, calling a camera of the target wearable device through the cloud platform, and acquiring environment image information shot by the camera at a preset sampling frequency;
and S1022, sorting the environmental image information according to the sampling time to obtain the environmental image time sequence data.
Specifically, the camera of the target wearable device can be invoked through the cloud platform for continuous image acquisition. The sampling frequency can be set to 2 Hz, i.e. 2 samples per second, and a number of consecutively sampled environment image frames (such as 10 frames) can be sorted to obtain the environment image time sequence data, which reflect the action intention (such as the moving direction) of the target person.
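Steps S1021-S1022 amount to sampling frames and ordering them by timestamp. A minimal sketch follows; the 2 Hz rate and 10-frame window come from the example figures above, while the `Frame` type and function names are illustrative assumptions.

```python
from dataclasses import dataclass

SAMPLE_HZ = 2   # two frames per second, per the example above
WINDOW = 10     # frames per time-series window, per the example above

@dataclass
class Frame:
    t: float      # sampling timestamp (seconds)
    image: object # raw frame from the wearable camera

def build_time_series(frames):
    # Sort captured frames by sampling time and keep the latest WINDOW frames,
    # producing the environment image time sequence data.
    ordered = sorted(frames, key=lambda f: f.t)
    return ordered[-WINDOW:]
```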
S103, determining a target area according to the first positioning information, acquiring an action intention recognition model of the target area trained in advance, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person.
Specifically, the positioning information of the target wearable device can determine the current target area of the target wearable device, and then obtain the action intention recognition model corresponding to the target area, so that the action intention of the target person can be recognized. The training process of the action intention recognition model according to the embodiment of the present invention is described below.
Further as an optional embodiment, the vehicle navigation method further includes a step of pre-training an action intention recognition model, which specifically includes:
a1, respectively acquiring a plurality of environmental sample time sequence data shot by wearable equipment carried by a tester when the tester moves in a plurality of preset areas, and determining action intention labels of the environmental sample time sequence data according to the moving direction of the tester;
a2, constructing a training sample set according to the environment sample time sequence data and the corresponding action intention labels;
A3, inputting the training sample set into a pre-constructed bidirectional recurrent neural network for training, and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model;
A4, determining area labels of the action intention recognition models according to the preset areas to obtain action intention recognition models with a plurality of different area labels;
the bidirectional recurrent neural network comprises an input layer, a forward hidden layer, a reverse hidden layer and an output layer.
Specifically, when the training sample set is constructed, environmental sample time sequence data are formed from the environmental images shot by the wearable device carried by a tester moving in different areas. Collecting a sufficient quantity of data, covering multiple places within each area and different movement directions, yields the training sample set.
The embodiment of the invention trains the action intention recognition model with a bidirectional recurrent neural network (BRNN). The basic idea of a BRNN is to use two recurrent neural networks (RNNs), one for the forward direction and one for the reverse direction of each training sequence, to compute hidden states, and to integrate the results through an output layer. Because the environmental sample time sequence data characterize the tester's movement direction from both the forward and the reverse direction, the bidirectional recurrent neural network can learn the data more thoroughly, improving the accuracy of model recognition.
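The bidirectional idea just described, two recurrent passes whose hidden states are merged per time step, can be sketched as follows. This is a toy illustration with scalar inputs and a one-dimensional hidden state; the weights `w_x`, `w_h` and the function names are assumptions, not part of the disclosure.

```python
import math

def rnn_pass(xs, w_x, w_h):
    # Elman-style recurrence: h_t = tanh(w_x * x_t + w_h * h_{t-1}).
    h, states = 0.0, []
    for x in xs:
        h = math.tanh(w_x * x + w_h * h)
        states.append(h)
    return states

def birnn_features(xs, w_x=0.5, w_h=0.3):
    forward = rnn_pass(xs, w_x, w_h)                   # forward hidden states
    backward = rnn_pass(list(reversed(xs)), w_x, w_h)  # reverse pass on reversed input
    aligned = list(reversed(backward))                 # re-reverse to align time steps
    # Per-step concatenation of forward and aligned backward states;
    # an output layer would map each pair to an intention prediction.
    return [(f, b) for f, b in zip(forward, aligned)]
```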
After the action intention recognition models are trained, region labels are assigned according to the regions corresponding to each model, yielding the required action intention recognition models with a plurality of different region labels.
Further as an optional implementation manner, the step A3 of inputting the training sample set into a pre-constructed bidirectional recurrent neural network for training and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model specifically includes:
a31, inputting the environmental sample time sequence data into an input layer, and calculating to obtain a first hidden state vector through a forward hidden layer;
A32, inputting the environmental sample time sequence data into the input layer in reverse order, and calculating a second hidden state vector through the reverse hidden layer;
a33, performing reverse order processing on the second hidden state vector to obtain a third hidden state vector, and performing splicing processing on the first hidden state vector and the third hidden state vector to obtain a fourth hidden state vector;
a34, inputting the fourth hidden state vector into an output layer, and outputting to obtain an action intention prediction result;
A35, determining a loss value of the bidirectional recurrent neural network according to the action intention prediction result and the action intention label;
A36, updating model parameters of the bidirectional recurrent neural network according to the loss value, and returning to the step of inputting the environmental sample time sequence data to the input layer;
a37, stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining a trained action intention recognition model.
Specifically, after data in the training sample set are input into the initialized bidirectional recurrent neural network, the recognition result output by the model, i.e. the action intention prediction result, is obtained through the calculation and splicing of the forward and reverse hidden state vectors; the accuracy of model recognition can then be evaluated against the action intention label, and the parameters of the model updated accordingly. For the action intention recognition model, the accuracy of a single prediction can be measured by a loss function, which is defined on a single training sample and measures its prediction error, determined from the sample's label and the model's prediction for it. In actual training, a training set contains many samples, so a cost function, defined over the whole training set as the average of the prediction errors of all samples, is generally adopted to measure the overall error and better reflect the model's predictive performance. For a general machine learning model, the cost function plus a regularization term measuring model complexity can serve as the training objective function, from which the loss value over the whole training set is obtained. Many common loss functions exist, such as the 0-1 loss, squared loss, absolute loss, logarithmic loss, and cross-entropy loss; any of them can be used as the loss function of a machine learning model and they are not described in detail here.
In the embodiment of the invention, one of these loss functions can be selected to determine the training loss value. Based on the loss value, the parameters of the model are updated with the back-propagation algorithm, and after several rounds of iteration the trained action intention recognition model is obtained. Specifically, the number of iteration rounds may be preset, or training may be considered complete when the model meets the accuracy requirement on a test set.
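The stopping rule of step a37 can be sketched as a generic training loop; the one-parameter least-squares model below is a hypothetical stand-in for the bidirectional recurrent network and its back-propagation updates, not the patent's actual model.

```python
# Training stops when the loss reaches a preset first threshold or the
# iteration count reaches a preset second threshold, whichever comes first.

def train(params, grad_fn, loss_fn, data,
          loss_threshold=0.05, max_iters=1000, lr=0.1):
    for it in range(1, max_iters + 1):
        loss = loss_fn(params, data)
        if loss <= loss_threshold:            # first threshold: loss target met
            return params, it, loss
        params = params - lr * grad_fn(params, data)  # gradient (back-propagation) update
    # second threshold: iteration cap reached
    return params, max_iters, loss_fn(params, data)

# Toy stand-in model: fit w so that w*x matches y, with squared-error loss.
data = (2.0, 6.0)                                    # hypothetical (x, y) pair
loss_fn = lambda w, d: (w * d[0] - d[1]) ** 2
grad_fn = lambda w, d: 2 * d[0] * (w * d[0] - d[1])
w, iters, final_loss = train(0.0, grad_fn, loss_fn, data)
print(w, iters, final_loss)  # w approaches 3.0; training stops once loss <= 0.05
```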
Further as an optional implementation manner, the step of determining the target area according to the first positioning information and acquiring the action intention recognition model of the pre-trained target area specifically includes:
and determining a corresponding target area according to the first positioning information, matching the target area with the area tag, and determining an action intention recognition model corresponding to the target area.
Specifically, the target area where the target person is currently located is determined according to the first positioning information of the target wearable device, and the target area is then matched with the corresponding area label to obtain the action intention recognition model of that area. The environmental image time sequence data to be recognized is input into this action intention recognition model to recognize the action intention of the target person.
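A minimal sketch of this matching step, assuming a hypothetical point-in-box region test and a plain dictionary as the registry of per-area models (the patent does not specify how areas are delimited or how the trained models are stored):

```python
# Hypothetical area tags mapped to their pre-trained recognition models;
# the strings stand in for actual trained model objects.
REGION_MODELS = {
    "mall_district": "model_mall",
    "residential": "model_residential",
}

def region_of(lat, lon):
    """Toy area determination from the first positioning information:
    a single bounding box stands in for real region boundaries."""
    if 23.10 <= lat <= 23.20 and 113.20 <= lon <= 113.30:
        return "mall_district"
    return "residential"

def model_for(lat, lon):
    """Match the target area with its area tag and return the
    action intention recognition model trained for that area."""
    return REGION_MODELS[region_of(lat, lon)]

print(model_for(23.15, 113.25))  # point inside the box -> "model_mall"
```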
S104, adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route, and sending the second navigation route to the target vehicle.
Specifically, according to the action intention of the target person, the action route of the target person over a period of time can be predicted. From this route, the actual position of the target person at the moment the target vehicle reaches the location corresponding to the first positioning information can be predicted, so the first navigation route can be adjusted in real time according to that target position, and a second navigation route is generated and pushed to the target vehicle, enabling the owner of the target vehicle to find the target person conveniently and quickly.
Further, as an optional implementation manner, the step of adjusting the first navigation route according to the action intention recognition result to obtain the second navigation route specifically includes:
s1041, predicting an action route of the target person according to the action intention recognition result and the first positioning information;
s1042, predicting a first running duration of a target vehicle according to a first navigation route, and predicting a target position of a target person according to the first running duration and an action route;
s1043, adjusting the navigation end point of the first navigation route according to the target position to obtain a second navigation route.
Specifically, the position corresponding to the first positioning information is taken as the action starting point of the target person; once the action intention of the target person is determined, the action route of the target person over a period of time (such as 1 minute) can be predicted. It should be noted that, since acquisition of the environmental image time sequence data continues throughout vehicle navigation, the action route can be predicted again in the next period from the newly located action starting point and the newly recognized action intention.
According to the first navigation route and the current real-time road and vehicle conditions, the driving duration for the target vehicle to reach the position corresponding to the first positioning information can be predicted; combining the previously obtained action route with the average moving speed of the target person, the target position of the target person after that duration has elapsed can then be calculated.

The navigation end point of the first navigation route is adjusted according to the predicted target position, and the second navigation route is then obtained by performing path planning again.
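Steps S1041–S1043 can be sketched as follows, under the simplifying assumptions that the predicted action route is a straight heading at the person's average speed, positions are planar (x, y) coordinates in metres, and re-planning is reduced to replacing the final waypoint; a real system would re-run path planning on the road network instead. All coordinates, speeds, and durations below are hypothetical.

```python
def predict_target_position(start, heading, avg_speed, drive_duration):
    """S1042: advance the pedestrian along the predicted action route
    for the predicted first running duration of the vehicle."""
    dx, dy = heading
    return (start[0] + dx * avg_speed * drive_duration,
            start[1] + dy * avg_speed * drive_duration)

def adjust_route(first_route, target_position):
    """S1043: replace the navigation end point of the first navigation
    route to obtain the second navigation route (re-planning stubbed out)."""
    return first_route[:-1] + [target_position]

first_route = [(0.0, 0.0), (500.0, 0.0), (500.0, 400.0)]  # vehicle waypoints
person_start = (500.0, 400.0)        # position from the first positioning info
new_end = predict_target_position(person_start, heading=(0.0, 1.0),
                                  avg_speed=1.5,        # m/s, assumed walking pace
                                  drive_duration=60.0)  # s, predicted running duration
second_route = adjust_route(first_route, new_end)
print(second_route[-1])  # new navigation end point: (500.0, 490.0)
```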
The method steps of the embodiments of the present invention are described above. It can be seen that, after the initial navigation route is generated according to the positioning information, the environmental images captured by the target wearable device are acquired in real time to form time sequence data, from which the action intention of the target person is recognized, and the initial navigation route can be adjusted in real time accordingly. Thus, even if the initial positioning information of the target person deviates or the target person keeps moving, the target vehicle can dynamically adjust its driving route while driving, which improves driving efficiency and navigation accuracy and makes it easier for the vehicle owner to quickly find the person carrying the wearable device.
Referring to fig. 2, an embodiment of the present invention provides a wearable device-based vehicle navigation system, including:
the navigation route generation module is used for acquiring first positioning information of the target wearable equipment and second positioning information of the target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle;
the environment image acquisition module is used for acquiring environment image information shot by the target wearable equipment and determining environment image time sequence data according to the environment image information;
the action intention recognition module is used for determining a target area according to the first positioning information, acquiring an action intention recognition model of the target area trained in advance, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person;
the navigation route adjustment module is used for adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route and sending the second navigation route to the target vehicle;
the target person carries the target wearable device, and the target wearable device and the target vehicle are mutually bound.
The content of the method embodiment is applicable to this system embodiment; the functions specifically realized by the system embodiment are the same as those of the method embodiment, and the beneficial effects achieved are likewise the same as those of the method embodiment.
Referring to fig. 3, an embodiment of the present invention provides a vehicle navigation apparatus based on a wearable device, including:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement a wearable device-based vehicle navigation method as described above.
The content of the method embodiment is applicable to this apparatus embodiment; the functions specifically realized by the apparatus embodiment are the same as those of the method embodiment, and the beneficial effects achieved are likewise the same as those of the method embodiment.
The embodiment of the invention also provides a computer-readable storage medium in which a processor-executable program is stored, which when executed by a processor is used to perform the above-described wearable device-based vehicle navigation method.
The computer-readable storage medium of the embodiment of the invention can perform the above wearable device-based vehicle navigation method, can perform any combination of the implementation steps of the method embodiments, and thus has the corresponding functions and beneficial effects of the method.
Embodiments of the present invention also disclose a computer program product or computer program comprising computer instructions stored in a computer readable storage medium. The computer instructions may be read from a computer-readable storage medium by a processor of a computer device, and executed by the processor, to cause the computer device to perform the method shown in fig. 1.
In some alternative embodiments, the functions/acts noted in the block diagrams may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Furthermore, the embodiments presented and described in the flowcharts of the present invention are provided by way of example in order to provide a more thorough understanding of the technology. The disclosed methods are not limited to the operations and logic flows presented herein. Alternative embodiments are contemplated in which the order of various operations is changed, and in which sub-operations described as part of a larger operation are performed independently.
Furthermore, while the present invention has been described in the context of functional modules, it should be appreciated that, unless otherwise indicated, one or more of the functions and/or features described above may be integrated in a single physical device and/or software module or one or more of the functions and/or features may be implemented in separate physical devices or software modules. It will also be appreciated that a detailed discussion of the actual implementation of each module is not necessary to an understanding of the present invention. Rather, the actual implementation of the various functional modules in the apparatus disclosed herein will be apparent to those skilled in the art from consideration of their attributes, functions and internal relationships. Accordingly, one of ordinary skill in the art can implement the invention as set forth in the claims without undue experimentation. It is also to be understood that the specific concepts disclosed are merely illustrative and are not intended to be limiting upon the scope of the invention, which is to be defined in the appended claims and their full scope of equivalents.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present invention. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a Read-Only Memory (ROM), an Erasable Programmable Read-Only Memory (EPROM or flash memory), an optical fiber device, and a portable Compact Disc Read-Only Memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium upon which the program is printed, since the program may be electronically captured, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques, each well known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application-specific integrated circuits having suitable combinational logic gates, programmable gate arrays (PGAs), field-programmable gate arrays (FPGAs), and the like.
In the foregoing description of the present specification, reference to the terms "one embodiment/example", "another embodiment/example", "certain embodiments/examples", and the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic representations of these terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the present invention have been shown and described, it will be understood by those of ordinary skill in the art that: many changes, modifications, substitutions and variations may be made to the embodiments without departing from the spirit and principles of the invention, the scope of which is defined by the claims and their equivalents.
While the preferred embodiment of the present invention has been described in detail, the present invention is not limited to the above embodiments, and various equivalent modifications and substitutions can be made by those skilled in the art without departing from the spirit of the present invention, and these equivalent modifications and substitutions are intended to be included in the scope of the present invention as defined in the appended claims.

Claims (10)

1. A vehicle navigation method based on wearable equipment, which is characterized by comprising the following steps:
acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle;
acquiring environment image information shot by the target wearable equipment, and determining environment image time sequence data according to the environment image information;
determining a target area according to the first positioning information, acquiring a pre-trained action intention recognition model of the target area, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person;
adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route, and sending the second navigation route to the target vehicle;
the target person carries the target wearable device, and the target wearable device and the target vehicle are mutually bound.
2. The method for navigating a vehicle based on a wearable device according to claim 1, wherein the step of acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and transmitting the first navigation route to the target vehicle specifically comprises:
acquiring first positioning information of the target wearable device, second positioning information of the target vehicle and real-time road condition information through a cloud platform;
determining a navigation starting point position according to the second positioning information, and determining a navigation end point position according to the first positioning information;
and planning a path according to the navigation starting point position, the navigation end point position and the real-time road condition information to obtain the first navigation route, and sending the first navigation route to the target vehicle through the cloud platform.
3. The method for navigating a vehicle based on a wearable device according to claim 1, wherein the step of acquiring the environmental image information captured by the target wearable device and determining the environmental image time series data according to the environmental image information specifically comprises:
invoking a camera of the target wearable device through a cloud platform, and acquiring the environment image information shot by the camera at a preset sampling frequency;
and sequencing the environmental image information according to the sampling time to obtain the environmental image time sequence data.
4. The wearable device-based vehicle navigation method of claim 1, further comprising the step of pre-training an action intention recognition model, comprising:
respectively acquiring a plurality of environmental sample time sequence data shot by wearable equipment carried by a tester when the tester moves in a plurality of preset areas, and determining action intention labels of the environmental sample time sequence data according to the movement direction of the tester;
constructing a training sample set according to the environment sample time sequence data and the corresponding action intention labels;
inputting the training sample set into a pre-constructed bidirectional recurrent neural network for training, and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model;
determining the region labels of the action intention recognition models according to the preset regions to obtain the action intention recognition models of a plurality of different region labels;
the bidirectional recurrent neural network comprises an input layer, a forward hidden layer, a reverse hidden layer and an output layer.
5. The method for navigating a vehicle based on a wearable device according to claim 4, wherein the step of inputting the training sample set into a pre-constructed bidirectional recurrent neural network for training, and optimizing model parameters of the bidirectional recurrent neural network to obtain a trained action intention recognition model specifically comprises the following steps:
inputting the environmental sample time sequence data to the input layer, and calculating to obtain a first hidden state vector through the forward hidden layer;
inputting the time sequence data of the environmental samples into the input layer in a reverse order, and calculating to obtain a second hidden state vector through the reverse hidden layer;
performing reverse order processing on the second hidden state vector to obtain a third hidden state vector, and performing splicing processing on the first hidden state vector and the third hidden state vector to obtain a fourth hidden state vector;
inputting the fourth hidden state vector to the output layer, and outputting to obtain an action intention prediction result;
determining a loss value of the bidirectional recurrent neural network according to the action intention prediction result and the action intention label;
updating model parameters of the bidirectional recurrent neural network according to the loss value, and returning to the step of inputting the environment sample time sequence data to the input layer;
and stopping training when the loss value reaches a preset first threshold value or the iteration number reaches a preset second threshold value, and obtaining the trained action intention recognition model.
6. The method for navigating a vehicle based on a wearable device according to claim 4, wherein the step of determining a target area according to the first positioning information and acquiring a pre-trained action intention recognition model of the target area comprises the following steps:
and determining a corresponding target area according to the first positioning information, matching the target area with the area tag, and determining the action intention recognition model corresponding to the target area.
7. The method for navigating a vehicle based on a wearable device according to any one of claims 1 to 6, wherein the step of adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route specifically comprises:
predicting an action route of the target person according to the action intention recognition result and the first positioning information;
predicting a first running duration of the target vehicle according to the first navigation route, and predicting a target position of the target person according to the first running duration and the action route;
and adjusting the navigation end point of the first navigation route according to the target position to obtain the second navigation route.
8. A wearable device-based vehicle navigation system, comprising:
the navigation route generation module is used for acquiring first positioning information of a target wearable device and second positioning information of a target vehicle, generating a first navigation route according to the first positioning information and the second positioning information, and sending the first navigation route to the target vehicle;
the environment image acquisition module is used for acquiring environment image information shot by the target wearable equipment and determining environment image time sequence data according to the environment image information;
the action intention recognition module is used for determining a target area according to the first positioning information, acquiring an action intention recognition model of the target area trained in advance, and further inputting the environment image time sequence data into the action intention recognition model to obtain an action intention recognition result of a target person;
the navigation route adjustment module is used for adjusting the first navigation route according to the action intention recognition result to obtain a second navigation route and sending the second navigation route to the target vehicle;
the target person carries the target wearable device, and the target wearable device and the target vehicle are mutually bound.
9. A wearable device-based vehicle navigation apparatus, comprising:
at least one processor;
at least one memory for storing at least one program;
when the at least one program is executed by the at least one processor, the at least one processor is caused to implement a wearable device-based vehicle navigation method as claimed in any one of claims 1 to 7.
10. A computer-readable storage medium in which a processor-executable program is stored, characterized in that the processor-executable program, when executed by a processor, is for performing a wearable device-based vehicle navigation method as claimed in any one of claims 1 to 7.
CN202310395563.4A 2023-04-13 2023-04-13 Vehicle navigation method, system, device and storage medium based on wearable equipment Pending CN116399347A (en)

Publications (1)

Publication Number Publication Date
CN116399347A true CN116399347A (en) 2023-07-07

Family

ID=87007204



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination