CN117213513A - Pedestrian navigation system and path planning method based on environment perception and human kinematics - Google Patents

Pedestrian navigation system and path planning method based on environment perception and human kinematics

Info

Publication number
CN117213513A
CN117213513A · Application CN202311123710.9A
Authority
CN
China
Prior art keywords
environment
information
navigation system
gait
map
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311123710.9A
Other languages
Chinese (zh)
Inventor
周慧
滕洪璟
那清权
桂梦凡
袁海磊
唐晟铮
翟睿雯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority to CN202311123710.9A
Publication of CN117213513A
Legal status: Pending

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Abstract

The application discloses a pedestrian navigation system and a path planning method based on environment perception and human kinematics. The pedestrian navigation system comprises an environment information acquisition module, a positioning module, a gait planning module and a motion planning module. The path planning method comprises the following steps: establishing a Euclidean Signed Distance Field (ESDF) map through environment perception, establishing a continuous motion model based on the linear inverted pendulum (LIP) model, generating gait points by linear foot placement control and solving the LIP model, generating a safe trajectory through a search algorithm, and guiding the pedestrian to walk along the optimal gait trajectory. The pedestrian navigation system and method can perceive rich environment information, process dynamic information in the environment and take human kinematic constraints into account, so as to plan safe, barrier-free walking trajectories suitable for people with visual or cognitive impairments and realize autonomous, safe guidance of pedestrians.

Description

Pedestrian navigation system and path planning method based on environment perception and human kinematics
Technical Field
The application relates to the technical field of pedestrian positioning and autonomous navigation, in particular to a pedestrian navigation system and a path planning method based on environment perception and human kinematics.
Background
For pedestrians with visual impairment, difficulty in perceiving the environment seriously affects daily life. Visually impaired people need tools that can sense the surrounding environment, yet most guide tools in use are simple devices such as white canes with very limited functions. As society develops rapidly, a simple cane can no longer cope with today's complex environments, and biological guidance represented by guide dogs has never been widely adopted because of high training cost, long training periods and limited service life.
Alzheimer's disease is manifested by progressively severe cognitive impairment (impairment of memory, learning, attention, spatial cognition and problem-solving ability). These symptoms make it difficult for patients to localize themselves and navigate, so they easily become lost. Because the symptoms are not outwardly obvious, rescuers have difficulty judging a patient's state and condition, and effective rescue is therefore hard to achieve.
In view of these problems, and with intelligent devices becoming increasingly mature, an autonomous navigation system based on environment perception and human kinematic constraints is a better choice for supporting the travel and daily life of people with visual or cognitive impairments.
Intelligent navigation devices with relatively complete functions can currently be divided into wearable navigation devices, handheld navigation devices, navigation systems based on intelligent terminals, mobile guide robots and the like. These systems add various sensors and computing platforms to traditional guidance aids and provide road-surface information to pedestrians, but they cannot provide specific navigation information. Wearable navigation devices are mounted on jackets, glasses, backpacks or shoes; they indicate direction through voice prompts to the left and right ears, which tires the user, cannot give a physical sense of traction, and adds considerable weight. Handheld navigators generate vibration at the wrist and thumb to encode distance, with stronger vibration at closer range; the pedestrian avoids obstacles by continuously rotating the wrist to scan the surroundings, but this places a considerable physical burden on the user.
The precondition for an autonomous navigation system to adapt to complex unknown environments is the ability to perceive and explore the environment. Current navigation systems are still at a relatively primitive stage of environment perception and cannot process rich environmental and dynamic information. In addition, visually impaired people need to travel through dynamically changing, crowded environments, so path planning capability is also a precondition for autonomous pedestrian guidance, yet current navigation systems and devices lack fast and stable path planning.
Disclosure of Invention
In order to solve the above problems, the application provides a pedestrian navigation system and a path planning method based on environment perception and human kinematics, which can process the rich environment information perceived, handle dynamic information in the environment, take human kinematic constraints into account, plan a safe, barrier-free walking scheme suitable for visually impaired people, and realize autonomous, safe guidance of pedestrians.
In order to achieve the above purpose, the following technical scheme is adopted. In a first aspect, the present application provides a pedestrian navigation system based on environment perception and human kinematics, comprising:
the environment information acquisition module is used for acquiring three-dimensional information of the environment;
the environment sensing module is used for semantically modeling the environment according to the three-dimensional information to form a composite environment map;
the positioning module is used for obtaining the current pose of the pedestrian and the system;
the gait planning module is used for analyzing the composite environment map to obtain feasible walking path schemes and selecting the optimal scheme among them;
and the motion planning module is used for tracking and controlling the track obtained by the gait planning module and guiding pedestrians to walk.
In a second aspect, the present application provides a path planning method based on environmental awareness and human kinematics, comprising the steps of:
establishing a Euclidean Signed Distance Field (ESDF) map through environment perception;
a continuous motion model is established by adopting a linear inverted pendulum model;
generating gait points by using linear foot placement control (LFPC) and solving the LIP model;
a safe trajectory is generated by a search algorithm.
In a third aspect, the present application provides a navigation system device comprising a mechanical mechanism of the device and an environment information acquisition module and a positioning module arranged on the device, so as to meet the requirements of the intelligent navigation system; the navigation method comprises the steps described above.
In a fourth aspect, the application provides an electronic device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, which, when executed by the processor, perform the steps of the path planning method of the second aspect.
Compared with the prior art, the application has the beneficial effects that:
(1) Through the system's autonomous perception of the environment, the application can plan walking schemes in complex environments and provide intelligent voice and vibration reminders, helping special groups meet the safe-passage needs of domestic and outdoor environments; it can autonomously guide pedestrians to walk and improves the independent living ability of people with visual or cognitive impairments.
(2) The application can perform path planning in dynamic environments, so that pedestrians can walk through the dynamically changing environments of dense crowds; an optimal walking trajectory can be planned quickly and stably, and autonomous guidance of pedestrians is realized by the navigation method under the constraints of the human kinematic model, the dynamic environment, obstacles and the like.
(3) The application has intelligent voice and vibration interaction functions and can receive a pedestrian's service instruction and give corresponding feedback. When a dangerous scene is encountered, the system reminds the pedestrian to pay attention to safety through intelligent voice and vibration prompts of varying intensity. To keep the service fluent, the voice interaction module supports accurate speech recognition, multi-round voice dialogue and other functions.
(4) The application adopts environment perception and a navigation mode that takes the human kinematic model into account, which greatly reduces the probability of pedestrian injury and significantly improves the efficiency and safety of indoor and outdoor walking for people with visual or cognitive impairments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application.
Fig. 1 is an overall system block diagram of a multi-sensor-based intelligent navigation system according to embodiment 1 of the present application.
Fig. 2 is a planning flowchart of a navigation method based on an intelligent navigation system according to embodiment 2 of the present application.
Fig. 3 is a schematic diagram of an intelligent navigation system device according to embodiment 3 of the present application.
Detailed Description
In order to make the technical solution, objects and advantages of the present application more apparent, the present application will be described in detail hereinafter with reference to the accompanying drawings, which form a part hereof, and together with the embodiments of the present application serve to illustrate the principles of the present application, and not to limit the scope of the application.
Example 1
The application discloses an intelligent navigation system based on multiple sensors; the overall design of the system is shown in Fig. 1, and the system specifically comprises:
the environment information acquisition module is used for acquiring three-dimensional point cloud information of the environment;
the environment sensing module performs semantic modeling according to the three-dimensional point cloud information observation environment acquired by the environment information acquisition module to form an incremental grid map, each grid in the incremental grid map is provided with semantic information, a region which can pass through in parallel in the environment is obtained according to the semantic information of the map, dynamic barriers are considered, and the motion information of the dynamic barriers is added in a base layer and updated in real time to form an environment composite map. Finally, compressing the three-dimensional map to a bird's eye view angle through an European character distance field for path gait planning; the environment information acquisition module comprises an RGBD depth camera and an RGB camera, and three-dimensional information of the environment is obtained through visual data and depth images acquired by the cameras.
The positioning module fuses the outputs of visual motion estimation, inertial navigation motion estimation and the satellite navigation system to obtain the current pose of the pedestrian and the system;
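The application does not fix a particular fusion scheme; the snippet below is only a minimal sketch of how the three position estimates could be combined by inverse-variance weighting. The function name and variance values are illustrative assumptions; a practical system would more likely use an EKF or factor-graph estimator.

```python
import numpy as np

def fuse_position(estimates):
    """estimates: list of (position_xyz, variance) pairs, e.g. from VO, INS and GNSS."""
    positions = np.array([p for p, _ in estimates], dtype=float)
    weights = np.array([1.0 / v for _, v in estimates], dtype=float)
    # inverse-variance weighted mean of the individual position estimates
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()

# e.g. fuse_position([(vo_xyz, 0.05), (ins_xyz, 0.20), (gnss_xyz, 1.5)])
```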
the gait planning module is used for analyzing by using the environment composite map to obtain a proper walking path scheme, and selecting an optimal global path from all schemes, and the specific method execution flow is described in detail in the example 2.
And the motion planning module is used for tracking and controlling the track obtained by the gait planning module to guide the pedestrians to walk.
Further, the multi-sensor intelligent navigation system disclosed in this embodiment also provides a voice and vibration interaction module, which comprises a microphone array, a voice player, a vibration motor and a voice processing module.
The microphone array, the voice player and the vibration motor are located at the top of the navigation device; they provide an interface for human-machine voice and touch interaction, receive the user's voice information, and play the voice reminder instructions generated during motion, thereby realizing voice reminders.
The voice processing module recognizes the voice information received by the microphone array and assigns a specific navigation task according to the recognition result; in addition, it plays, through the voice player, the voice reminder instructions generated from perceived risk factors and alerts pedestrians through the vibration generated by the vibration motor, thereby improving the walking safety of people with visual or cognitive impairments.
The risk factors during motion are determined from the semantic information of the environment acquired by the environment information acquisition module and from the risk information predicted for dynamic obstacles.
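By way of illustration only, risk determination of this kind could combine a per-class semantic weight with the predicted proximity of dynamic obstacles, as in the hypothetical sketch below; the class names, weights and thresholds are assumptions, not values from the application.

```python
SEMANTIC_RISK = {"sidewalk": 0.0, "crosswalk": 0.4, "road": 0.8, "stairs": 0.6}

def risk_level(semantic_class, nearest_dynamic_obstacle_m, time_to_collision_s):
    risk = SEMANTIC_RISK.get(semantic_class, 0.3)   # unknown classes get a default weight
    if nearest_dynamic_obstacle_m < 1.5:            # a predicted obstacle is very close
        risk = max(risk, 0.7)
    if time_to_collision_s < 2.0:                   # an imminent predicted collision
        risk = max(risk, 0.9)
    return "high" if risk >= 0.7 else "medium" if risk >= 0.4 else "low"

# a high or medium result would trigger the voice and vibration reminders above
```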
Through its autonomous perception of the environment, the navigation system of this embodiment can plan walking schemes in complex environments and provide force feedback, intelligent voice and vibration reminders, helping people with visual or cognitive impairments meet the safe-passage needs of domestic and outdoor environments; it can autonomously guide pedestrians to walk and improves their independent living ability.
The navigation system of this embodiment can perform path planning in dynamic environments, so that pedestrians can pass through the dynamically changing environments of dense crowds; an optimal walking trajectory can be planned quickly and stably, and autonomous guidance of pedestrians is realized by its own navigation method under the constraints of the human kinematic model, the dynamic environment, obstacles and the like.
The navigation system of this embodiment has intelligent voice and vibration interaction functions; the voice and vibration interaction module can receive a pedestrian's service instruction and give corresponding feedback. When a dangerous scene is encountered, the system reminds the pedestrian to pay attention to safety through intelligent voice and vibration prompts of varying intensity. To keep the service fluent, the voice interaction module supports accurate speech recognition, multi-round voice dialogue and other functions.
The navigation system of this embodiment adopts environment perception and a navigation mode that takes the human kinematic model into account, which greatly reduces the probability of injury for people with visual or cognitive impairments and significantly improves the efficiency and safety of their indoor and outdoor walking.
Example 2
In this embodiment, a gait generation algorithm constrained by environment perception and human kinematics is disclosed; its flow chart is shown in Fig. 2, and the method comprises the following steps:
step 1: establishing an European character number distance field (ESDF) map through environment perception;
firstly, based on a current walking scene, according to visual information and an inertial sensor provided by an RGBD depth camera, an RGB camera and an Inertial Measurement Unit (IMU), a three-dimensional point cloud information observation environment obtained by a module forms an incremental grid map, a region which can pass through in the environment is obtained according to occupation information of the map, and motion prediction information of a dynamic obstacle is added into a feasible map by considering the dynamic obstacle and updated in real time to form an environment composite map. Finally, compressing the three-dimensional map to a bird's eye view angle through an European character number distance field (ESDF).
Firstly, an occupied grid map is required to be constructed by using three-dimensional point cloud data detected by a sensor, but because the error of the sensor often affects the generation of wrong obstacle information, a probability occupied grid map is often used, in which for a point, the probability is used for representing the probability that the point is in a Free state, the probability is used for representing the probability that the point is in a grid occupied state, and during movement, the point cloud information read by the sensor is accumulated to update voxels of the state of the probability occupied grid map. However, for planning and obstacle avoidance, the occupation information in the map is insufficient, and information such as obstacle distance, direction and the like is also required. Euclidean Symbol Distance Field (ESDF) is very useful for online motion planning for navigation because it can easily query distance and gradient information of obstacles. It generates the ESDF by grid occupancy map and then calculates the ESDF value for each voxel using a ray-casting algorithm.
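A minimal sketch of these two ingredients is given below: a probabilistic occupancy grid updated by log-odds accumulation, and a brute-force ESDF from which distance and gradient can be queried. The log-odds constants, the 2-D simplification and the brute-force distance computation are illustrative assumptions (the application itself computes ESDF values over the grid with a ray-casting algorithm).

```python
import numpy as np

L_HIT, L_MISS, L_OCC = 0.85, -0.4, 0.0   # assumed log-odds update constants / threshold

class OccupancyESDF2D:
    def __init__(self, shape, resolution=0.1):
        self.logodds = np.zeros(shape)   # 0 = unknown, >0 leans occupied, <0 leans free
        self.res = resolution

    def update(self, hit_cells, miss_cells):
        # accumulate sensor evidence: ray endpoints raise occupancy, traversed cells lower it
        for i, j in hit_cells:
            self.logodds[i, j] += L_HIT
        for i, j in miss_cells:
            self.logodds[i, j] += L_MISS

    def esdf(self):
        # brute-force Euclidean distance (metres) to the nearest occupied cell;
        # real systems propagate these distances incrementally instead
        occ = np.argwhere(self.logodds > L_OCC)
        dist = np.full(self.logodds.shape, np.inf)
        if occ.size == 0:
            return dist
        xs, ys = np.indices(self.logodds.shape)
        for i, j in occ:
            dist = np.minimum(dist, np.hypot(xs - i, ys - j) * self.res)
        return dist

    def distance_and_gradient(self, dist, i, j):
        # finite-difference gradient of the ESDF, used to push plans away from obstacles
        gx = (dist[min(i + 1, dist.shape[0] - 1), j] - dist[max(i - 1, 0), j]) / (2 * self.res)
        gy = (dist[i, min(j + 1, dist.shape[1] - 1)] - dist[i, max(j - 1, 0)]) / (2 * self.res)
        return dist[i, j], (gx, gy)
```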
Step 2: a continuous motion model is established by adopting an LIP model (LIPM, linear inverted pendulum model);
the basic principle of the LIP model is as follows: the linear inverted pendulum consists of one highly fixed mass and two legs of negligible mass, in which model the mass moves at a constant speed in the horizontal direction, while the length of the legs can be adjusted according to a control strategy. This condition is called leg switching, which is a transient event that does not affect particle velocity. It is assumed that there is no dual support phase during the switchover and no foot slipping phenomenon. By mathematical description: in one step of the continuous phase, the system follows the dynamics of an inverted pendulum, and the equation can be established:
wherein g is gravity acceleration, x is a component in the abscissa direction, and h is the body constitution and heart height of a person. The equation can be solved as follows:
wherein t is the time in which,
the LIP model can be extended to a three-dimensional case because the two-axis model can be decoupled, the model can be written as follows:
step 3: generating gait points and solving an LIP model by using linear gait point placement control (LFPC, linear foot placement control);
the body position of the next step is calculated from the known controller parameters and the speed at which the body descends, and the position at which the next foot lands is predicted from a linear function, which is related to the body speed. The landing points with the x axis and the y axis as directions can be respectively designed:
wherein,representing the placement of the legs in the x-axis direction,/->Representing the placement position of the legs in the y-axis direction, a w 、a l And b is a controller parameter, v x 、v y Is the velocity component in the x, y direction.
When considering the walking direction, the walking angle and the positive x-axis direction are set to be theta, and the walking gait can be composed of the triplet (d l ,d w θ), where d l Represent step size, d w Representing the step width. Therefore, the original controller needs to be modified by using the rotation matrix, and the modified controller is as follows:
in this controller we can select the desired step period T and select b as a parameter. The above becomes a three-parameter (a l ,a w θ) controller, each parameter determining (d) l ,d w θ), where d l Represent step size, d w Representing the step width, θ represents the angle between the walking direction and the positive x-axis direction. Through the controller, we can conveniently adjust the step length, step width and walking direction of each step of walking.
Step 4: generating a safe track through a search algorithm;
and determining a starting point and a target point in the ESDF map according to the current starting point and the target point of the human body. Based on the initial point pose, the target point pose and the obstacle information in the current driving scene, the motion primitive is expanded through the LFPC, and a guiding path is established by adopting a search algorithm of human body kinematics constraint.
Starting from a starting point based on the established guiding path, and circularly expanding the effective neighbor nodes corresponding to the starting point based on the LFPCs. Determining father node in the effective nodes to be expanded according to rules, expanding the effective neighbor node corresponding to the father node again, and the like, circularly expanding the neighbor nodes, wherein after determining father node each time, calculating human body centroid track by using LFPC as a new node for outward expansion, adding the new node into the set of the nodes to be expanded, determining path length by calculating cost function of the LFPC track, circularly until the shortest effective path exists between the current effective neighbor node and the target point, determining path planning success, outputting all path states (paths on X-Y-YAW plane) generated by LFPC from the starting point to the target point, taking the planned path as global reference path,
and performing collision detection on the target initial position and the target position, namely determining whether the initial position and the target position have obstacles to cause the pedestrian to be unable to walk or not, and obtaining a collision detection result. And if the collision detection result shows that the starting point pose and the target point pose have no collision condition, determining that the starting point pose and the target point pose are effective.
Example 3
In this embodiment, a navigation system device is disclosed, which comprises a mechanical mechanism of the device and an environment information acquisition module and a positioning module arranged on the device, and which carries out the steps of the navigation method described above. Specifically:
as shown in fig. 3, the navigation system device mainly includes a loading platform, an environmental information acquisition module and a positioning module, which are disposed on the device.
The loading platform is composed of a 3D-printed structure.
The environment information acquisition module comprises a depth camera 1 located at the head of the loading platform; a screw passes through a through hole in the center of the front loading plate and engages a threaded hole at the bottom of the depth camera, fixing the depth camera to the front loading plate, where it is used to acquire image information of the environment.
The positioning module comprises a satellite navigation system and an inertial measurement unit; the inertial measurement unit acquires the speed, acceleration and direction of the device's motion and can be used for positioning, while the satellite navigation system acquires the position of the device.
The device also includes a high-performance edge computing platform 2 and the electronic device described in Embodiment 4.
Example 4
In this embodiment, an electronic device is disclosed that comprises a memory, a processor and computer instructions stored on the memory and executable on the processor; when executed by the processor, the instructions perform the steps of the navigation method based on the intelligent navigation system disclosed in Embodiment 2.
These computer program instructions may be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart or flows of embodiment 2.
The above examples are only specific embodiments of the present application for illustrating the technical solution of the present application, but not for limiting the scope of the present application, and although the present application has been described in detail with reference to the foregoing examples, it will be understood by those skilled in the art that the present application is not limited thereto; any person skilled in the art may modify or easily conceive of the technical solution described in the foregoing embodiments, or perform equivalent substitution of some of the technical features, while remaining within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A pedestrian navigation system based on environmental awareness and human kinematics, comprising:
the environment information acquisition module is used for acquiring three-dimensional information of the environment;
the environment sensing module is used for semantically modeling the environment according to the three-dimensional information to form a composite environment map;
the positioning module is used for obtaining the current pose of the pedestrian and the system;
the gait planning module is used for analyzing the composite environment map to obtain feasible walking path schemes and selecting the optimal scheme among them;
and the motion planning module is used for tracking and controlling the track obtained by the gait planning module and guiding pedestrians to walk.
2. The pedestrian navigation system based on environment perception and human kinematics of claim 1, further comprising a voice and vibration interaction module configured to acquire voice information, recognize the voice information, and determine the navigation task of the system based on the recognition result.
3. The pedestrian navigation system based on environment perception and human kinematics of claim 2, wherein the voice and vibration interaction module is further configured to generate voice prompts and vibration to alert the pedestrian to danger information present during navigation.
4. A path planning method based on the pedestrian navigation system as set forth in claim 1, comprising the steps of:
establishing a Euclidean Signed Distance Field map through environment perception;
a continuous motion model is established by adopting a linear inverted pendulum model;
generating gait points by using linear foot placement control and solving the LIP model;
a safe trajectory is generated by a search algorithm.
5. The method of claim 4, wherein the Euclidean Signed Distance Field map is established through environment perception as follows:
forming an incremental grid map from the visual information provided by the RGBD depth camera and the RGB camera, the inertial information provided by the inertial measurement unit, and the three-dimensional point cloud information acquired by the environment information acquisition module; obtaining traversable regions in the environment from the occupancy information of the map; taking dynamic obstacles into account, adding their motion prediction information to the traversable map and updating it in real time to form a composite environment map; and finally compressing the three-dimensional map to a bird's-eye view through the Euclidean Signed Distance Field.
6. The method of claim 5, wherein the continuous motion model is established with the LIP model as follows:
within one step, during the continuous phase the system follows the dynamics of an inverted pendulum, giving the equation \ddot{x} = \frac{g}{h}x;
wherein g is the gravitational acceleration, x is the component of the center-of-mass position in the abscissa direction, and h is the height of the human center of mass; the equation yields the solution x(t) = x(0)\cosh(t/T_c) + T_c\,\dot{x}(0)\sinh(t/T_c), \dot{x}(t) = \frac{x(0)}{T_c}\sinh(t/T_c) + \dot{x}(0)\cosh(t/T_c);
wherein t is time and T_c = \sqrt{h/g};
the LIP model is extended to the three-dimensional case and written as \ddot{x} = \frac{g}{h}x, \quad \ddot{y} = \frac{g}{h}y.
7. The method of claim 6, wherein the gait points are generated and the LIP model is solved using LFPC, specifically as follows:
predicting the landing position of the next foot using a linear function related to the body velocity, based on the known controller parameters and the velocity of the body as it falls; designing the foot placements in the x-axis and y-axis directions respectively as linear functions of the corresponding velocity components;
wherein a_w, a_l and b are controller parameters, and v_x and v_y are the velocity components in the x and y directions;
when the walking direction is considered, the walking gait is described by the triplet (d_l, d_w, θ), where d_l denotes the step length and d_w the step width; the controller is modified with a rotation matrix so that the foot placements are expressed along the walking direction;
in the controller, a desired step period T is selected and b is chosen as a fixed parameter; the controller becomes a three-parameter (a_l, a_w, θ) controller, each parameter determining the corresponding gait parameter in (d_l, d_w, θ); the step length, step width and walking direction of each step are adjusted through the controller.
8. The method of claim 7, wherein the safe trajectory is generated by a search algorithm, specifically as follows:
determining the start point and the goal point in the ESDF map according to the current start point and goal point of the human body; based on the start pose, the goal pose and the obstacle information in the current walking scene, expanding motion primitives through the LFPC and establishing a guiding path using a search algorithm constrained by human kinematics;
and taking the planned path as the global reference path, which provides a reference for the pedestrian walking guidance process.
9. A pedestrian navigation device based on the pedestrian navigation system of claim 1, comprising a mechanical mechanism of the device, an environmental information acquisition module and a positioning module arranged on the device.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 5-8 when the program is executed.
CN202311123710.9A 2023-08-31 2023-08-31 Pedestrian navigation system and path planning method based on environment perception and human kinematics Pending CN117213513A (en)

Priority Applications (1)

Application Number: CN202311123710.9A · Publication: CN117213513A (en) · Priority Date: 2023-08-31 · Filing Date: 2023-08-31 · Title: Pedestrian navigation system and path planning method based on environment perception and human kinematics

Applications Claiming Priority (1)

Application Number: CN202311123710.9A · Publication: CN117213513A (en) · Priority Date: 2023-08-31 · Filing Date: 2023-08-31 · Title: Pedestrian navigation system and path planning method based on environment perception and human kinematics

Publications (1)

Publication Number: CN117213513A · Publication Date: 2023-12-12

Family

ID=89050358

Family Applications (1)

Application Number: CN202311123710.9A · Status: Pending · Publication: CN117213513A (en) · Priority Date: 2023-08-31 · Filing Date: 2023-08-31 · Title: Pedestrian navigation system and path planning method based on environment perception and human kinematics

Country Status (1)

Country: CN (1) · CN117213513A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination