CN116659518B - Autonomous navigation method, device, terminal and medium for intelligent wheelchair - Google Patents


Publication number
CN116659518B
Authority
CN
China
Prior art keywords
wheelchair
node
point set
type
feature
Prior art date
Legal status
Active
Application number
CN202310949830.8A
Other languages
Chinese (zh)
Other versions
CN116659518A (en)
Inventor
胡方扬
魏彦兆
唐海波
Current Assignee
Xiaozhou Technology Co ltd
Original Assignee
Xiaozhou Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiaozhou Technology Co ltd filed Critical Xiaozhou Technology Co ltd
Priority to CN202310949830.8A priority Critical patent/CN116659518B/en
Publication of CN116659518A publication Critical patent/CN116659518A/en
Application granted granted Critical
Publication of CN116659518B publication Critical patent/CN116659518B/en

Classifications

    • G — skip — PHYSICS
    • G01 — MEASURING; TESTING
    • G01C — MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 — Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20 — Instruments for performing navigational calculations
    • G01C21/206 — Instruments for performing navigational calculations specially adapted for indoor navigation

Abstract

The invention discloses an intelligent wheelchair autonomous navigation method, device, terminal and medium. The type of each observation node is judged according to deviation data, and the wheelchair is controlled with a different method for each observation node type. This addresses the problem of environmental adaptability, avoids path deviation and positioning failure caused by environmental change, automatically completes the judgment of environmental changes and events, improves safety and reliability during autonomous navigation, and meets the intelligent wheelchair's requirements for high precision and high safety.

Description

Autonomous navigation method, device, terminal and medium for intelligent wheelchair
Technical Field
The invention relates to the technical field of intelligent control, in particular to an intelligent wheelchair autonomous navigation method, device, terminal and medium.
Background
The intelligent wheelchair realizes autonomous navigation mainly through machine vision, SLAM (Simultaneous Localization and Mapping) and path planning technologies.
Existing intelligent wheelchair autonomous navigation methods focus mainly on path planning and obstacle avoidance. When illumination is poor or the environment is complex, machine vision suffers from mismatching and positioning errors; the positioning accuracy of SLAM is likewise limited by sensor precision and environmental texture, and SLAM is difficult to update rapidly when the environment changes, leaving residual positioning error. Such positioning error causes the wheelchair to deviate from the navigation path, or its orientation to deviate from the path direction. Existing wheelchairs therefore cannot adapt to environmental changes during autonomous navigation, so the wheelchair deviates, affecting the safety and reliability of autonomous navigation.
Accordingly, there is a need for improvement and advancement in the art.
Disclosure of Invention
The invention mainly aims to provide an intelligent wheelchair autonomous navigation method, an intelligent wheelchair autonomous navigation device, an intelligent terminal and a computer-readable storage medium, to solve the problem that a wheelchair, being unable to adapt to environmental changes during autonomous navigation, deviates from its path, affecting the safety and reliability of autonomous navigation.
To achieve the above object, a first aspect of the present invention provides an intelligent wheelchair autonomous navigation method, including:
planning a navigation path to a destination based on the space environment of the wheelchair, and arranging a plurality of observation nodes on the navigation path to control the wheelchair to move along the navigation path;
When the wheelchair reaches the observation node:
acquiring a node image, and extracting a characteristic point set of a wheelchair in the node image;
obtaining deviation data of the wheelchair according to the characteristic point set, and judging a first type of the observation node according to the deviation data, wherein the deviation data are used for measuring the degree of deviation of the wheelchair, and the first type includes: a normal node, a correctable node, a node needing reversing and a node needing rotating;
when the first type is a normal node, continuing to control the wheelchair to move along the navigation path; when the first type is a correctable node, correcting the wheelchair and then continuously controlling the wheelchair to move along the navigation path; and when the first type is a node needing reversing or a node needing rotating, acquiring a control measure corresponding to the first type, and adopting the control measure to control the wheelchair to move.
Optionally, the extracting a feature point set of the wheelchair in the node image includes:
detecting a wheelchair in the node image by adopting an image detection model to obtain a boundary frame of the wheelchair;
and extracting the feature point set in the boundary box by adopting a feature point extraction algorithm.
Optionally, the extracting the feature point set in the bounding box by using a feature point extraction algorithm includes:
Extracting feature points in the boundary frame by adopting a scale-invariant feature transformation algorithm to obtain a central feature point set;
and searching nearby feature points by using the AKAZE algorithm by taking the feature points in the central feature point set as the center to obtain the feature point set.
Optionally, obtaining deviation data of the wheelchair according to the feature point set, and determining the first type of the observation node according to the deviation data includes:
acquiring a characteristic point set of a previous observation node;
obtaining a characteristic point matching degree and a characteristic point distribution change degree according to the characteristic point set and the characteristic point set of the previous observation node;
based on the three-dimensional coordinates of the feature points, projecting the feature points in the feature point set to a three-dimensional model of a pre-constructed space environment, calculating the projection error of each feature point, and counting all the projection errors to obtain a space projection error;
and obtaining the first type of the observation node according to the feature point matching degree, the feature point distribution change degree and the space projection error.
Optionally, the projecting the feature points in the feature point set onto a three-dimensional model of a pre-constructed spatial environment based on the three-dimensional coordinates of the feature points, calculating a projection error of each feature point, and counting all the projection errors to obtain a spatial projection error, including:
Based on the three-dimensional coordinates of the feature points, projecting the feature points in the feature point set onto the three-dimensional model to obtain a projection point set;
searching the nearest point of the projection points in the projection point set on the three-dimensional model;
calculating the distance between each projection point in the projection point set and the corresponding nearest point to obtain the projection error of the feature point;
and counting the projection errors of all the characteristic points to obtain the space projection errors.
Optionally, the acquiring the control measure corresponding to the first type, and adopting the control measure to control the wheelchair to move includes:
acquiring control measures corresponding to the first type;
displaying the node image to a wheelchair user, and acquiring an electroencephalogram signal of the wheelchair user;
inputting the electroencephalogram signals into a trained network model to obtain control intention, and judging a second type of the observation node according to the control intention;
when the first type and the second type are the same, adopting the control measure to control the wheelchair to move;
otherwise, when the confirmation instruction of the wheelchair user agreeing to execute the control measure is acquired, the control measure is adopted to control the wheelchair to move, and when the confirmation instruction is not acquired, the wheelchair is controlled to pause moving.
Optionally, training the network model includes:
training the network model to obtain a basic model;
and pruning and compressing the basic model to obtain the trained network model.
The second aspect of the present invention provides an intelligent wheelchair autonomous navigation apparatus, wherein the apparatus comprises:
the autonomous control module is used for planning a navigation path to a destination based on the space environment of the wheelchair, and controlling the wheelchair to move along the navigation path;
the observation node module is used for setting a plurality of observation nodes on the navigation path;
the characteristic point set module is used for acquiring a node image and extracting a characteristic point set of the wheelchair in the node image;
the observation node type module is used for obtaining deviation data of the wheelchair according to the characteristic point set, and judging a first type of the observation node according to the deviation data, wherein the deviation data are used for measuring the degree of deviation of the wheelchair, and the first type includes: a normal node, a correctable node, a node needing reversing and a node needing rotating;
the deviation correction module is used for continuously controlling the wheelchair to move along the navigation path when the first type is a normal node; when the first type is a correctable node, correcting the wheelchair and then continuously controlling the wheelchair to move along the navigation path; and when the first type is a node needing reversing or a node needing rotating, acquiring a control measure corresponding to the first type, and adopting the control measure to control the wheelchair to move.
A third aspect of the present invention provides an intelligent wheelchair including a memory, a processor, and an intelligent wheelchair autonomous navigation program stored in the memory and operable on the processor, the intelligent wheelchair autonomous navigation program implementing any one of the steps of the intelligent wheelchair autonomous navigation method when executed by the processor.
A fourth aspect of the present invention provides a computer-readable storage medium, on which an intelligent wheelchair autonomous navigation program is stored, the intelligent wheelchair autonomous navigation program implementing the steps of any one of the above-described intelligent wheelchair autonomous navigation methods when executed by a processor.
From the above, by acquiring an image when the wheelchair reaches an observation node, extracting the wheelchair's feature points in the image, and analyzing those feature points to obtain the wheelchair's deviation data, the method and device can improve the accuracy of judging path-deviation nodes. The type of each observation node is judged according to the deviation data, and the wheelchair is controlled with a different method for each observation node type. This addresses the problem of environmental adaptability, avoids path deviation and positioning failure caused by environmental change, automatically completes the judgment of environmental changes and events, improves the safety and reliability of the wheelchair during autonomous navigation, and meets the intelligent wheelchair's requirements for high precision and high safety.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic flow chart of an autonomous navigation method of an intelligent wheelchair according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of extracting feature point sets;
FIG. 3 is a flow chart diagram of a first type of decision observation node;
FIG. 4 is a flow chart of acquiring a spatial projection error;
FIG. 5 is a flow chart of the wheelchair user confirmation;
fig. 6 is a schematic structural diagram of an autonomous navigation device of an intelligent wheelchair according to an embodiment of the present invention;
fig. 7 is a schematic block diagram of an internal structure of an intelligent wheelchair according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown, it being evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
The intelligent wheelchair is a wheelchair product integrating automatic control and navigation technologies. It can realize functions such as autonomous movement, obstacle avoidance and path planning, greatly improving the mobility, autonomy and quality of life of the disabled. Existing intelligent wheelchair automatic navigation methods realize the autonomous navigation function, but due to the shortcomings of machine vision and SLAM, when the environment is complex or changes greatly, path deviation and orientation deviation can occur, affecting the safety and reliability of autonomous navigation. Therefore, the existing wheelchair cannot adapt to environmental changes during autonomous navigation, and cannot meet the requirements of high precision and high safety of the intelligent wheelchair.
Aiming at the problem that existing intelligent wheelchair automatic navigation methods cannot adapt to environmental changes and therefore deviate, the invention provides an intelligent wheelchair autonomous navigation method: observation nodes are arranged on the planned navigation path, the node image captured when the wheelchair reaches each observation node is analyzed to judge the degree to which the wheelchair deviates from the navigation path, and different control measures are adopted for different degrees of deviation. That is, deviation correction can be performed in response to environmental change, so that autonomous navigation adapts to the environment and its safety and reliability are improved.
Method embodiment
The embodiment of the invention provides an intelligent wheelchair autonomous navigation method, deployed on the wheelchair's control chip, for realizing intelligent control of the wheelchair in a hospital environment. Although this embodiment is described taking a hospital environment as an example, the method is not limited to this application and may also be applied in other spatial environments.
As shown in fig. 1, the present embodiment specifically includes the following steps:
step S100: planning a navigation path to a destination based on the space environment of the wheelchair, and setting a plurality of observation nodes on the navigation path to control the wheelchair to move along the navigation path;
the space environment in this embodiment refers to the indoor space of a hospital. After a wheelchair user selects a destination, planning an optimal navigation path to the destination, setting a plurality of observation nodes in the navigation path, and automatically controlling the wheelchair to move along the navigation path by adopting SLAM technology.
Specifically, indoor 3D environment information of the hospital is acquired in advance with an RGB-D camera, and an accurate indoor SLAM map is built from this 3D environment information using SLAM; the map mainly comprises corridors, doors, room layouts and the like. The SLAM map is then stored in a memory chip of the intelligent wheelchair and updated periodically.
When the wheelchair user selects a destination, factors such as path width, turning radius and gradient are considered, all obstacles and narrow channels are avoided, and a feasible, optimal navigation path to the destination is planned on the SLAM map. A plurality of observation nodes are arranged on the navigation path, and whether the wheelchair has deviated is judged at each observation node, where deviation includes path deviation and orientation deviation.
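The path-planning and node-placement steps above can be sketched with a minimal grid search. The occupancy-grid representation, breadth-first search, and node spacing below are illustrative assumptions (the embodiment does not specify a particular planner), and a real planner would also weigh path width, turning radius and gradient.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search on an occupancy grid (0 = free, 1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set doubling as parent pointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            # Walk the parent pointers back to start, then reverse.
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None

def place_observation_nodes(path, spacing):
    """Drop an observation node every `spacing` cells, always including the goal."""
    nodes = path[::spacing]
    if path[-1] not in nodes:
        nodes.append(path[-1])
    return nodes
```

In practice the grid would be derived from the SLAM map, and the spacing chosen so that each observation node gives the camera a clear view of the wheelchair.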
The wheelchair is then driven to move automatically along the navigation path, e.g., smoothly at a speed of about 0.5 meters per second. When the wheelchair moves, the direction can be continuously finely adjusted according to the video stream collected by the wheelchair, so that severe steering or braking is avoided.
For indoor environments, to enable the wheelchair to navigate and avoid obstacles more accurately during automatic navigation, in one example a number of fixed points are also arranged uniformly on the navigation path, for example with an interval of 3-5 meters between adjacent fixed points. When the wheelchair reaches each fixed point, a camera monitors the surrounding environment, and when an obstacle or a person is detected on the navigation path ahead, the path ahead of that fixed point is re-planned. While the wheelchair moves, the pose of its current position relative to the fixed point is calculated from the video data it collects, using quaternion positioning (a quaternion is a representation of rotation and orientation in three-dimensional space). The video stream around the fixed point is collected by a camera on the wheelchair, and the relative rotation and translation from the last fixed point to the current one are calculated and expressed as a quaternion. Combining this quaternion with the fixed-point coordinates on the map gives the absolute pose of the wheelchair, realizing accurate indoor positioning. With this method the wheelchair can continuously estimate its three-dimensional coordinates in the indoor space and compare them against the SLAM map to realize automatic navigation and obstacle avoidance. This visual-odometry positioning does not rely on GPS or other external signals and is particularly suitable for automatic navigation in indoor environments.
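The quaternion step above, composing the relative rotation and translation estimated by the visual odometer with the fixed-point coordinates on the map, can be sketched as follows. The function names and the (w, x, y, z) component order are assumptions for illustration.

```python
import math

def quat_multiply(q1, q2):
    # Hamilton product of two (w, x, y, z) quaternions: composes rotations.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    )

def rotate_vector(q, v):
    # Rotate 3-vector v by unit quaternion q: q * (0, v) * conj(q).
    w, x, y, z = q
    qv = quat_multiply(q, (0.0, *v))
    conj = (w, -x, -y, -z)
    return quat_multiply(qv, conj)[1:]

def absolute_pose(fixed_point, relative_translation, relative_rotation):
    """Combine a map fixed-point coordinate with the relative motion
    estimated since the last fixed point to get the absolute position."""
    moved = rotate_vector(relative_rotation, relative_translation)
    return tuple(f + m for f, m in zip(fixed_point, moved))
```

For example, a 90-degree rotation about the vertical axis is the quaternion (cos 45°, 0, 0, sin 45°); applying it to the relative translation before adding the fixed-point coordinates yields the wheelchair's absolute position on the map.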
When the wheelchair reaches the observation node, the following steps S200 to S400 are performed to detect a deviation on the observation node and make a deviation correction.
Step S200: acquiring a node image, and extracting a characteristic point set of a wheelchair in the node image;
the node image is an image taken when the wheelchair reaches the observation node, and the wheelchair image needs to be included in the node image.
When shooting node images, a camera with high resolution and wide dynamic range is selected so that high-quality node images are obtained at the observation nodes. For the requirements of SLAM, image recognition and similar algorithms, equipment with a resolution above 8 megapixels and a dynamic range of not less than 120 dB is generally selected. A suitable camera module and sensor are chosen according to the image-processing algorithm and the application scenario: color images provide rich information and suit feature extraction and matching; infrared images are used at night; depth images are used for spatial-information extraction and the like. Some algorithms need a combination of multi-modal images, so a module supporting multi-sensor integration is selected, along with devices and interfaces (such as MIPI) supporting a high frame rate, to acquire more image information and aid image processing under dynamic environmental change. For applications with higher real-time requirements the frame rate should be not lower than 30 fps. A sensor with high image clarity and low noise is selected to improve the accuracy of subsequent image analysis; a device with a pixel size of not less than 1.4 μm and a signal-to-noise ratio higher than 40 dB is typical. An appropriate image file format is selected according to storage and transmission requirements: the lossless PNG format preserves the original data, while the JPEG format reduces file size to a certain extent and suits storage and transmission. A high quality factor is chosen during image encoding to prevent blocking artifacts from degrading image-processing quality.
And introducing a corresponding calibration algorithm in the image acquisition and coding links, providing software compensation for various errors (such as cone residual errors) of the sensor, and improving the subsequent image processing precision.
When the data of multiple sensors needs to be fused and processed, a high-precision time synchronization device or algorithm needs to be selected to acquire the time of arrival of the wheelchair at the observation node. If a standard NTP network time server is selected as a time reference, the absolute accuracy calibration of the system time is realized through the network. The standard NTP server can provide UTC time and high-precision time synchronization functions; a high-precision temperature control crystal oscillator chip (such as a TCXO) is selected, the working temperature range covers the environmental temperature change range of a hospital, the clock precision is up to +/-0.1 ppm, millisecond-level timing is realized, and the system time is synchronized to prevent drift. When the wheelchair enters the observation node, the accurate time information of entering the observation node is obtained by reading the counting register of the TCXO driving chip. The time information is in one-to-one correspondence with the image data and is used as the basis of image analysis. Meanwhile, a high-precision time stamp is generated, and the time point of the wheelchair entering the observation node is marked with microsecond precision. The time stamp information corresponds to the image data and provides a reference basis for image analysis and result verification. Finally, an image processing chip and an algorithm can be selected, and the time point of the wheelchair entering the observation node is calculated through analysis of the video stream. The time information corresponding to the time point is used as an auxiliary reference for image data fusion, so that the calculation accuracy can be improved.
After the node image is acquired, the node image of the current observation node is first processed by the trained wheelchair target detection model to obtain a prediction result for the wheelchair. The prediction result includes whether a wheelchair is detected and, if so, its bounding-box information; the pixel coordinates of the bounding-box center point are extracted from this information, and the center point represents the spatial position of the detected wheelchair in the current image.
The wheelchair target detection model is a deep learning image recognition algorithm, so that the precise analysis and understanding of the environment image can be realized, the wheelchair target is extracted, the error caused by manual judgment is avoided, and the judgment accuracy is remarkably improved.
The wheelchair target detection model of this embodiment is a YOLOv3 network, which must be trained before use. A large amount of wheelchair image data is first collected, covering various environments, with a proportion of the images containing a clear wheelchair target; these images form a wheelchair image dataset used as training samples. YOLOv3 is selected as the underlying network structure, comprising convolution layers, pooling layers, fully connected layers and other modules. A classification layer with one class is added at the network output to detect whether a wheelchair target is present in the picture, and four bounding-box regression layers are added to predict the coordinates of the wheelchair target's bounding box. The YOLOv3 network is trained on the collected wheelchair image dataset, with the network weights adjusted continuously during training so that the network accurately detects the wheelchair target in the image and gives an accurate bounding box. After training, the trained wheelchair target detection model is evaluated on a held-out test image dataset: its Average Precision (AP) and false-detection rate (FA) on the test set are calculated to ensure the model achieves the expected detection effect.
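Evaluating predicted bounding boxes against ground truth typically relies on intersection-over-union (IoU). The sketch below shows that standard check; the 0.5 threshold and the (x1, y1, x2, y2) box format are common conventions assumed here rather than stated in the embodiment. Predictions passing this test would feed the AP and false-detection-rate statistics mentioned above.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, threshold=0.5):
    # A detection counts as correct when it overlaps ground truth enough.
    return iou(pred_box, gt_box) >= threshold
```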
After the wheelchair's target frame is obtained through the wheelchair target detection model, the wheelchair's feature point set is extracted from within that frame. Because SIFT (Scale-Invariant Feature Transform) feature points carry direction, scale and position information, they adapt well to rotation, scaling and brightness changes, making them very suitable for image matching and target recognition. The feature points in this embodiment are therefore SIFT feature points, and the orientation deviation and path deviation of the wheelchair can be determined from the number and distribution of these points.
SIFT feature points can be extracted with feature point extraction algorithms such as the SIFT algorithm or the AKAZE algorithm. The SIFT algorithm extracts unique, easily identified key-point features from an image and provides direction, scale and position information for each feature point, but its computational cost is large and cannot meet real-time requirements. The AKAZE (Accelerated-KAZE) algorithm achieves a significant speedup by introducing a fast nonlinear scale space and new matching techniques, yielding a real-time, high-performance feature point extraction method. AKAZE retains the multi-scale and rotation invariance of KAZE while offering higher recognition precision and matching speed, and it also suits real-time image processing and matching.
In order to improve the efficiency of feature point extraction and improve the quality of the extracted feature points, in this embodiment, as shown in fig. 2, the steps of extracting the feature point set specifically include:
step S210: extracting feature points in the boundary frame by adopting a scale-invariant feature transformation algorithm to obtain a central feature point set;
step S220: and searching nearby feature points by using the AKAZE algorithm with the feature points in the center feature point set as the center to obtain a feature point set.
First, key feature points are extracted from the wheelchair's target frame image with the SIFT algorithm, obtaining their information (position, direction, size and so on). Then, with each SIFT feature point as a center, the AKAZE algorithm searches for more feature points nearby and describes them, forming the feature point set of the wheelchair target frame.
The key feature points are extracted through the SIFT algorithm, a center feature point set is obtained, the near feature points of the center feature point set are expanded through the AKAZE algorithm, the advantages of the SIFT algorithm and the AKAZE algorithm are combined, and the number and distribution of the feature points of the obtained feature point set can reflect the orientation and offset information of the wheelchair target more accurately. For example: if the number of the feature points is large and the feature points are intensively distributed in the center of the image, the wheelchair is indicated to be stable in orientation; if the number of feature points is small or intensively distributed on one side of the image, the wheelchair direction or position is indicated to be deviated. Therefore, the degree of the deviation of the direction of the wheelchair at the current observation node and the deviation of the path can be effectively judged by analyzing the characteristic point set, and a basis is provided for the follow-up deviation correction.
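The distribution heuristic described above (many points clustered near the image center suggesting a stable orientation; few points, or points massed to one side, suggesting deviation) can be sketched as follows. The minimum point count and the centroid-offset threshold are illustrative assumptions, not values from the text.

```python
def distribution_check(points, image_width, image_height,
                       min_points=20, max_center_offset=0.25):
    """Return 'stable' when enough feature points cluster near the image
    center, otherwise 'deviated'. Points are (x, y) pixel coordinates."""
    if len(points) < min_points:
        # Too few feature points: orientation or position may have shifted.
        return "deviated"
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    # Normalised offset of the point centroid from the image center.
    off_x = abs(cx - image_width / 2) / image_width
    off_y = abs(cy - image_height / 2) / image_height
    return "stable" if max(off_x, off_y) <= max_center_offset else "deviated"
```

A point set concentrated on one side of the image pushes the centroid away from the center and trips the offset threshold, matching the one-sided-distribution case in the text.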
Step S300: obtaining deviation data of the wheelchair according to the characteristic point set, and judging a first type of the observation node according to the deviation data, wherein the deviation data are used for measuring the deviation degree of the wheelchair; the first type includes: normal node, deviation-rectifying node, node needing reversing and node needing rotating;
The deviation data of the wheelchair measure its degree of deviation, which covers both path deviation and orientation deviation. The deviation data may include the feature point matching degree and the feature point distribution change degree between the node images of adjacent observation nodes, as well as a spatial projection error obtained from statistics of the feature point projection errors. A low feature point matching degree indicates that the wheelchair has rotated; a high matching degree combined with a large distribution change indicates that the wheelchair has deviated from the path; a high matching degree, a small distribution change, and a small spatial projection error indicate that the wheelchair has not deviated from the path. The deviation data are not limited to the above items; other types of data may be used, either singly or in any combination.
Specifically, the AKAZE algorithm extracts the feature points inside the wheelchair target frame from each pair of adjacent node images (the node image of the current observation node and that of the previous observation node), producing two feature point sets. The FLANN feature point matching algorithm is then used to match the two sets (FLANN is a fast approximate nearest neighbor search algorithm that can quickly find nearest neighbors in larger data sets, making it suitable for real-time feature point matching), and the ratio of the number of matched feature point pairs to the total number of feature points gives the feature point matching degree. For example: feature point set 1 contains 100 feature points and feature point set 2 contains 110. Matching the two sets with FLANN succeeds for 85 pairs, so the feature point matching degree is: matched pairs / total feature points = 85/210 ≈ 0.405. A higher matching degree indicates a stronger correspondence between the two sets and a more stable wheelchair state; a low matching degree indicates that some feature points cannot find a correct match and the wheelchair state may have changed.
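The matching degree reduces to a single ratio; the following sketch (function name assumed, OpenCV's FLANN matcher replaced by a precomputed pair count) reproduces the 85/210 arithmetic from the worked example:

```python
def matching_degree(matched_pairs, size_a, size_b):
    """Ratio of matched feature point pairs to the total number of
    feature points in both sets, as defined in the text."""
    total = size_a + size_b
    return matched_pairs / total if total else 0.0

# Worked example from the text: 100 + 110 points, 85 matched pairs.
degree = matching_degree(85, 100, 110)   # ≈ 0.405
```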
For the successfully matched feature points, their spatial distributions within the two feature point sets are analyzed, and the difference between the two distributions gives the feature point distribution change degree. A large change in distribution indicates that the wheelchair's position or posture has deviated from the preset path; if the distributions are essentially consistent, the distribution change degree is small and the wheelchair is confirmed to be in a normal state.
The first type of the observation node can be judged by jointly analyzing the feature point matching degree and the feature point distribution change degree. The first type mainly comprises: normal node, correctable node, node needing reversing, and node needing rotating. For example: when the matching degree is high and the distribution change degree is small, the wheelchair state is stable and no path deviation has occurred, so the first type is a normal node; when the matching degree is high but the distribution change degree is moderate, a slight path deviation is indicated and the first type is a correctable node; when the matching degree is low and the distribution change degree is large, the wheelchair has deviated from the path and the first type is a node needing rotating; when the matching degree is low and the distribution change is inverted (correlation near -1), the wheelchair is moving in reverse and the first type is a node needing reversing. For the specific judgment, a matching degree threshold and a distribution change degree threshold can be preset for each type, and the first type determined by comparing the measured values against these thresholds.
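The threshold-based judgment described above might be sketched as follows. All threshold values, and the use of a negative distribution-change value to signal reverse motion, are illustrative assumptions, since the text leaves the concrete thresholds to be preset:

```python
def classify_node(match_deg, dist_change,
                  match_high=0.6, change_small=0.2, change_large=0.5):
    """Map the feature point matching degree and distribution change
    degree to the first type of the observation node.
    All thresholds are illustrative placeholders."""
    if dist_change < 0:              # inverted distribution: reverse motion
        return "node needing reversing"
    if match_deg >= match_high:
        if dist_change <= change_small:
            return "normal node"
        return "correctable node"    # high match, moderate change
    if dist_change >= change_large:
        return "node needing rotating"  # low match, large change
    return "correctable node"

node_type = classify_node(0.7, 0.1)
```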
To better suit the application scenario of this embodiment and improve the accuracy of determining wheelchair deviation, this embodiment jointly considers the feature point matching degree, the feature point distribution change degree, and the spatial projection error when determining the first type of the observation node. The specific steps, shown in fig. 3, are as follows:
Step S310: acquiring the feature point set of the previous observation node, and obtaining the feature point matching degree and feature point distribution change degree from the feature point set of the current observation node and that of the previous observation node;
The feature point set of the previous observation node can be read from the memory of the control chip; comparing it with the feature point set of the current observation node yields the feature point matching degree and feature point distribution change degree. The specifics are described above and are not repeated here.
Step S320: based on the three-dimensional coordinates of the feature points, projecting the feature points in the feature point set onto a three-dimensional model of the pre-constructed space environment, calculating the projection error of each feature point, and aggregating all projection errors to obtain the spatial projection error;
When the SLAM map is constructed, a three-dimensional model of the space environment is built synchronously. The projection point of each feature point in the three-dimensional model is obtained from its three-dimensional coordinates. Since the three-dimensional model is itself composed of a point set, a projection point does not necessarily belong to that set; a matching point in the set is therefore found, and the distance between the projection point and the matching point is taken as the projection error of the feature point. Computing the projection errors of all feature points yields the spatial projection error.
In this embodiment, the nearest points on the three-dimensional model of the space environment are used as the matching points of the projection points to obtain the spatial projection error. The specific steps are shown in fig. 4 and include:
Step S321: based on the three-dimensional coordinates of the feature points, projecting the feature points in the feature point set onto the three-dimensional model to obtain a projection point set;
The saved three-dimensional model of the space environment, which contains the three-dimensional coordinates of the preset wheelchair path points, is loaded. The three-dimensional coordinates of the feature points are computed with the EPnP algorithm, and each feature point is projected onto the three-dimensional model of the space environment to obtain the projection point set. EPnP is a high-precision PnP (Perspective-n-Point) algorithm; it uses epipolar geometry and rotation characterization to reconstruct the three-dimensional feature points of a two-dimensional image and can reach sub-pixel accuracy.
Step S322: searching for the nearest point on the three-dimensional model to each projection point in the projection point set;
Step S323: calculating the distance between each projection point and its corresponding nearest point to obtain the projection error of that feature point;
Step S324: aggregating the projection errors of all feature points to obtain the spatial projection error.
The three-dimensional coordinates of the feature points obtained by calculation are projected onto the three-dimensional model; after the projection points are obtained, the closest point to each projection point is found on the model. The Euclidean distance between a feature point's projection and its nearest point is the projection error of that feature point, and aggregating the projection errors of all feature points gives the spatial projection error. The spatial projection error reflects how far the overall position or orientation of the wheelchair has shifted. For example, the average and maximum over all feature points can be computed: a small average with a large maximum indicates that a few individual feature points have large projection errors and the wheelchair posture may have changed considerably, while a large average together with a large maximum indicates that the wheelchair's overall position or orientation has deviated significantly from the intended path. The wheelchair state can therefore be inferred from the projection error statistics. In this embodiment, when the average projection error is less than 0.2 m and the maximum is less than 0.3 m, the wheelchair state is judged normal and automatic navigation continues.
When the average projection error is less than 0.2 m but the maximum exceeds 0.3 m, the wheelchair posture may have changed: the wheelchair is stopped and its state re-judged after the environment stabilizes. When the average projection error exceeds 0.2 m, the wheelchair position is judged to have deviated significantly from the planned path: the wheelchair is stopped and the path re-planned.
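Steps S321-S324 together with the decision rules above can be sketched as follows. The nearest-point search is shown as a brute-force scan over a toy model point set (a real system would use a spatial index such as a k-d tree), and only the 0.2 m / 0.3 m thresholds come from the text; everything else is illustrative:

```python
import math

def nearest_point(p, model_points):
    """Closest point of the environment's 3-D model to projection p."""
    return min(model_points, key=lambda m: math.dist(p, m))

def spatial_projection_error(projections, model_points):
    """Per-point Euclidean error to the nearest model point, plus the
    mean/max statistics used to judge the wheelchair state."""
    errors = [math.dist(p, nearest_point(p, model_points))
              for p in projections]
    return sum(errors) / len(errors), max(errors)

def wheelchair_state(avg, mx):
    # Thresholds from the text: 0.2 m average, 0.3 m maximum.
    if avg < 0.2 and mx < 0.3:
        return "normal"
    if avg < 0.2:
        return "pause and re-check"  # posture may have changed
    return "stop and re-plan"        # large deviation from planned path

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
proj = [(0.05, 0.0, 0.0), (1.0, 0.1, 0.0)]
avg_err, max_err = spatial_projection_error(proj, model)
state = wheelchair_state(avg_err, max_err)
```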
Step S330: obtaining a first type of observation node according to the feature point matching degree, the feature point distribution change degree and the space projection error;
and comprehensively analyzing the matching degree of the characteristic points, the distribution change degree of the characteristic points and the space projection error, and judging the first type of the observation node according to a set judging rule. The content of the determination rule is not limited, and can be set accordingly according to the scene, the accuracy of the sensor, the determination accuracy requirement, and the like.
As described above, a single type of deviation data or a combination of several can be used to determine the first type of the observation node. In this embodiment, the feature point matching degree, feature point distribution change degree, and spatial projection error are used together; alternatively, the first type may be determined from the matching degree and distribution change degree alone, or from the spatial projection error alone.
Step S400: when the first type is a normal node, continuing to control the wheelchair to move along the navigation path; when the first type is a correctable node, correcting the wheelchair and then continuing to control it along the navigation path; and when the first type is a node needing reversing or a node needing rotating, acquiring the control measure corresponding to the first type and controlling the wheelchair's movement with that measure.
A normal node indicates that no deviation of the wheelchair was detected at the current observation node, and the wheelchair can be automatically controlled to move on to the next observation node. A correctable node indicates that a slight deviation was detected; the wheelchair's track and orientation can be corrected on the current navigation path before movement continues. A node needing reversing indicates that the wheelchair has seriously deviated from the preset path and must reverse a certain distance before the path is re-planned. A node needing rotating indicates that the wheelchair's orientation has deviated greatly and must be adjusted.
In summary, in this embodiment, observation nodes are set on the navigation path, feature points are extracted from the node image captured at each observation node, and the degree of deviation at the current observation node is judged comprehensively from the feature point matching degree between successive observation nodes, the change in spatial distribution, and the projection error statistics, with different control measures taken for different degrees of deviation. The wheelchair can thus adapt to environmental changes and correct deviations during autonomous navigation, giving good adaptability and improved safety.
When the wheelchair's deviation is slight (a correctable node), only a small correction is needed, with little impact on autonomous navigation. When the deviation is serious, the control measures have a large impact: a node needing reversing requires the wheelchair to reverse a certain distance, and a node needing rotating requires the wheelchair's orientation to be adjusted. To guard against accidents and ensure the safety of autonomous navigation, in one embodiment, when the first type is a node needing reversing or a node needing rotating, a confirmation step involving the wheelchair user is added before the deviation is corrected according to the first type. The specific steps are shown in fig. 5 and include:
step S410: acquiring control measures corresponding to the first type;
Refer to the description in step S400; details are not repeated here.
Step S420: displaying the node image to the wheelchair user, and acquiring the wheelchair user's electroencephalogram signal;
The node image is displayed to the wheelchair user on the wheelchair's display device, and an electroencephalogram detection device acquires the electroencephalogram response signal generated after the user views the image; the user's control intention after viewing the node image can then be derived from this response signal.
Step S430: inputting the electroencephalogram signals into a trained network model to obtain control intention, and judging a second type of the observation node according to the control intention;
The trained network model analyzes and classifies the electroencephalogram response signal, judging the wheelchair user's control intention after viewing the node image (for example, the intention to reverse or rotate the wheelchair) and outputting the corresponding classification result.
The network model in this embodiment is a convolutional neural network, trained as follows. First, training data are collected: the electroencephalogram signals and corresponding control intention labels of N subjects viewing M node images at T time points are recorded, giving the training set {(x_nt, y_nt)}, where x_nt is the electroencephalogram signal of subject n at time t and y_nt is its control intention label. The network comprises several convolutional layers, activation layers, and pooling layers. The convolutional layer can be expressed as h_i = W_i * h_{i-1} + b_i, where i is the layer index; the activation layer as a_i = ReLU(h_i); and the pooling layer as p_i = maxpool(a_i). Finally, the control intention judgment is obtained through a linear projection layer and a fully connected layer. The network is trained with the Adam gradient descent algorithm under a cross-entropy loss. After training, an acquired electroencephalogram signal x_nt is input to obtain the control intention judgment. Training the convolutional neural network on a large amount of data enables accurate interpretation of the electroencephalogram signals.
Step S440: when the first type and the second type are the same, adopting the control measure to control the wheelchair's movement; otherwise, adopting the control measure only after a confirmation instruction for it is obtained, indicating that the wheelchair user agrees to execute it, and controlling the wheelchair to pause when no confirmation instruction is obtained.
The first type (the image classification result) is compared with the second type (the electroencephalogram intention judgment result). If they are consistent, the current node is confirmed to be a node needing reversing or a node needing rotating, the judgment is taken as correct, and the control measure is adopted to control the wheelchair's movement, such as starting the corresponding reversing or direction selection control step. If the first and second types are inconsistent, or the electroencephalogram classification model gives no clear result, the current environment and node situation are described to the wheelchair user in detail through a voice prompt and the displayed node image; the user is then asked to confirm whether to execute the proposed control measure. If the user agrees, the control measure is adopted to control the wheelchair's movement; otherwise the wheelchair is controlled to stop.
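The cross-check of step S440 can be sketched as a small decision function. The return strings and the `confirmation` flag standing in for the user's voice-prompt response are assumptions, not part of the claimed method:

```python
def decide_action(first_type, second_type, confirmation=None):
    """Cross-check the image-based first type against the EEG-based
    second type before executing a reversing/rotating control measure."""
    if first_type == second_type:
        return "execute control measure"
    if confirmation is True:         # user agreed after the voice prompt
        return "execute control measure"
    return "pause movement"          # no or unclear confirmation

action = decide_action("node needing reversing", "node needing reversing")
```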
Because some users have limited use of their hands and feet, they cannot confirm whether to execute the proposed control measure through physical operations (pressing keys, tapping the screen, etc.), so effective human-machine interaction fails and the usage scenarios are greatly limited. Thus, in one example, the wheelchair user may express the confirmation intention through a specific eye movement signal, avoiding physical operation. If the wheelchair is equipped with a sensing device that can detect the corresponding actions, the user can express confirmation after the voice prompt through simple actions (such as blinking or smiling). If the user does not respond to the voice prompt due to fatigue or distraction, the node remains in the "to be confirmed" state by default and movement is paused; once the user responds, confirmation and control continue, which avoids misoperation. For cases where the user cannot make an effective response at all, an emergency stop switch can be provided and, after a certain time, a caregiver confirms manually; the wheelchair remains paused while waiting for the caregiver's confirmation. Human-machine interaction thus becomes more efficient and applicable to more scenarios.
As described above, verifying and confirming the control measures through the electroencephalogram modality avoids the misjudgment problem of relying on a single modality and strengthens the robustness of the system. Even if the image recognition produces a misjudgment, the electroencephalogram judgment can correct it, improving the system's fault tolerance. Multi-modal judgment of the same event yields more accurate and reliable results.
During confirmation, the electroencephalogram signal generated after the wheelchair user sees the node image must be analyzed, so that the accuracy of the image classification result can be confirmed a second time. Analyzing the electroencephalogram signal, however, takes time: from signal detection to the final classification result, some delay is generated. If the delay is too long, real-time control and response of the wheelchair may be affected. To keep the delay within the range acceptable for wheelchair control, the final judgment time can be held within 300 ms through algorithm and model optimization, satisfying the real-time response requirement while markedly improving the accuracy and safety of key node judgment.
Specifically, on the basis of the image classification result for the current node, the user's electroencephalogram signal is further acquired and analyzed, which introduces a certain computation delay arising from signal detection, preprocessing, feature extraction, model classification, and related steps. To keep this delay within an acceptable range, high-performance signal detection devices and GPU acceleration may be employed, more efficient electroencephalogram processing algorithms chosen, and the model kept at a medium scale. The sampling rate of the signal detection device is set above 200 Hz, and signals are detected from movement-related brain areas (or structurally equivalent areas); these signals are closely tied to wheelchair movement control and are therefore discriminative, and a higher sampling rate yields richer signal characteristics. During feature extraction, three to four frequency bands related to navigation control are selected for filtering, e.g., the alpha band (8-13 Hz), the beta band (13-30 Hz), and the gamma band (30-90 Hz), which reduces subsequent processing load and latency. Pruning simplifies the network structure and compresses the parameter matrices, achieving judgment accuracy close to the initial model with faster operation and lower computational resource requirements.
First, the convolutional neural network is trained to obtain its initial judgment accuracy and parameter statistics; let the total parameter count be m and the judgment accuracy acc_0. This is the base model. The base model is then pruned and compressed to obtain the trained network model. Specifically, the contribution of each parameter to the judgment result is computed by principal component analysis, and the p% of parameters with the smallest contribution (e.g., p = 30%) are pruned, giving the pruned model; judging with the pruned model yields accuracy acc_1. If the difference between acc_1 and acc_0 is within an acceptable range (e.g., within 5%), the pruned model becomes the candidate model; otherwise the pruning ratio p is adjusted (increased or decreased) and the pruned model updated. Next, each fully connected layer's parameter matrix in the candidate model is compressed by singular value decomposition to q% of its original size (e.g., q = 50%), giving the compressed model; judging with the compressed model yields accuracy acc_2. If the difference between acc_2 and acc_0 is within the acceptable range (within 5%), the compressed model is the final model; otherwise the compression ratio q is adjusted and the compressed model updated. After pruning and compression, the parameter count of the final model is greatly reduced (e.g., by 30-50%) while the accuracy loss remains small (within 5%), meeting the requirements. Combining pruning with matrix optimization balances light weight against accuracy. The delay of the secondary verification process can thus be held to 150-250 ms; added to the roughly 50 ms for generating the image classification result, the total delay is about 300 ms, satisfying the real-time control response requirement.
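The accept-or-adjust loop for the pruning ratio p might look like the following sketch. The `evaluate` callback, the toy accuracy curve, and the adjustment step are hypothetical stand-ins for retraining and re-judging a real model, and the singular value decomposition stage for q is omitted:

```python
def accept_pruned_model(acc_base, evaluate, p=0.30,
                        tolerance=0.05, step=0.05, max_iter=10):
    """Shrink the pruning ratio p until the pruned model's accuracy
    stays within `tolerance` of the base accuracy acc_base.
    `evaluate` is a hypothetical callback returning accuracy at ratio p."""
    for _ in range(max_iter):
        acc_pruned = evaluate(p)
        if acc_base - acc_pruned <= tolerance:
            return p, acc_pruned     # candidate model accepted
        p -= step                    # prune less aggressively
    return p, evaluate(p)

# Toy accuracy model: pruning beyond 15% costs 0.4% accuracy per point.
acc0 = 0.90
toy_eval = lambda ratio: acc0 - max(0.0, (ratio - 0.15) * 0.4)
ratio, acc1 = accept_pruned_model(acc0, toy_eval)
```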
This delay is generally sufficient for real-time control, but may need to be adjusted for specific users; for some users in poorer physical condition, a delay of 350 ms is also acceptable.
In summary, acquiring the time and image when the wheelchair arrives at each observation node and analyzing the image at that moment improves the accuracy of identifying path deviation nodes. Judging the type of each observation node and controlling the wheelchair with a method matched to that type solves the environmental adaptability problem, avoids path deviation and localization failure caused by environmental change, completes the judgment of environmental changes and events automatically, improves the safety and reliability of the system, and meets the intelligent wheelchair's requirements for high precision and high safety.
Exemplary System
Corresponding to the above-mentioned intelligent wheelchair autonomous navigation method, the embodiment of the invention also provides an intelligent wheelchair autonomous navigation device, as shown in fig. 6, the above-mentioned device includes:
the autonomous control module 600 is used for planning a navigation path to a destination based on the space environment of the wheelchair, and controlling the wheelchair to move along the navigation path;
an observation node module 610, configured to set a plurality of observation nodes on the navigation path;
the feature point set module 620 is configured to acquire a node image, and extract a feature point set of a wheelchair in the node image;
An observation node type module 630, configured to obtain deviation data of the wheelchair according to the feature point set and determine the first type of the observation node according to the deviation data, where the deviation data measure the degree of deviation of the wheelchair and the first type includes: normal node, correctable node, node needing reversing, and node needing rotating;
A deviation correction module 640, configured to continue controlling the wheelchair to move along the navigation path when the first type is a normal node; to correct the wheelchair and then continue controlling it along the navigation path when the first type is a correctable node; and, when the first type is a node needing reversing or a node needing rotating, to acquire the control measure corresponding to the first type and control the wheelchair's movement with that measure.
Specifically, in this embodiment, specific functions of each module of the intelligent wheelchair autonomous navigation apparatus may refer to corresponding descriptions in the intelligent wheelchair autonomous navigation method, which are not described herein.
Based on the above embodiments, the invention further provides an intelligent wheelchair. As shown in fig. 7, the intelligent wheelchair includes a processor, a memory, and a display screen connected through a system bus. The processor of the intelligent wheelchair provides computing and control capabilities. The memory of the intelligent wheelchair comprises a nonvolatile storage medium and an internal memory; the nonvolatile storage medium stores an operating system and an intelligent wheelchair autonomous navigation program, and the internal memory provides the environment for running them. When executed by the processor, the intelligent wheelchair autonomous navigation program implements the steps of any of the intelligent wheelchair autonomous navigation methods described above. The display screen of the intelligent wheelchair may be a liquid crystal display or an electronic ink display.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium is stored with an intelligent wheelchair autonomous navigation program, and the intelligent wheelchair autonomous navigation program realizes the steps of any one of the intelligent wheelchair autonomous navigation methods provided by the embodiment of the invention when being executed by a processor.
It should be understood that the sequence number of each step in the above embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not be construed as limiting the implementation process of the embodiment of the present invention.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical schemes described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the various embodiments of the invention and remain within its spirit and scope.

Claims (9)

1. An intelligent wheelchair autonomous navigation method, characterized by comprising the steps of:
planning a navigation path to a destination based on the spatial environment of the wheelchair, and arranging a plurality of observation nodes on the navigation path to control the wheelchair to move along the navigation path;
when the wheelchair reaches an observation node:
acquiring a node image, and extracting a feature point set of the wheelchair in the node image;
obtaining deviation data of the wheelchair according to the feature point set, and determining a first type of the observation node according to the deviation data, wherein the deviation data measure the degree of deviation of the wheelchair, and the first type comprises: a normal node, a correctable node, a node requiring reversing, and a node requiring rotation;
when the first type is a normal node, continuing to control the wheelchair to move along the navigation path; when the first type is a correctable node, correcting the course of the wheelchair and then continuing to control the wheelchair to move along the navigation path; when the first type is a node requiring reversing or a node requiring rotation, acquiring a control measure corresponding to the first type, and adopting the control measure to control the movement of the wheelchair;
wherein obtaining the deviation data of the wheelchair according to the feature point set and determining the first type of the observation node according to the deviation data comprises:
acquiring the feature point set of the previous observation node;
obtaining a feature point matching degree and a feature point distribution change degree from the feature point set and the feature point set of the previous observation node;
projecting the feature points in the feature point set onto a pre-constructed three-dimensional model of the spatial environment based on the three-dimensional coordinates of the feature points, calculating the projection error of each feature point, and aggregating all the projection errors to obtain a spatial projection error;
and determining the first type of the observation node from the feature point matching degree, the feature point distribution change degree, and the spatial projection error.
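As a rough illustration of the final step of claim 1, the three deviation metrics could be mapped to the four node types with simple thresholds. The patent does not specify threshold values or the exact decision rule, so everything below — the threshold defaults and the branch ordering — is an assumption:

```python
def classify_node(match_degree, dist_change, proj_error,
                  match_ok=0.8, change_ok=0.2, err_ok=0.05):
    """Map the three deviation metrics of claim 1 to a node type.

    All thresholds are illustrative placeholders, not values from the patent.
    """
    # all three metrics within tolerance: the wheelchair is on track
    if match_degree >= match_ok and dist_change <= change_ok and proj_error <= err_ok:
        return "normal"
    # moderate drift: still enough matched features to correct in place
    if match_degree >= 0.5 and dist_change <= 2 * change_ok and proj_error <= 2 * err_ok:
        return "correctable"
    # distribution changed more than position: orientation is off
    if dist_change > change_ok:
        return "needs_rotation"
    # otherwise the positional error dominates: back up first
    return "needs_reversing"
```

A real implementation would presumably tune these cut-offs per environment rather than hard-code them.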
2. The intelligent wheelchair autonomous navigation method of claim 1, wherein extracting the feature point set of the wheelchair in the node image comprises:
detecting the wheelchair in the node image using an image detection model to obtain a bounding box of the wheelchair;
and extracting the feature point set within the bounding box using a feature point extraction algorithm.
3. The intelligent wheelchair autonomous navigation method of claim 2, wherein extracting the feature point set within the bounding box using a feature point extraction algorithm comprises:
extracting feature points within the bounding box using a scale-invariant feature transform (SIFT) algorithm to obtain a central feature point set;
and searching for nearby feature points with the AKAZE algorithm, taking each feature point in the central feature point set as a center, to obtain the feature point set.
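A minimal sketch of the two-stage extraction in claim 3, using OpenCV's `SIFT_create` and `AKAZE_create`. The bounding-box format `(x, y, w, h)` and the 40-pixel search radius are assumptions; the patent does not specify either:

```python
import numpy as np

def filter_near_centers(points, centers, radius):
    """Keep (x, y) points lying within `radius` of at least one center."""
    return [p for p in points
            if any(np.hypot(p[0] - c[0], p[1] - c[1]) <= radius for c in centers)]

def extract_wheelchair_features(image, bbox, radius=40):
    """Claim-3 sketch: SIFT inside the wheelchair bounding box gives the
    central feature point set; AKAZE keypoints are then kept only when they
    fall near a SIFT center. Requires OpenCV (cv2)."""
    import cv2
    x, y, w, h = bbox                          # assumed (x, y, width, height)
    roi = image[y:y + h, x:x + w]
    centers = cv2.SIFT_create().detect(roi, None)      # central feature point set
    candidates = cv2.AKAZE_create().detect(roi, None)  # nearby-search candidates
    kept = filter_near_centers([p.pt for p in candidates],
                               [c.pt for c in centers], radius)
    return [c.pt for c in centers], kept
```

The neighborhood filter is the only part the claim actually pins down; how the two keypoint sets are merged or deduplicated is left open.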
4. The intelligent wheelchair autonomous navigation method of claim 1, wherein projecting the feature points in the feature point set onto the pre-constructed three-dimensional model of the spatial environment based on the three-dimensional coordinates of the feature points, calculating the projection error of each feature point, and aggregating all the projection errors to obtain the spatial projection error comprises:
projecting the feature points in the feature point set onto the three-dimensional model based on the three-dimensional coordinates of the feature points to obtain a projection point set;
searching for the nearest point on the three-dimensional model to each projection point in the projection point set;
calculating the distance between each projection point in the projection point set and its corresponding nearest point to obtain the projection error of the feature point;
and aggregating the projection errors of all the feature points to obtain the spatial projection error.
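The per-point error in claim 4 reduces to a nearest-neighbor distance between each projected point and the model. A brute-force version over model vertices might look like this; using the mean as the aggregation statistic is an assumption, since the claim only says the errors are aggregated:

```python
import numpy as np

def spatial_projection_error(projected_pts, model_pts):
    """Claim-4 sketch: nearest model vertex per projected feature point,
    averaged into a single spatial projection error.

    Brute force is O(N*M); a real system would likely use a k-d tree over
    the model vertices instead.
    """
    proj = np.asarray(projected_pts, dtype=float)    # shape (N, 3)
    model = np.asarray(model_pts, dtype=float)       # shape (M, 3)
    # pairwise Euclidean distances, then the nearest model point per projection
    d = np.linalg.norm(proj[:, None, :] - model[None, :, :], axis=2)
    per_point_error = d.min(axis=1)                  # projection error of each feature
    return per_point_error.mean()                    # assumed aggregation: mean
```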
5. The intelligent wheelchair autonomous navigation method of claim 1, wherein acquiring the control measure corresponding to the first type and adopting the control measure to control the movement of the wheelchair comprises:
acquiring the control measure corresponding to the first type;
displaying the node image to the wheelchair user, and acquiring an electroencephalogram signal of the wheelchair user;
inputting the electroencephalogram signal into a trained network model to obtain a control intention, and determining a second type of the observation node according to the control intention;
when the first type and the second type are the same, adopting the control measure to control the movement of the wheelchair;
otherwise, adopting the control measure to control the movement of the wheelchair when a confirmation instruction indicating that the wheelchair user agrees to execute the control measure is acquired, and controlling the wheelchair to pause when no confirmation instruction is acquired.
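The arbitration logic of claim 5 can be sketched as follows; `ask_user` stands in for whatever confirmation channel the wheelchair provides (button, EEG prompt, voice) and is purely illustrative:

```python
def decide_action(first_type, second_type, ask_user):
    """Claim-5 sketch: execute the control measure when the vision-based
    type and the EEG-decoded type agree; otherwise require explicit user
    confirmation, pausing the wheelchair if none arrives."""
    if first_type == second_type:
        return "execute"                 # both channels agree: act directly
    return "execute" if ask_user() else "pause"
```

The safety property this encodes is that a disagreement between the two classifiers never moves the wheelchair without the user in the loop.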
6. The intelligent wheelchair autonomous navigation method of claim 5, wherein training the network model comprises:
training the network model to obtain a base model;
and pruning and compressing the base model to obtain the trained network model.
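Claim 6 leaves the pruning method open; one common choice is magnitude pruning, sketched here on a raw weight array. The 50% sparsity level is an assumption, as is the choice of unstructured (element-wise) pruning:

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Claim-6 sketch: zero out the smallest-magnitude fraction of weights.

    The patent only states that the base model is pruned and compressed;
    the criterion and sparsity ratio here are illustrative.
    """
    w = np.asarray(weights, dtype=float).copy()
    k = int(w.size * sparsity)                    # number of weights to drop
    if k > 0:
        thresh = np.sort(np.abs(w).ravel())[k - 1]
        w[np.abs(w) <= thresh] = 0.0              # prune at or below threshold
    return w
```

After pruning, the zeroed weights can be stored in a sparse format, which is presumably where the claimed compression comes from.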
7. An intelligent wheelchair autonomous navigation device, characterized by comprising:
an autonomous control module, configured to plan a navigation path to a destination based on the spatial environment of the wheelchair and to control the wheelchair to move along the navigation path;
an observation node module, configured to set a plurality of observation nodes on the navigation path;
a feature point set module, configured to acquire a node image and extract a feature point set of the wheelchair in the node image;
an observation node type module, configured to obtain deviation data of the wheelchair according to the feature point set and to determine a first type of the observation node according to the deviation data, wherein the deviation data measure the degree of deviation of the wheelchair, and the first type comprises: a normal node, a correctable node, a node requiring reversing, and a node requiring rotation;
a deviation correction module, configured to continue controlling the wheelchair to move along the navigation path when the first type is a normal node; to correct the course of the wheelchair and then continue controlling the wheelchair to move along the navigation path when the first type is a correctable node; and, when the first type is a node requiring reversing or a node requiring rotation, to acquire a control measure corresponding to the first type and adopt the control measure to control the movement of the wheelchair;
wherein obtaining the deviation data of the wheelchair according to the feature point set and determining the first type of the observation node according to the deviation data comprises:
acquiring the feature point set of the previous observation node;
obtaining a feature point matching degree and a feature point distribution change degree from the feature point set and the feature point set of the previous observation node;
projecting the feature points in the feature point set onto a pre-constructed three-dimensional model of the spatial environment based on the three-dimensional coordinates of the feature points, calculating the projection error of each feature point, and aggregating all the projection errors to obtain a spatial projection error;
and determining the first type of the observation node from the feature point matching degree, the feature point distribution change degree, and the spatial projection error.
8. An intelligent wheelchair, characterized by comprising a memory, a processor, and an intelligent wheelchair autonomous navigation program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the intelligent wheelchair autonomous navigation method according to any one of claims 1 to 6.
9. A computer-readable storage medium, characterized in that the computer-readable storage medium stores an intelligent wheelchair autonomous navigation program which, when executed by a processor, implements the steps of the intelligent wheelchair autonomous navigation method according to any one of claims 1 to 6.
CN202310949830.8A 2023-07-31 2023-07-31 Autonomous navigation method, device, terminal and medium for intelligent wheelchair Active CN116659518B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310949830.8A CN116659518B (en) 2023-07-31 2023-07-31 Autonomous navigation method, device, terminal and medium for intelligent wheelchair


Publications (2)

Publication Number Publication Date
CN116659518A CN116659518A (en) 2023-08-29
CN116659518B (en) 2023-09-29

Family

ID=87720996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310949830.8A Active CN116659518B (en) 2023-07-31 2023-07-31 Autonomous navigation method, device, terminal and medium for intelligent wheelchair

Country Status (1)

Country Link
CN (1) CN116659518B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117075618B (en) * 2023-10-12 2024-01-05 小舟科技有限公司 Wheelchair automatic control method, device, terminal and medium based on anomaly monitoring

Citations (4)

Publication number Priority date Publication date Assignee Title
CN113888584A (en) * 2021-08-04 2022-01-04 北京化工大学 Robot wheelchair tracking system based on omnibearing vision and control method
DE102020131845A1 (en) * 2020-12-01 2022-06-02 Munevo Gmbh Device and method for navigating and/or guiding a vehicle and vehicle
CN115120250A (en) * 2022-06-27 2022-09-30 重庆科技学院 Intelligent brain-controlled wheelchair system based on electroencephalogram signals and SLAM control
WO2022247325A1 (en) * 2021-05-25 2022-12-01 深圳市优必选科技股份有限公司 Navigation method for walking-aid robot, and walking-aid robot and computer-readable storage medium

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
EP1972486A1 (en) * 2007-03-19 2008-09-24 Invacare International Sàrl Motorized wheelchair
US8315770B2 (en) * 2007-11-19 2012-11-20 Invacare Corporation Motorized wheelchair




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant