CN110673719A - VR-based spatial cognitive ability training method and system - Google Patents

VR-based spatial cognitive ability training method and system

Info

Publication number
CN110673719A
Authority
CN
China
Prior art keywords
training
user
trained
virtual reality
reality scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910775365.4A
Other languages
Chinese (zh)
Inventor
覃文军
林国丛
刘春燕
王玉平
陈超
徐哲学
韩涛
王同亮
杨金柱
栗伟
曹鹏
冯朝路
赵大哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuanwu Hospital
Northeastern University China
Original Assignee
Xuanwu Hospital
Northeastern University China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuanwu Hospital and Northeastern University China
Priority to CN201910775365.4A
Publication of CN110673719A
Legal status: Pending


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G09: EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B: EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B9/00: Simulators for teaching or training purposes
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01: Indexing scheme relating to G06F3/01
    • G06F2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Abstract

The invention discloses a VR-device-based spatial cognitive ability training method and system. The training method comprises the following steps: A1, receiving a scene trigger instruction and displaying a virtual reality scene; A2, receiving a training level instruction and determining the corresponding path information; A3, receiving a training start instruction, displaying the current user character in the virtual reality scene, and walking automatically along the path information; A4, when a walking direction must be selected at a fork in the path information, sending an operation prompt tone to the user; A5, receiving the direction information triggered by the user and judging whether it is correct; if it is incorrect, sending a selection error prompt tone; A6, receiving the direction information triggered again by the user to be trained and judging it, repeating until the triggered direction information is correct, whereupon the current user character displayed in the virtual reality scene walks automatically along the path information until it reaches the destination. The invention enables users to train spatial cognitive ability in a virtual reality scene.

Description

VR-based spatial cognitive ability training method and system
Technical Field
The invention relates to a VR-based spatial cognitive ability training method and system.
Background
Mild cognitive impairment is a chronic degenerative disease of the nervous system: a state of cognitive decline between normal aging and dementia, manifested as reduced logical reasoning and spatial cognition, that does not yet meet the diagnostic criteria for Alzheimer's disease. However, patients diagnosed with mild cognitive impairment are at extremely high risk of progressing to Alzheimer's disease, with a conversion rate of 6% to 25% per year. As one Alzheimer's disease drug trial after another has been declared a failure, no breakthrough drug has emerged in this field for a decade; existing drugs can only delay the progression of the disease. Under these circumstances, it is necessary to improve spatial cognitive ability and thereby help prevent the onset of senile dementia.
At present, spatial cognitive ability is improved mainly through real-world training conducted by staff; however, many users have limited mobility, so such training is difficult to carry out and its effect is poor.
Disclosure of Invention
Technical problem to be solved
To solve the problem that spatial cognitive ability can currently be trained only in real-world scenes, the invention provides a VR (virtual reality) based spatial cognitive ability training method and system.
(II) technical scheme
In order to achieve the above object, the present invention provides a VR-based spatial cognitive ability training method, including:
a1, aiming at a user to be trained, after receiving a scene triggering instruction, displaying a virtual reality scene to the user to be trained;
a2, after receiving a training level instruction of the displayed virtual reality scene, determining path information matched with a training level in the training level instruction in the displayed virtual reality scene;
a3, receiving a training starting instruction triggered by the user to be trained, displaying the current user role in a virtual reality scene, and automatically walking according to the path information;
a4, when a walking direction needs to be selected at a fork in the path information, sending an operation prompt tone to the user to be trained;
a5, receiving direction information triggered by a user to be trained, judging whether the direction information is correct or not, and if not, sending a selection error prompt tone to the user to be trained;
and A6, receiving the direction information triggered again by the user to be trained and judging whether it is correct, repeating until the triggered direction information is correct, whereupon the current user character displayed in the virtual reality scene walks automatically along the path information until it reaches the destination in the path information.
Preferably, the step a4 further includes:
and after the operation prompt tone is sent to the user to be trained, displaying the direction arrows to be selected in the path of the displayed virtual reality scene.
Preferably, the step a5 further includes:
after the selection error prompt tone is sent to the user to be trained, the wrongly selected arrow among the direction arrows to be selected is no longer displayed to the user.
Preferably, walking automatically along the path information in step A3 includes:
controlling the user character displayed in the virtual reality scene to walk automatically along the path information using the Navigation component.
Preferably, the user character displayed in the virtual reality scene has rigid-body attributes and is transparent.
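Steps A1 to A6 amount to a guarded walk along a predefined route. The following is a minimal sketch of that control loop (Python here purely for illustration; the waypoint names and the `choose` callback standing in for the handle input are hypothetical, since the actual system runs inside a VR engine):

```python
def run_training(route, branches, choose):
    """Walk a predefined route, pausing at each fork for the user's choice.

    route:    waypoints in correct order, e.g. ['D0', 'D1', 'D2', 'D3']
    branches: waypoint -> list of selectable directions at that fork
    choose:   callback standing in for the VR handle input (hypothetical)
    """
    for pos, correct in zip(route, route[1:]):
        options = branches.get(pos, [correct])
        if len(options) > 1:
            print(f"prompt tone at {pos}: choose from {options}")  # step A4
        while True:
            picked = choose(pos, options)                          # step A5
            if picked == correct:
                break                                              # step A6: walk on
            print(f"error tone: {picked} is wrong")
            options = [o for o in options if o != picked]          # hide wrong arrow
    print(f"reached destination {route[-1]}")

# Example: a user who always takes the first remaining arrow.
run_training(
    route=['D0', 'D1', 'D2', 'D3'],
    branches={'D0': ['D1', 'D4', 'D5'], 'D1': ['D2', 'D6'], 'D2': ['D3']},
    choose=lambda pos, opts: opts[0],
)
```

Wrong picks never advance the character; they only trigger the error tone and remove the chosen arrow, matching the "repeat until correct" behavior of step A6.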
A VR-device-based spatial cognitive ability training system comprises a VR device and an operating handle. The operating handle sends operating signals to the VR device. A plurality of training scenes for training a user are pre-constructed in the VR device, each having at least three training levels. The VR device executes any one of the above training methods according to instructions from the operating handle.
Preferably, a plurality of training scenes for training a user are pre-constructed in the VR device, and the training scenes include: urban training scenes, supermarket training scenes, indoor training scenes, old town training scenes and maze training scenes;
wherein each of the training scenarios has: a simple training level, a medium training level, and a difficult training level.
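The scene-and-level layout above is essentially a small configuration table. A hypothetical sketch (the scene and level identifiers are illustrative, not taken from any actual source code):

```python
# Hypothetical configuration: five pre-constructed scenes, each with the
# three levels named in the text.
TRAINING_SCENES = {
    "city":        ["simple", "medium", "difficult"],
    "supermarket": ["simple", "medium", "difficult"],
    "indoor":      ["simple", "medium", "difficult"],
    "old_town":    ["simple", "medium", "difficult"],
    "maze":        ["simple", "medium", "difficult"],
}

# Every scene satisfies the "at least three training levels" requirement.
assert all(len(levels) >= 3 for levels in TRAINING_SCENES.values())
```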
(III) advantageous effects
The invention has the following beneficial effects. In the provided VR-device-based spatial cognitive ability training method and system, training scenes are constructed inside the VR device, so the user can train in a virtual scene. Compared with prior-art real-world training, this reduces cost, is safe, lets the user reach the training goal without leaving home, ensures the user's safety, and improves training efficiency.
Furthermore, in VR training, the current user character displayed in the virtual scene walks automatically once the direction information triggered by the user is correct, which reduces training difficulty and improves the training effect.
Drawings
FIG. 1 is a flowchart illustrating the steps of a VR-based spatial cognitive ability training method according to the present invention;
fig. 2 is a schematic diagram of reachable intersections in the path information determined in the first and second embodiments of the present invention;
fig. 3 is a diagram illustrating a structure of reachable intersection data in the determined path information according to an embodiment of the present invention.
[ description of reference ]
D0: a starting point;
D1: a second intersection;
D2: a third intersection;
D3: an end point;
D4: a fifth intersection;
D5: a sixth intersection;
D6: a seventh intersection;
D7: an eighth intersection.
Detailed Description
For the purpose of better explaining the present invention and to facilitate understanding, the present invention will be described in detail by way of specific embodiments with reference to the accompanying drawings.
Example one
Assume that the path information shown in Fig. 2 is determined in the first embodiment, which is described in detail in conjunction with Figs. 1-2.
The spatial cognitive ability training system comprises: a VR device and an operating handle; the operating handle is used for sending an operating signal to VR equipment; a plurality of training scenes for training a user are pre-constructed in VR equipment, and each training scene has at least three training levels; the training method comprises the following steps:
S1, aiming at the user to be trained, after receiving the scene trigger instruction, displaying a virtual reality scene to the user to be trained;
S2, after receiving a training level instruction for the displayed virtual reality scene, determining the path information shown in Fig. 2 in the displayed virtual reality scene;
S3, receiving a training start instruction triggered by the user to be trained, displaying a transparent current user character with rigid-body attributes in the virtual reality scene, and controlling it with the Navigation component to walk automatically along the path information shown in Fig. 2;
S4, when a walking direction must be selected at the starting point D0 of the path information, sending an operation prompt tone to the user to be trained and displaying direction arrows toward the second intersection D1, the fifth intersection D4, and the sixth intersection D5 in the path of the displayed virtual reality scene;
S5, receiving direction information triggered by the user to be trained and judging whether it is the direction of the second intersection D1; if the direction arrow of the fifth intersection D4 or the sixth intersection D5 is selected, sending a selection error prompt tone to the user to be trained and no longer displaying the wrongly selected arrow;
S6, receiving the direction information triggered again by the user to be trained and judging whether it is the direction of the second intersection D1; once it is, the current user character displayed in the virtual reality scene walks automatically along the path information to the position of the second intersection D1;
S7, when a walking direction must be selected at the second intersection D1 of the path information, sending an operation prompt tone to the user to be trained and displaying direction arrows toward the third intersection D2 and the seventh intersection D6 in the path of the displayed virtual reality scene;
S8, receiving direction information triggered by the user to be trained and judging whether it is the direction of the third intersection D2; if the direction arrow of the seventh intersection D6 is selected, sending a selection error prompt tone to the user to be trained and no longer displaying the arrow of the seventh intersection D6;
S9, receiving direction information of the third intersection D2 triggered by the user to be trained; the current user character displayed in the virtual reality scene walks automatically along the path information to the position of the third intersection D2;
and S10, receiving direction information triggered by the user to be trained and judging whether it is the direction of the end point D3; if the direction arrow of the end point D3 is selected, the current user character displayed in the virtual reality scene walks automatically along the path information to the position of the end point D3, and the training is finished.
Example two
Assume that the path information shown in Fig. 2 is also determined in the second embodiment, which is described in detail in conjunction with Figs. 1-3.
S1, aiming at the user to be trained, after receiving the scene trigger instruction, displaying a virtual reality scene to the user to be trained;
S2, after receiving a training level instruction for the displayed virtual reality scene, determining the path information shown in Fig. 2 in the displayed virtual reality scene;
the path information for fig. 2 includes: a first array for storing all reachable intersection indexes from a starting point D0 to an end point D3 in the current virtual scene, wherein the reachable intersection indexes comprise intersection indexes of a correct route and wrong intersection indexes, the intersection indexes of the correct route are stored in sequence preferentially and numbered in sequence from 0, and the index sequence number of the starting point D0 in the first array in FIG. 2 is 0; the index sequence number of the second intersection D1 in the first array is 1; the index sequence number of the third intersection D2 in the first array is 2; the index sequence number of the end point D3 in the first array is 3; the index sequence number of the fifth intersection D4 in the first array is 4; the index sequence number of the sixth intersection D5 in the first array is 5; the index sequence number of the seventh intersection D6 in the first array is 6; the index sequence number of the eighth intersection D7 in the first array is 7; wherein the storage sequence of the intersection indexes in the first array is as follows: 0. 1, 2, 3, 4, 5, 6 and 7.
The data structure storing the relationships between all reachable intersections from the starting point to the end point in the current virtual scene is shown in Fig. 3. A value of 1 in row i, column j means the intersection with index i can walk directly to the intersection with index j. When the user stands at intersection i, the selectable next intersections are therefore obtained simply by traversing row i: every column holding the value 1 marks a selectable intersection, and the corresponding prompting route arrows are regenerated from these intersections so the user can choose the next walking route. Because the intersections on the correct route were placed into the array first and in order, judging whether a reachable intersection j is the correct one reduces to checking whether j equals i + 1: if so, it is the correct intersection; otherwise, it is a wrong one.
Here, the intersection with index i is the intersection where the user currently stands in the virtual scene, and the intersection with index j is the next intersection the user selects from that position.
A second array stores, in route order, the indexes of the intersections on the correct route from the second intersection D1 to the end point D3 in the current virtual scene; the intersection indexes are stored in the second array in the order 1, 2, 3. Each time an intersection is reached, its index is compared with the indexes in the second array; if it equals the index of the last intersection in the second array, the end point of the training has been reached.
S3, receiving a training start instruction triggered by the user to be trained and displaying a transparent current user character with rigid-body attributes in the virtual reality scene; according to the path information shown in Fig. 2, the Navigation module obtains the destination attribute of the character control component NavMeshAgent through the character control script code and assigns it the coordinates of the next intersection, so that the user walks automatically to the next intersection in the virtual scene.
The Navigation component automatically creates a navigation mesh from the geometry of the virtual reality scene terrain and can dynamically calculate the optimal route between any two points on the navigation mesh.
The user character carries:
the Player component from SteamVR, which makes the positions of the VR headset, handle, and play area in the virtual reality scene follow the movement of the user's character; and
a character control component NavMeshAgent configured with speed, angular velocity, and acceleration.
S4, when a walking direction must be selected at the starting point D0 of the path information, sending an operation prompt tone to the user to be trained and displaying direction arrows toward the second intersection D1, the fifth intersection D4, and the sixth intersection D5 in the path of the displayed virtual reality scene;
S5, receiving direction information triggered by the user to be trained and judging whether it points along the correct route: at the starting point D0 the current index is 0 and the index of the second intersection D1 is 1, satisfying 1 = 0 + 1, so D1 is the correct direction; if the direction arrow of the fifth intersection D4 or the sixth intersection D5 is selected, sending a selection error prompt tone to the user to be trained and no longer displaying the wrongly selected arrow;
S6, receiving the direction information triggered again by the user to be trained and judging whether it is the direction of the second intersection D1; once it is, the Navigation component obtains the destination attribute of the NavMeshAgent through the character control script code and assigns it the coordinates of the second intersection D1, so that the current user character displayed in the virtual reality scene walks automatically along the path information to the position of the second intersection D1;
S7, when a walking direction must be selected at the second intersection D1 of the path information, sending an operation prompt tone to the user to be trained and displaying direction arrows toward the third intersection D2 and the seventh intersection D6 in the path of the displayed virtual reality scene;
S8, receiving direction information triggered by the user to be trained and judging whether it is the direction of the third intersection D2; if the direction arrow of the seventh intersection D6 is selected, sending a selection error prompt tone to the user to be trained and no longer displaying the arrow of the seventh intersection D6;
S9, receiving direction information of the third intersection D2 triggered by the user to be trained; the Navigation component obtains the destination attribute of the NavMeshAgent through the character control script code and assigns it the coordinates of the third intersection D2, so that the current user character walks automatically along the path information to the position of the third intersection D2;
and S10, receiving direction information triggered by the user to be trained and judging whether it is the direction of the end point D3; if the direction arrow of the end point D3 is selected, the current user character displayed in the virtual reality scene walks automatically along the path information to the position of the end point D3, and the training is finished.
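Under the i + 1 rule, the sequence of choices in steps S4 to S10 can be replayed as a short trace. The wrong first picks at D0 and D1 below are illustrative, matching the error branches the steps describe:

```python
# (current index, chosen index) pairs: a wrong pick at D0 (D4, index 4) and
# at D1 (D6, index 6), then the correct picks D1 -> D2 -> D3.
attempts = [(0, 4), (0, 1), (1, 6), (1, 2), (2, 3)]

position = 0
for i, j in attempts:
    if j == i + 1:      # correct direction: the character walks on automatically
        position = j
    # otherwise: error prompt tone, wrong arrow hidden, position unchanged

assert position == 3    # the end point D3 is reached; training is finished
```

Because wrong choices leave the position unchanged, the trace converges to the end point exactly when the user eventually picks index i + 1 at each fork.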
The technical principles of the present invention have been described above in connection with specific embodiments, which are intended to explain the principles of the present invention and should not be construed as limiting the scope of the present invention in any way. Based on the explanations herein, those skilled in the art will be able to conceive of other embodiments of the present invention without inventive efforts, which shall fall within the scope of the present invention.

Claims (7)

1. A space cognition ability training method based on VR equipment is characterized in that a space cognition ability training system comprises: a VR device and an operating handle; the operating handle is used for sending an operating signal to VR equipment; a plurality of training scenes for training a user are pre-constructed in VR equipment, and each training scene has at least three training levels; the training method comprises the following steps:
a1, aiming at a user to be trained, after receiving a scene triggering instruction, displaying a virtual reality scene to the user to be trained;
a2, after receiving a training level instruction of the displayed virtual reality scene, determining path information matched with a training level in the training level instruction in the displayed virtual reality scene;
a3, receiving a training start instruction triggered by the user to be trained, displaying the current user character in a virtual reality scene, and walking automatically along the path information;
a4, when a walking direction needs to be selected at a fork in the path information, sending an operation prompt tone to the user to be trained;
a5, receiving direction information triggered by a user to be trained, judging whether the direction information is correct or not, and if not, sending a selection error prompt tone to the user to be trained;
and A6, receiving the direction information triggered again by the user to be trained and judging whether it is correct, repeating until the triggered direction information is correct, whereupon the current user character displayed in the virtual reality scene walks automatically along the path information until it reaches the destination in the path information.
2. The method of claim 1, wherein step a4 further comprises:
and after the operation prompt tone is sent to the user to be trained, displaying the direction arrows to be selected in the path of the displayed virtual reality scene.
3. The method of claim 2, wherein step a5 further comprises:
after the selection error prompt tone is sent to the user to be trained, the wrongly selected arrow among the direction arrows to be selected is no longer displayed to the user.
4. The method according to claim 2, wherein walking automatically along the path information in step A3 comprises:
controlling the user character displayed in the virtual reality scene to walk automatically along the path information using the Navigation component.
5. The method of claim 4, wherein the user character displayed in the virtual reality scene has rigid-body attributes and is transparent.
6. A space cognition ability training system based on VR equipment comprises VR equipment and an operating handle; the operating handle is used for sending an operating signal to VR equipment; a plurality of training scenes for training a user are pre-constructed in VR equipment, and each training scene has at least three training levels; the VR device executes the training method of any one of claims 1 to 5 according to the instructions of the operating handle.
7. The system of claim 6, wherein the VR device has a plurality of pre-constructed training scenarios for training a user, comprising: urban training scenes, supermarket training scenes, indoor training scenes, old town training scenes and maze training scenes;
wherein each of the training scenarios has: a simple training level, a medium training level, and a difficult training level.
CN201910775365.4A 2019-08-21 2019-08-21 VR-based spatial cognitive ability training method and system Pending CN110673719A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910775365.4A CN110673719A (en) 2019-08-21 2019-08-21 VR-based spatial cognitive ability training method and system


Publications (1)

Publication Number Publication Date
CN110673719A (en) 2020-01-10

Family

ID=69075420

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910775365.4A Pending CN110673719A (en) 2019-08-21 2019-08-21 VR-based spatial cognitive ability training method and system

Country Status (1)

Country Link
CN (1) CN110673719A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120108909A1 (en) * 2010-11-03 2012-05-03 HeadRehab, LLC Assessment and Rehabilitation of Cognitive and Motor Functions Using Virtual Reality
CN103268392A (en) * 2013-04-15 2013-08-28 福建中医药大学 Cognitive function training system for scene interaction and application method thereof
CN106693280A (en) * 2016-12-29 2017-05-24 深圳市臻络科技有限公司 Virtual-reality-based Parkinsonism training method, system and device
WO2017141166A1 (en) * 2016-02-19 2017-08-24 Hicheur Halim Device for assessing and training the perceptual, cognitive, and motor performance, and method thereof
CN107519622A (en) * 2017-08-21 2017-12-29 南通大学 Spatial cognition rehabilitation training system and method based on virtual reality and the dynamic tracking of eye
CN109616193A (en) * 2018-12-21 2019-04-12 杭州颐康医疗科技有限公司 A kind of virtual reality cognitive rehabilitation method and system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113888934A (en) * 2021-11-11 2022-01-04 上海市养志康复医院(上海市阳光康复中心) Aphasia rehabilitation training system and training method based on VR visual and auditory guidance
CN114582192A (en) * 2022-03-14 2022-06-03 宁夏安之信工程设计有限公司 Industrial fire accident simulation system based on virtual reality and use method
CN114708946A (en) * 2022-03-22 2022-07-05 北京蓝田医疗设备有限公司 Target guidance ability special training method and device
CN114708946B (en) * 2022-03-22 2022-10-11 北京蓝田医疗设备有限公司 Target guidance ability special training method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 2020-01-10