CN107328424B - Navigation method and device - Google Patents

Navigation method and device

Info

Publication number
CN107328424B
CN107328424B (application CN201710565073.9A)
Authority
CN
China
Prior art keywords
distance
information
image
obstacle
navigation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710565073.9A
Other languages
Chinese (zh)
Other versions
CN107328424A (en)
Inventor
Zhao Wenfang (赵文芳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics China R&D Center
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics China R&D Center, Samsung Electronics Co Ltd filed Critical Samsung Electronics China R&D Center
Priority to CN201710565073.9A
Publication of CN107328424A
Application granted
Publication of CN107328424B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/3407 Route searching; Route guidance specially adapted for specific applications
    • G01C21/3415 Dynamic re-routing, e.g. recalculating the route when the user deviates from calculated route or after detecting real-time traffic data or accidents
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H3/06 Walking aids for blind persons
    • A61H3/061 Walking aids for blind persons with electronic detecting or guiding means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/265 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network constructional aspects of navigation devices, e.g. housings, mountings, displays
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/28 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network with correlation of data from several navigational instruments
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3602 Input other than that of destination using image analysis, e.g. detection of road signs, lanes, buildings, real preceding vehicles using a camera
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3626 Details of the output of route guidance instructions
    • G01C21/3629 Guidance using speech or audio output, e.g. text-to-speech
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/26 Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
    • G01C21/34 Route searching; Route guidance
    • G01C21/36 Input/output arrangements for on-board computers
    • G01C21/3691 Retrieval, searching and output of information related to real-time traffic, weather, or environmental conditions
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 Head-up displays
    • G02B27/017 Head mounted
    • G02B2027/0178 Eyeglass type

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Epidemiology (AREA)
  • Environmental & Geological Engineering (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Optics & Photonics (AREA)
  • Atmospheric Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Ecology (AREA)
  • Pain & Pain Management (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Rehabilitation Therapy (AREA)
  • Animal Behavior & Ethology (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Navigation (AREA)

Abstract

The application discloses a navigation method and a navigation device. One embodiment of the method comprises: acquiring position information of a current position to determine initial navigation path information from the current position to a preset destination; analyzing data acquired by an attitude information acquisition device to determine the current forward direction of a user wearing augmented reality glasses, and analyzing an image acquired by an image acquisition device and data acquired by a distance measurement device to determine an obstacle region and a free movement region in the forward direction; updating the initial path information based on the obstacle region and the free movement region to obtain target navigation path information; and converting the target navigation path information into a voice signal and outputting the voice signal. This embodiment increases the flexibility of navigation.

Description

Navigation method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a navigation method and apparatus.
Background
Blind people face many problems in daily life, one of which is walking independently. Navigation assistance for the blind is therefore essential.
Generally, the blind can rely on guide dogs, tactile paving (blind roads), or intelligent navigation devices for assistance. However, guide dogs are rare, tactile paving in many cities is incomplete, and intelligent navigation devices are designed mainly for users with normal vision and cannot flexibly identify obstacles. Existing navigation approaches therefore suffer from low flexibility.
Disclosure of Invention
It is an object of embodiments of the present application to provide an improved navigation method and apparatus to solve the technical problems mentioned in the background section above.
In a first aspect, an embodiment of the present application provides a navigation method for augmented reality glasses, where the augmented reality glasses are installed with an image acquisition device, an attitude information acquisition device, and a distance measurement device, and the method includes: acquiring position information of a current position to determine initial navigation path information from the current position to a preset destination; analyzing the data acquired by the attitude information acquisition device to determine the current forward direction of the user wearing the augmented reality glasses, and analyzing the image acquired by the image acquisition device and the data acquired by the distance measurement device to determine an obstacle area and a free movement area in the forward direction; updating the initial path information based on the obstacle area and the free movement area to obtain target navigation path information; and converting the target navigation path information into a voice signal and outputting the voice signal.
In some embodiments, obtaining the position information of the current position and the initial navigation path information of the current position to the preset destination includes: acquiring initial position information of a current position based on a global positioning system; acquiring an image acquired by an image acquisition device, analyzing the image and generating road condition information; the method comprises the steps of sending initial position information and road condition information to a cloud server supporting augmented reality glasses, and receiving position information returned by the cloud server and initial navigation path information of a current position to a preset destination determined by the cloud server based on the position information and destination information received in advance.
In some embodiments, analyzing the image collected by the image collecting device and the data collected by the distance measuring device to determine the obstacle area and the free movement area in the forward direction includes: analyzing the image acquired by the image acquisition device to determine an obstacle in the forward direction; analyzing the data collected by the distance measuring device, and determining the distance between the distance measuring device and the obstacle and the speed of the obstacle; based on the grid algorithm, distance, speed and image, a grid map is generated containing the obstacle area, the free movement area and the uncertain area.
In some embodiments, after analyzing the data collected by the ranging device, determining the distance to the obstacle and the speed of the obstacle, the method further comprises: in response to determining that the distance is less than a preset distance threshold, determining a preset volume of a warning sound matched with the distance; and outputting the warning sound of the volume.
In some embodiments, after analyzing the data collected by the ranging device, determining the distance to the obstacle and the speed of the obstacle, the method further comprises: matching the distance with at least one preset distance interval; and determining the distance interval matched with the distance as a target distance interval, and outputting a preset warning sound with the volume matched with the target distance interval.
In some embodiments, the method further comprises: in response to receiving a voice signal of the user, parsing the voice signal to generate a query instruction; executing the query operation indicated by the query instruction to generate a query result; and converting the query result into a voice signal for output.
In some embodiments, the image acquisition device is a camera, the attitude information acquisition device is a micromechanical gyroscope, and the distance measurement device is a laser sensor.
In a second aspect, an embodiment of the present application provides a navigation device for augmented reality glasses, where the augmented reality glasses are installed with an image acquisition device, an attitude information acquisition device, and a distance measurement device, and the navigation device includes: an acquisition unit configured to acquire position information of a current position to determine initial navigation path information from the current position to a preset destination; an analysis unit configured to analyze the data acquired by the attitude information acquisition device to determine the current forward direction of the user wearing the augmented reality glasses, and to analyze the image acquired by the image acquisition device and the data acquired by the distance measurement device to determine an obstacle area and a free movement area in the forward direction; an updating unit configured to update the initial path information based on the obstacle area and the free movement area to obtain target navigation path information; and a first output unit configured to convert the target navigation path information into a voice signal and output the voice signal.
In some embodiments, the obtaining unit comprises: a first acquisition module configured to acquire initial position information of the current position based on a global positioning system; a second acquisition module configured to acquire the image collected by the image acquisition device, analyze the image, and generate road condition information; and a third acquisition module configured to send the initial position information and the road condition information to a cloud server supporting the augmented reality glasses, and to receive the position information returned by the cloud server and the initial navigation path information from the current position to a preset destination, determined by the cloud server based on the position information and destination information received in advance.
In some embodiments, the parsing unit is further configured to: analyzing the image acquired by the image acquisition device to determine an obstacle in the forward direction; analyzing the data collected by the distance measuring device, and determining the distance between the distance measuring device and the obstacle and the speed of the obstacle; based on the grid algorithm, distance, speed and image, a grid map is generated containing the obstacle area, the free movement area and the uncertain area.
In some embodiments, the apparatus further comprises: and the second output unit is configured to respond to the fact that the determined distance is smaller than the preset distance threshold value, determine the preset volume of the warning sound matched with the distance, and output the warning sound of the volume.
In some embodiments, the apparatus further comprises: and the third output unit is configured to match the distance with at least one preset distance interval, determine the distance interval matched with the distance as a target distance interval, and output a preset warning sound with the volume matched with the target distance interval.
In some embodiments, the apparatus further comprises: a fourth output unit configured to, in response to receiving a voice signal of the user, parse the voice signal, generate a query instruction, execute the query operation indicated by the query instruction, generate a query result, and convert the query result into a voice signal for output.
In some embodiments, the image acquisition device is a camera, the attitude information acquisition device is a micromechanical gyroscope, and the distance measurement device is a laser sensor.
In a third aspect, an embodiment of the present application provides augmented reality glasses, including: one or more processors; the image acquisition device is used for acquiring images; the attitude information acquisition device is used for acquiring attitude information; a distance measuring device for measuring distance and speed; a storage device for storing one or more programs which, when executed by one or more processors, cause the one or more processors to implement a method as in any embodiment of the navigation method.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium on which a computer program is stored, which when executed by a processor, implements a method as in any of the embodiments of the navigation method.
According to the navigation method and the navigation device, the position information of the current position and the initial navigation path information are acquired; the current forward direction of the user is then determined, together with the obstacle region and the free movement region in that direction; the initial path information is then updated based on the obstacle region and the free movement region to obtain target navigation path information; and finally the target navigation path information is converted into a voice signal and output, so that a blind user can navigate by voice even without a guide dog or tactile paving, and obstacles can be detected, thereby improving navigation flexibility.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow diagram of one embodiment of a navigation method according to the present application;
FIG. 3 is a schematic diagram of an application scenario of a navigation method according to the present application;
FIG. 4 is a flow chart of yet another embodiment of a navigation method according to the present application;
FIG. 5 is a schematic structural diagram of one embodiment of a navigation device according to the present application;
fig. 6 is a schematic block diagram of a computer system suitable for implementing augmented reality glasses according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 shows an exemplary system architecture 100 to which the navigation method or navigation device of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include augmented reality glasses 101, a network 102, and a server 103. Network 102 is the medium used to provide a communication link between augmented reality glasses 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
Augmented reality glasses 101 may interact with server 103 over network 102 to receive or send messages and the like. The augmented reality glasses 101 may be mounted with various devices, such as an image acquisition device (e.g., various types of cameras), an attitude information acquisition device (e.g., a micro-electro-mechanical system (MEMS) sensor or an inertial measurement unit (IMU)), a distance measurement device (e.g., a laser sensor, a rangefinder, etc.), a microphone, a speaker, and the like.
The server 103 may be a server that provides various services, such as a cloud server that supports the augmented reality glasses 101. The cloud server may receive information (e.g., road condition information) sent by the augmented reality glasses 101, process the received information, and return a processing result (e.g., accurate location information) to the augmented reality glasses 101.
It should be noted that the navigation method provided in the embodiment of the present application is generally executed by the augmented reality glasses 101, and accordingly, the navigation device is generally disposed in the augmented reality glasses 101.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of augmented reality glasses, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a navigation method for augmented reality glasses according to the present application is shown. The augmented reality glasses are provided with an image acquisition device, an attitude information acquisition device, and a distance measurement device, and the navigation method comprises the following steps:
step 201, acquiring position information of a current position and initial navigation path information from the current position to a preset destination.
In this embodiment, the electronic device (for example, the augmented reality glasses 101 shown in fig. 1) on which the navigation method operates may perform positioning based on various existing positioning manners to acquire the position information of the current position. As examples, the positioning manner may include positioning using the Global Positioning System (GPS), the BeiDou Navigation Satellite System (BDS), the Assisted Global Positioning System (AGPS), GLONASS, base-station positioning (LBS), and the like. It should be noted that the above positioning manners are well-known technologies that are widely researched and applied at present and are not described here again.
It should be noted that the location information may be the geographic coordinates of the current location. The preset destination may be any place preset by the user wearing the electronic device, such as a park or a hospital. The initial navigation path information may include the path from the current position to the destination, the coordinates of each turning point, directions, and the like.
Step 202, analyzing the data collected by the attitude information collecting device to determine the current forward direction of the user, and analyzing the image collected by the image collecting device and the data collected by the distance measuring device to determine the obstacle area and the free movement area in the forward direction.
In this embodiment, the electronic device may be worn on the head of the user. The electronic device may first acquire the data collected by the installed attitude information acquisition device, the image collected by the installed image acquisition device, and the data collected by the installed distance measurement device. The electronic device may then analyze the data collected by the attitude information acquisition device to determine the current forward direction of the user, and analyze the image collected by the image acquisition device and the data collected by the distance measurement device to determine the obstacle region and the free movement region in the forward direction. The attitude information acquisition device may collect the attitude information of the electronic device in real time, and the electronic device may determine the current forward direction of the user based on this attitude information. The method of determining the forward direction from attitude information is a well-known technique that is widely studied and applied at present and is not described here again. It should be noted that the image acquisition device may capture images at a fixed or variable frequency, and the distance measurement device may periodically acquire the distance and speed of objects within its measuring range.
In this embodiment, after determining the forward direction, the electronic device may determine the obstacle region and the free movement region according to the following steps. First, the acquired image may be parsed based on image recognition techniques. Specifically, the electronic device may first preprocess the image: as an example, an image enhancement operation may be performed using gray-level histogram equalization, interference suppression, edge sharpening, pseudo-color processing, or the like to increase the sharpness of the image. In addition, the electronic device may perform a color space conversion operation on the image; in practice, the color space of the target image may be any one of the RGB (red, green, blue) color space, the HSV (hue, saturation, value) color space, and the HSI (hue, saturation, intensity) color space, and is not limited to these examples. The electronic device may further apply processing such as image coding and compression, image restoration, image segmentation, image tilt correction, image graying, and image layering. Second, after preprocessing the image, the electronic device may extract features (e.g., shapes, colors, edges, intersections) using various image recognition techniques, and recognize each object in the image (e.g., roads, buildings, billboards, street lamps, people, cars, trees, trash cans, cartons) based on a pre-trained image recognition model; the model may be generated by training a deep neural network with machine learning. The electronic device may then determine the obstacles based on the identified objects and their physical locations in the image; for example, an identified carton lying in the center of the roadway may be determined to be an obstacle. Third, the electronic device may determine the obstacle region and the free movement region from the data collected by the distance measurement device (including distance, speed, and the like) and the image, using a multi-sensor fusion algorithm. In practice, the electronic device may determine the area where the road is located as the free movement area.
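As an illustrative sketch only (Python; not part of the claimed method), the pipeline above might be organized as follows. The `recognize` callable is a hypothetical stand-in for the pre-trained deep-learning recognition model, and the central-third-of-frame rule for deciding what lies in the walking corridor is an assumption introduced for this example.

```python
import cv2

def preprocess(image_bgr):
    """Enhance a camera frame before recognition: gray-level histogram
    equalization for contrast, then an edge-preserving denoise."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    equalized = cv2.equalizeHist(gray)
    return cv2.bilateralFilter(equalized, 9, 75, 75)

def detect_obstacles(image_bgr, recognize):
    """`recognize` is a hypothetical stand-in for the pre-trained model;
    it is assumed to return (label, (x, y, w, h)) detections."""
    processed = preprocess(image_bgr)
    height, width = processed.shape[:2]
    obstacles = []
    for label, (x, y, w, h) in recognize(processed):
        # Assumed heuristic: objects overlapping the central third of the
        # frame lie roughly in the walking corridor and count as obstacles.
        if x < 2 * width / 3 and x + w > width / 3:
            obstacles.append((label, (x, y, w, h)))
    return obstacles
```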
In some optional implementations of the present embodiment, after determining the forward direction, the electronic device may determine the obstacle region and the free movement region according to the following steps: first, the electronic device may analyze the image captured by the image acquisition device to identify obstacles in the forward direction; then, the data collected by the distance measurement device may be analyzed to determine the distance to each obstacle and its speed; finally, a grid map including the obstacle region, the free movement region, and the uncertain region may be generated based on a grid algorithm, the distances, the speeds, and the image. The uncertain region may be the region of the grid map other than the obstacle region and the free movement region.
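A minimal occupancy-grid sketch of this kind of map follows; the cell size, grid dimensions, and ray-casting scheme are illustrative assumptions rather than the patent's grid algorithm, and obstacle speed, which the patent also feeds into the map, is omitted for brevity.

```python
import numpy as np

FREE, OBSTACLE, UNKNOWN = 0, 1, 2

def build_grid_map(ranging_hits, grid_size=(40, 40), cell_m=0.25):
    """ranging_hits: iterable of (distance_m, bearing_rad) readings from
    the distance measuring device, one per detected obstacle. Cells along
    each ray up to the hit are marked FREE, the hit cell OBSTACLE, and
    everything never observed stays UNKNOWN."""
    grid = np.full(grid_size, UNKNOWN, dtype=np.uint8)
    origin = (grid_size[0] - 1, grid_size[1] // 2)  # wearer at bottom center
    for dist, bearing in ranging_hits:
        steps = int(dist / cell_m)
        for i in range(steps + 1):
            r = origin[0] - int(round(i * np.cos(bearing)))
            c = origin[1] + int(round(i * np.sin(bearing)))
            if not (0 <= r < grid_size[0] and 0 <= c < grid_size[1]):
                break
            grid[r, c] = OBSTACLE if i == steps else FREE
    return grid
```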
In some optional implementations of the embodiment, in response to determining that the distance between the electronic device and the obstacle is smaller than a preset distance threshold, the electronic device may determine a preset volume of a warning sound matching the distance; then, a warning sound of the above volume may be output. It should be noted that, the electronic device may store a corresponding relationship (for example, a linear relationship) between the volume and the distance in advance, and the smaller the distance is, the larger the volume is.
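The linear distance-to-volume relationship mentioned above can be sketched in a few lines; the 5-meter threshold and the unit volume scale are assumptions for illustration.

```python
def warning_volume(distance_m, threshold_m=5.0, max_volume=1.0):
    """The smaller the distance, the larger the volume; no warning is
    produced once the obstacle is beyond the alert threshold."""
    if distance_m >= threshold_m:
        return None  # obstacle too far away: stay silent
    return max_volume * (1.0 - distance_m / threshold_m)
```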
In some optional implementation manners of this embodiment, the electronic device may first match a distance between the electronic device and the obstacle with at least one preset distance interval, then determine the distance interval matched with the distance as a target distance interval, and output a preset warning sound with a volume matched with the target distance interval. It should be noted that the electronic device may store at least one preset distance interval in advance, for example, a distance interval from 5 meters to 10 meters, a distance interval from 2 meters to 5 meters, a distance interval from 1 meter to 2 meters, a distance interval from zero to 1 meter, and the like, and store the sound volume preset by the technician to match with each distance interval in advance.
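The interval-based variant might look as follows; the interval bounds and volumes are illustrative stand-ins for the technician-preset values, not figures from the patent.

```python
# (lower_m, upper_m, volume) -- illustrative values only
DISTANCE_INTERVALS = [
    (5.0, 10.0, 0.25),
    (2.0, 5.0, 0.50),
    (1.0, 2.0, 0.75),
    (0.0, 1.0, 1.00),
]

def interval_volume(distance_m):
    """Return the preset volume of the target distance interval, or None
    when the distance matches no interval."""
    for lower, upper, volume in DISTANCE_INTERVALS:
        if lower <= distance_m < upper:
            return volume
    return None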
In some optional implementations of this embodiment, the electronic device may further determine the three-dimensional size of the obstacle based on the distances acquired by the distance measuring device, and output voice information describing this three-dimensional size after the user issues a voice instruction querying the obstacle information.
It should be noted that the method for determining an obstacle based on data acquired by the distance measuring device and the image acquisition device, the grid algorithm, and the like are well-known technologies widely studied and applied at present, and are not described herein again.
In some optional implementation manners of this embodiment, the image acquisition device may be a camera, the attitude information acquisition device may be a micromechanical gyroscope, and the distance measurement device may be a laser sensor.
And step 203, updating the initial path information based on the obstacle area and the free movement area to obtain target navigation path information.
In this embodiment, the electronic device may update the initial route information based on the obstacle area and the free movement area to obtain the target navigation route information. Specifically, the electronic device may locally adjust any segment of the initial route that passes through an obstacle, so that the adjusted segment avoids the obstacle and lies within the free movement area. The adjusted segment then replaces the original one, and the resulting route information is taken as the target navigation path information.
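One simple way to realize such a local adjustment on the grid map sketched earlier is a breadth-first search through free cells. The sketch below reuses the FREE constant and numpy grid from that example; it is an illustrative stand-in, not the patent's replanning algorithm.

```python
from collections import deque

def detour(grid, start, goal, free=0):
    """Breadth-first search over cells marked `free` (FREE above).
    Returns a list of (row, col) cells from start to goal, or None if
    no obstacle-avoiding path exists."""
    rows, cols = grid.shape
    parent = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            nr, nc = nxt
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr, nc] == free and nxt not in parent):
                parent[nxt] = cell
                queue.append(nxt)
    return None
```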
And step 204, converting the target navigation path information into a voice signal, and outputting the voice signal.
In this embodiment, the electronic device may convert the target navigation path information into a voice signal and output the voice signal. In practice, the user may receive the voice signal through a bluetooth headset or a wired headset. In addition, the electronic device may further include a speaker, and the electronic device may further output the voice information through the speaker.
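Purely as an illustration of the voice-output step, an offline text-to-speech library such as pyttsx3 (an assumption; the patent does not name a TTS engine) could render the guidance:

```python
import pyttsx3  # assumption: an offline TTS engine is available on the device

def speak(instruction_text, volume=1.0):
    """Convert one navigation instruction into speech and play it."""
    engine = pyttsx3.init()
    engine.setProperty("volume", volume)  # 0.0 (mute) to 1.0 (full)
    engine.say(instruction_text)
    engine.runAndWait()
```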
In some optional implementations of this embodiment, in response to receiving a voice signal of the user (for example, a query for a nearby hospital or a nearby mall), the electronic device parses the voice signal to generate a query instruction; the electronic device can then execute the query operation indicated by the query instruction and generate a query result; finally, the query result is converted into a voice signal for output.
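A toy dispatch for such voice queries is sketched below. The transcript is assumed to come from a speech-to-text front end, and `poi_lookup` is a hypothetical helper for querying nearby points of interest; neither is specified by the patent.

```python
def handle_voice_query(transcript, poi_lookup):
    """poi_lookup("hospital") is assumed to return strings such as
    ["XX Hospital, 300 m ahead"]. Returns the reply text handed to the
    text-to-speech output."""
    transcript = transcript.strip().lower()
    for keyword in ("hospital", "mall", "pharmacy"):
        if keyword in transcript:
            results = poi_lookup(keyword)
            if results:
                return "Nearest {}: {}".format(keyword, results[0])
            return "No {} found nearby.".format(keyword)
    return "Sorry, the request was not understood."
```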
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the navigation method according to the present embodiment. In the application scenario of fig. 3, an image acquisition device, an attitude information acquisition device, and a distance measurement device are installed in the augmented reality glasses 301. The augmented reality glasses 301 first acquire position information 302 of a current position to determine initial navigation path information 303 of the current position to a preset destination. Then, the augmented reality glasses 301 analyze the data collected by the posture information collecting device to determine the current forward direction of the user wearing the augmented reality glasses, and analyze the image collected by the image collecting device and the data collected by the distance measuring device to determine the obstacle region 304 and the free movement region 305 in the forward direction. Then, the augmented reality glasses 301 update the initial route information 303 based on the obstacle region 304 and the free movement region 305 to obtain the target navigation route information 306. Finally, the augmented reality glasses 301 convert the target navigation path information 306 into a voice signal 307, and output the voice signal 307.
In the method provided by the above embodiment of the present application, the position information of the current position and the initial navigation path information are obtained; the current forward direction of the user is then determined, together with the obstacle region and the free movement region in that direction; the initial path information is then updated based on the obstacle region and the free movement region to obtain target navigation path information; and finally the target navigation path information is converted into a voice signal and output, so that a blind user can navigate by voice even without a guide dog or tactile paving, and obstacles can be detected, thereby improving navigation flexibility.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a navigation method is shown. The process 400 of the navigation method, in which the augmented reality glasses are provided with the image acquisition device, the attitude information acquisition device and the distance measurement device, includes the following steps:
step 401, acquiring initial position information of a current position based on a global positioning system.
In this embodiment, the electronic device (for example, the augmented reality glasses 101 shown in fig. 1) on which the navigation method operates may perform positioning of the current location by using a global positioning system and obtain the initial location information of the current location.
Step 402, acquiring the image acquired by the image acquisition device, analyzing the image, and generating road condition information.
In this embodiment, the electronic device may acquire an image collected by the installed image acquisition device, analyze the image, and generate road condition information. The road condition information may include information such as building names and street names. Here, road signs, building signs, and the like for buildings and/or streets may appear in the image. Specifically, the electronic device may generate the road condition information through the following steps:
First, the electronic device may preprocess the image. As an example, an image enhancement operation may be performed on the image using gray-level histogram equalization, interference suppression, edge sharpening, pseudo-color processing, or the like to increase its sharpness; a color space transformation operation may also be performed; and the image may be subjected to coding and compression, restoration, segmentation, tilt correction, graying, layering, and the like.
Second, the electronic device may recognize the image using various image recognition technologies to generate the road condition information. As an example, the electronic device may first extract features (e.g., shapes, colors, edges, intersections) from the image and identify the objects in it (e.g., road signs, billboards, building signs) based on a pre-trained image recognition model; the model may be generated by training a deep neural network with machine learning. The electronic device may then recognize the characters in the identified road signs, direction boards, billboards, and building signs using OCR (Optical Character Recognition) to obtain the road condition information. Specifically, the electronic device may first perform detection to determine the character shapes, and then translate the character shapes into computer text using various character recognition methods (e.g., Euclidean space comparison, dynamic programming comparison, neural network-based character comparison). It should be noted that the above optical character recognition methods and image recognition technologies are well-known techniques that are widely researched and applied at present and are not described here again.
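As a rough sketch of the OCR step under stated assumptions (Tesseract with Chinese and English language packs via pytesseract; the patent does not name an OCR engine, and sign regions would first be located by the recognition model):

```python
import cv2
import pytesseract  # assumption: Tesseract OCR is installed on the device

def road_condition_info(sign_region_bgr):
    """Read street or building names from a cropped sign region and
    return them as road condition information."""
    gray = cv2.cvtColor(sign_region_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary, lang="chi_sim+eng")
    return [line.strip() for line in text.splitlines() if line.strip()]
```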
Step 403, sending the initial position information and the road condition information to a cloud server supporting the augmented reality glasses, and receiving the position information returned by the cloud server and the initial navigation path information from the current position to a preset destination, determined by the cloud server based on the position information and the pre-received destination information.
In this embodiment, the electronic device may send the initial position information and the road condition information to a cloud server (e.g., the server 103 shown in fig. 1) supporting the augmented reality glasses through a wired or wireless connection, and receive the position information returned by the cloud server together with the initial navigation path information from the current position to the preset destination, determined by the cloud server based on that position information and the destination information received in advance. The wireless connection may include, but is not limited to, 3G/4G, WiFi, Bluetooth, WiMAX, Zigbee, UWB (ultra-wideband), and other wireless connection means now known or developed in the future. In practice, the server may determine accurate location information for the current location based on the initial location information and the road condition information, and then determine the initial navigation path information from the current location to the preset destination based on the determined location information and the destination information received in advance.
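The exchange with the cloud server might look like the following if it were carried over HTTP/JSON; the endpoint URL, payload schema, and transport are all assumptions, since the patent fixes neither protocol nor format.

```python
import requests

CLOUD_URL = "https://example.com/ar-glasses/route"  # hypothetical endpoint

def fetch_initial_route(initial_position, road_info, destination):
    """Send the coarse GPS fix plus OCR'd road condition info; receive
    the corrected position and the initial navigation path."""
    payload = {
        "position": initial_position,  # e.g. {"lat": 39.9, "lon": 116.4}
        "road_info": road_info,        # e.g. output of road_condition_info()
        "destination": destination,
    }
    resp = requests.post(CLOUD_URL, json=payload, timeout=5)
    resp.raise_for_status()
    data = resp.json()
    return data["position"], data["route"]
```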
And step 404, analyzing the data acquired by the attitude information acquisition device, determining the current advancing direction of the user, analyzing the image acquired by the image acquisition device and the data acquired by the distance measurement device, and determining the barrier area and the free movement area in the advancing direction.
In this embodiment, the electronic device may be worn on the head of the user. The electronic device may first acquire the data collected by the installed attitude information acquisition device, the image collected by the installed image acquisition device, and the data collected by the installed distance measurement device. The electronic device may then analyze the data collected by the attitude information acquisition device to determine the current forward direction of the user, and analyze the image collected by the image acquisition device and the data collected by the distance measurement device to determine the obstacle region and the free movement region in the forward direction. Specifically, after determining the forward direction, the electronic device may determine the obstacle region and the free movement region as follows: first, the electronic device may analyze the image captured by the image acquisition device to identify obstacles in the forward direction; then, the data collected by the distance measurement device may be analyzed to determine the distance to each obstacle and its speed; finally, a grid map including the obstacle region, the free movement region, and the uncertain region may be generated based on a grid algorithm, the distances, the speeds, and the image. As an example, the distances, speeds, and image may be fused using a multi-sensor fusion algorithm, and the grid map containing the three regions may then be generated based on a grid algorithm. The uncertain region may be the region of the grid map other than the obstacle region and the free movement region.
Step 405, updating the initial path information based on the obstacle area and the free movement area to obtain the target navigation path information.
In this embodiment, the electronic device may update the initial route information based on the obstacle area and the free movement area to obtain the target navigation route information. Specifically, the electronic device may locally adjust a route that needs to pass through an obstacle in the initial route information so as to avoid the obstacle and position the adjusted route in the free movement area. And then, replacing the original path with the adjusted path, and determining the initial path information after replacing the path as the target path information.
Step 406, converting the target navigation path information into a voice signal, and outputting the voice signal.
In this embodiment, the electronic device may convert the target navigation path information into a voice signal and output the voice signal. In practice, the user may receive the voice signal through a bluetooth headset or a wired headset. In addition, the electronic device may further include a speaker, and the electronic device may further output the voice information through the speaker.
It should be noted that the operations of steps 404-406 are substantially the same as those of steps 202-204, and are not described here again.
As can be seen from fig. 4, compared with the embodiment corresponding to fig. 2, the process 400 of the navigation method in this embodiment highlights the step of determining the position information and the initial navigation path information based on the road condition information. Therefore, the scheme described in this embodiment can determine more accurate position information and improves the accuracy of navigation.
With further reference to fig. 5, as an implementation of the method shown in the above figures, the present application provides an embodiment of a navigation device, which corresponds to the embodiment of the method shown in fig. 2 and can be specifically applied to augmented reality glasses in which an image acquisition device, an attitude information acquisition device, and a distance measurement device are installed.
As shown in fig. 5, the navigation device 500 of the present embodiment includes: an obtaining unit 501 configured to obtain location information of a current location to determine initial navigation path information from the current location to a preset destination; an analyzing unit 502 configured to analyze data acquired by the attitude information acquisition device, determine the current forward direction of the user wearing the augmented reality glasses, analyze the image acquired by the image acquisition device and the data acquired by the distance measurement device, and determine an obstacle region and a free movement region in the forward direction; an updating unit 503 configured to update the initial route information based on the obstacle area and the free movement area to obtain target navigation route information; and a first output unit 504 configured to convert the target navigation path information into a voice signal and output the voice signal.
In this embodiment, the obtaining unit 501 may perform positioning based on various existing positioning manners to obtain the position information of the current position. As an example, the positioning manner may include GPS positioning, BDS positioning, AGPS positioning, GLONASS positioning, LBS positioning, and the like.
In this embodiment, the analysis unit 502 may first acquire data acquired by the installed attitude information acquisition device, an image acquired by the installed image acquisition device, and data acquired by the installed distance measurement device. Then, the data collected by the attitude information collection device can be analyzed to determine the current advancing direction of the user, and the image collected by the image collection device and the data collected by the distance measurement device can be analyzed to determine the obstacle area and the free movement area in the advancing direction.
In this embodiment, the updating unit 503 may update the initial route information based on the obstacle area and the free movement area to obtain the target navigation route information. Specifically, the route that needs to pass through the obstacle in the initial route information may be locally adjusted to avoid the obstacle and to locate the adjusted route in the free movement area. And then, replacing the original path with the adjusted path, and determining the initial path information after replacing the path as the target path information.
In this embodiment, the first output unit 504 may convert the target navigation path information into a voice signal and output the voice signal. In practice, the user may receive the voice signal through a bluetooth headset or a wired headset. In addition, the electronic device may further include a speaker, and the electronic device may further output the voice information through the speaker.
In some optional implementations of this embodiment, the obtaining unit 501 may include a first acquisition module, a second acquisition module, and a third acquisition module (not shown in the figure). The first acquisition module may be configured to acquire initial position information of the current position based on a global positioning system. The second acquisition module may be configured to acquire the image collected by the image acquisition device, analyze the image, and generate road condition information. The third acquisition module may be configured to send the initial position information and the road condition information to a cloud server supporting the augmented reality glasses, and to receive the position information returned by the cloud server and the initial navigation path information from the current position to a preset destination, determined by the cloud server based on the position information and the destination information received in advance.
In some optional implementation manners of this embodiment, the analyzing unit 502 may be further configured to analyze an image acquired by the image acquisition device to determine an obstacle in the forward direction; analyzing the data collected by the distance measuring device, and determining the distance between the distance measuring device and the obstacle and the speed of the obstacle; generating a grid map including the obstacle region, the free movement region, and the uncertain region based on a grid algorithm, the distance, the speed, and the image.
In some optional implementations of the present embodiment, the navigation device 500 may further include a second output unit (not shown in the figure). The second output unit may be configured to determine a preset volume of a warning sound matching the distance in response to determining that the distance is smaller than a preset distance threshold, and output the warning sound at the volume.
In some optional implementations of the present embodiment, the navigation device 500 further includes a third output unit (not shown in the figure). The third output unit may be configured to match the distance with at least one preset distance interval, determine a distance interval matching the distance as a target distance interval, and output a warning sound with a preset volume matching the target distance interval.
In some optional implementations of the present embodiment, the navigation device 500 may further include a fourth output unit (not shown in the figure). The fourth output unit may be configured to, in response to receiving a voice signal of a user, parse the voice signal, generate a query instruction, perform a query operation indicated by the query instruction, generate a query result, and convert the operation result into a voice signal for output.
In some optional implementations of this embodiment, the image acquisition device is a camera, the attitude information acquisition device is a micro-mechanical gyroscope, and the distance measurement device is a laser sensor.
In the apparatus provided in the above embodiment of the present application, the obtaining unit 501 obtains the position information of the current position and the initial navigation path information; the analyzing unit 502 then determines the current forward direction of the user together with the obstacle region and the free movement region in that direction; the updating unit 503 then updates the initial route information based on the obstacle area and the free movement area to obtain the target navigation route information; finally, the first output unit 504 converts the target navigation path information into a voice signal and outputs it, so that a blind user can navigate by voice even without a guide dog or tactile paving, and obstacles can be detected, thereby improving navigation flexibility.
Referring now to fig. 6, shown is a schematic block diagram of a computer system 600 suitable for implementing augmented reality glasses according to embodiments of the present application. The augmented reality glasses shown in fig. 6 are only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present application.
As shown in fig. 6, the computer system 600 includes a central processing unit (CPU) 601 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage section 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data necessary for the operation of the system 600. The CPU 601, ROM 602, and RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a microphone and the like; an output portion 607 including a liquid crystal display (LCD), a speaker, and the like; a storage section 608 including a hard disk and the like; and a communication section 609 including a network interface card such as a LAN card or a modem. The communication section 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 610 as necessary, so that a computer program read out from it can be installed into the storage section 608 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 609, and/or installed from the removable medium 611. The computer program performs the above-described functions defined in the method of the present application when executed by a Central Processing Unit (CPU) 601. It should be noted that the computer readable medium described herein can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or hardware. The described units may also be provided in a processor, which may be described as: a processor comprising an acquisition unit, an analysis unit, an updating unit, and a first output unit. The names of these units do not, in some cases, constitute a limitation of the unit itself; for example, the acquisition unit may also be described as a "unit that acquires position information of the current position".
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist separately without being assembled into the apparatus. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire position information of a current position to determine initial navigation path information from the current position to a preset destination; analyze the data collected by the attitude information acquisition device to determine the current forward direction of the user wearing the augmented reality glasses, and analyze the image collected by the image acquisition device and the data collected by the distance measuring device to determine an obstacle area and a free movement area in the forward direction; update the initial navigation path information based on the obstacle area and the free movement area to obtain target navigation path information; and convert the target navigation path information into a voice signal and output the voice signal.
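The four operations above form one iteration of the navigation loop. As a minimal, illustrative sketch of how they might fit together (the disclosure does not prescribe an implementation; the types, names, and grid-cell waypoint representation below are all assumptions):

```python
from dataclasses import dataclass

@dataclass
class SensorFrame:
    position: tuple       # rough fix from positioning, e.g. (lat, lon)
    heading_deg: float    # forward direction parsed from the attitude data
    obstacle_cells: set   # grid cells covered by detected obstacles
    free_cells: set       # grid cells confirmed free by image + ranging data

def update_route(initial_route, frame):
    """Obtain the target path by dropping waypoints that fall in an obstacle cell."""
    return [wp for wp in initial_route if wp not in frame.obstacle_cells]

def navigate_step(initial_route, frame, speak):
    """One loop iteration: replan around obstacles, then announce by voice."""
    target_route = update_route(initial_route, frame)
    speak(f"Heading {frame.heading_deg:.0f} degrees, "
          f"{len(target_route)} waypoints remaining.")
    return target_route

# One iteration with synthetic sensor data; `print` stands in for text-to-speech.
frame = SensorFrame(position=(39.9, 116.4), heading_deg=90.0,
                    obstacle_cells={(2, 3)}, free_cells={(1, 3), (3, 3)})
route = navigate_step([(1, 3), (2, 3), (3, 3)], frame, speak=print)
```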
The above description is only a preferred embodiment of the present application and an illustration of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention disclosed herein is not limited to the particular combination of the features described above, but also encompasses other arrangements formed by any combination of the above features or their equivalents without departing from the spirit of the invention. For example, the above features may be replaced with (but are not limited to) technical features having similar functions disclosed in the present application.

Claims (14)

1. A navigation method for augmented reality glasses, characterized in that an image acquisition device, an attitude information acquisition device and a distance measuring device are installed in the augmented reality glasses, and the method comprises the following steps:
acquiring position information of a current position to determine initial navigation path information from the current position to a preset destination;
analyzing the data collected by the attitude information acquisition device to determine the current forward direction of the user wearing the augmented reality glasses, and analyzing the image collected by the image acquisition device and the data collected by the distance measuring device to determine an obstacle area and a free movement area in the forward direction, comprising: analyzing the image collected by the image acquisition device to determine an obstacle in the forward direction;
analyzing the data collected by the distance measuring device, and determining the distance between the distance measuring device and the obstacle and the speed of the obstacle;
generating a grid map containing the obstacle area, the free movement area and an uncertain area based on a grid algorithm, the distance, the speed and the image;
updating the initial navigation path information based on the obstacle area and the free movement area to obtain target navigation path information;
and converting the target navigation path information into a voice signal, and outputting the voice signal.
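Claim 1 leaves the grid algorithm open. The following is a minimal sketch of one plausible reading, assuming a 2-D occupancy grid in which each detected obstacle is inflated by the distance it could cover within a short horizon, cells confirmed empty by the ranging data are marked free, and untouched cells stay uncertain; the names, cell size, and inflation rule are illustrative assumptions, not the claimed algorithm itself.

```python
import numpy as np

UNCERTAIN, FREE, OBSTACLE = 0, 1, 2

def build_grid_map(obstacles, free_cells, size=(50, 50), cell_m=0.2, horizon_s=1.0):
    """Rasterize detections into a grid of obstacle / free / uncertain cells.

    obstacles:  list of (x_m, y_m, radius_m, speed_mps) in the wearer's frame,
                combining the image detection with the measured distance/speed
    free_cells: iterable of (row, col) indices confirmed empty by ranging
    """
    grid = np.full(size, UNCERTAIN, dtype=np.uint8)
    for r, c in free_cells:
        grid[r, c] = FREE
    for x, y, radius, speed in obstacles:
        # Inflate each obstacle by the distance it may travel within the horizon.
        reach = radius + speed * horizon_s
        r0, c0 = int(round(y / cell_m)), int(round(x / cell_m))
        span = int(np.ceil(reach / cell_m))
        for r in range(max(0, r0 - span), min(size[0], r0 + span + 1)):
            for c in range(max(0, c0 - span), min(size[1], c0 + span + 1)):
                if (r - r0) ** 2 + (c - c0) ** 2 <= span ** 2:
                    grid[r, c] = OBSTACLE
    return grid

# A pedestrian 2 m ahead, 0.3 m wide, moving at 1 m/s, plus two ranged-free cells.
grid = build_grid_map([(5.0, 2.0, 0.3, 1.0)], [(0, 0), (0, 1)])
```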
2. The navigation method according to claim 1, wherein the acquiring position information of a current position to determine initial navigation path information from the current position to a preset destination comprises:
acquiring initial position information of a current position based on a global positioning system;
acquiring the image collected by the image acquisition device, analyzing the image and generating road condition information;
and sending the initial position information and the road condition information to a cloud server supporting the augmented reality glasses, and receiving, from the cloud server, returned position information and initial navigation path information from the current position to the preset destination, the initial navigation path information being determined by the cloud server based on the position information and destination information received in advance.
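One way to read the server exchange in claim 2, sketched under the assumption of a JSON-over-HTTP transport (the disclosure does not specify a protocol; the endpoint URL and field names below are placeholders):

```python
import requests  # assuming an HTTP transport; any RPC mechanism would do

CLOUD_URL = "https://example.com/ar-nav/locate"  # hypothetical endpoint

def refine_position_and_route(gps_fix, road_condition, destination):
    """Send the rough GPS fix plus image-derived road condition information;
    receive the corrected position and the server-computed initial route."""
    payload = {
        "initial_position": gps_fix,       # e.g. {"lat": 39.9, "lon": 116.4}
        "road_condition": road_condition,  # features extracted from the camera image
        "destination": destination,        # destination information received in advance
    }
    reply = requests.post(CLOUD_URL, json=payload, timeout=5)
    reply.raise_for_status()
    body = reply.json()
    return body["position"], body["route"]
```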
3. The navigation method according to claim 1, wherein after the analyzing the data collected by the distance measuring device to determine the distance between the distance measuring device and the obstacle and the speed of the obstacle, the method further comprises:
in response to determining that the distance is less than a preset distance threshold, determining a volume of a preset warning sound that matches the distance;
and outputting the warning sound at the determined volume.
4. The navigation method according to claim 3, wherein after the analyzing the data collected by the distance measuring device to determine the distance between the distance measuring device and the obstacle and the speed of the obstacle, the method further comprises:
matching the distance against at least one preset distance interval;
and determining the distance interval that matches the distance as a target distance interval, and outputting a preset warning sound at a volume matched to the target distance interval.
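Claims 3 and 4 together amount to a threshold check followed by an interval-to-volume lookup. A minimal sketch, with entirely illustrative band boundaries and volume levels:

```python
# (upper bound of interval in metres, volume on a 0-100 scale) - illustrative values
DISTANCE_BANDS = [(1.0, 100), (3.0, 60), (5.0, 30)]

def warning_volume(distance_m, threshold_m=5.0):
    """Return the warning volume for a measured obstacle distance,
    or None when the obstacle is beyond the preset threshold (no warning)."""
    if distance_m >= threshold_m:
        return None
    for upper, volume in DISTANCE_BANDS:
        if distance_m < upper:
            return volume  # nearer obstacles map to louder warnings
    return None

assert warning_volume(0.5) == 100   # very close: loudest warning
assert warning_volume(2.0) == 60    # mid-range interval
assert warning_volume(7.0) is None  # beyond the threshold: silent
```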
5. The navigation method according to claim 3, further comprising:
in response to receiving a voice signal of a user, analyzing the voice signal and generating a query instruction;
executing the query operation indicated by the query instruction and generating a query result;
and converting the query result into a voice signal for output.
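The query handling of claim 5 can be pictured as a parse-execute-respond round trip. A toy sketch, assuming keyword-based intent matching and a simple state dictionary (both are stand-ins; the disclosure does not fix a speech-recognition or query model):

```python
def handle_voice_query(transcript, state):
    """Parse a recognized utterance into a query, execute it, and return the
    sentence to hand to text-to-speech. Intents and state keys are illustrative."""
    text = transcript.lower()
    if "where am i" in text:
        return f"You are at {state['position']}."
    if "how far" in text:
        return f"{state['remaining_m']} metres remain to the destination."
    return "Sorry, I did not understand the question."

state = {"position": "the Main Street crossing", "remaining_m": 120}
print(handle_voice_query("How far is it?", state))  # query result, spoken aloud
```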
6. The navigation method according to any one of claims 1 to 5, wherein the image acquisition device is a camera, the attitude information acquisition device is a micromechanical gyroscope, and the distance measuring device is a laser sensor.
7. A navigation device for augmented reality glasses, characterized in that an image acquisition device, an attitude information acquisition device and a distance measuring device are installed in the augmented reality glasses, and the device comprises:
an acquisition unit configured to acquire position information of a current position to determine initial navigation path information from the current position to a preset destination;
an analysis unit configured to analyze the data collected by the attitude information acquisition device to determine the current forward direction of the user wearing the augmented reality glasses, and to analyze the image collected by the image acquisition device and the data collected by the distance measuring device to determine an obstacle area and a free movement area in the forward direction, by: analyzing the image collected by the image acquisition device to determine an obstacle in the forward direction; analyzing the data collected by the distance measuring device to determine the distance between the distance measuring device and the obstacle and the speed of the obstacle; and generating a grid map containing the obstacle area, the free movement area and an uncertain area based on a grid algorithm, the distance, the speed and the image;
an updating unit configured to update the initial navigation path information based on the obstacle area and the free movement area to obtain target navigation path information;
and a first output unit configured to convert the target navigation path information into a voice signal and output the voice signal.
8. The navigation device according to claim 7, wherein the acquisition unit comprises:
a first acquisition module configured to acquire initial position information of a current position based on a global positioning system;
a second acquisition module configured to acquire the image collected by the image acquisition device, analyze the image and generate road condition information;
and a third acquisition module configured to send the initial position information and the road condition information to a cloud server supporting the augmented reality glasses, and to receive, from the cloud server, returned position information and initial navigation path information from the current position to a preset destination, the initial navigation path information being determined by the cloud server based on the position information and destination information received in advance.
9. The navigation device according to claim 7, wherein the device further comprises:
a second output unit configured to, in response to determining that the distance is less than a preset distance threshold, determine a volume of a preset warning sound that matches the distance, and output the warning sound at that volume.
10. The navigation device according to claim 9, wherein the device further comprises:
a third output unit configured to match the distance against at least one preset distance interval, determine the distance interval that matches the distance as a target distance interval, and output a preset warning sound at a volume matched to the target distance interval.
11. The navigation device according to claim 9, wherein the device further comprises:
a fourth output unit configured to, in response to receiving a voice signal of a user, analyze the voice signal, generate a query instruction, execute the query operation indicated by the query instruction, generate a query result, and convert the query result into a voice signal for output.
12. The navigation device according to any one of claims 7 to 11, wherein the image acquisition device is a camera, the attitude information acquisition device is a micromechanical gyroscope, and the distance measuring device is a laser sensor.
13. Augmented reality glasses, comprising:
one or more processors;
an image acquisition device for acquiring images;
an attitude information acquisition device for acquiring attitude information;
a distance measuring device for measuring distance and speed;
and a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, implements the method according to any one of claims 1-6.
CN201710565073.9A 2017-07-12 2017-07-12 Navigation method and device Active CN107328424B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710565073.9A CN107328424B (en) 2017-07-12 2017-07-12 Navigation method and device

Publications (2)

Publication Number Publication Date
CN107328424A CN107328424A (en) 2017-11-07
CN107328424B true CN107328424B (en) 2020-12-11

Family

ID=60196975

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710565073.9A Active CN107328424B (en) 2017-07-12 2017-07-12 Navigation method and device

Country Status (1)

Country Link
CN (1) CN107328424B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108109210B (en) * 2017-12-15 2019-04-16 广州德科投资咨询有限公司 A kind of scene generating method and intelligent glasses for automatic driving vehicle
CN108426578A (en) * 2017-12-29 2018-08-21 达闼科技(北京)有限公司 A kind of air navigation aid, electronic equipment and readable storage medium storing program for executing based on high in the clouds
CN108309708A (en) * 2018-01-23 2018-07-24 李思霈 Blind-man crutch
CN110096051B (en) * 2018-01-31 2024-04-09 北京京东乾石科技有限公司 Method and device for generating vehicle control command
CN109443346A (en) * 2018-10-29 2019-03-08 温州大学 Monitor navigation methods and systems
CN109621311A (en) * 2018-11-14 2019-04-16 深圳市热丽泰和生命科技有限公司 A kind of parkinsonism posture gait rehabilitation training method based on augmented reality
CN109938973A (en) * 2019-03-29 2019-06-28 北京易达图灵科技有限公司 A kind of visually impaired person's air navigation aid and system
US20220201428A1 (en) * 2019-04-17 2022-06-23 Apple Inc. Proximity Enhanced Location Query
CN110146095B (en) * 2019-05-23 2021-07-20 北京百度网讯科技有限公司 Method and device for navigation of visually impaired people, electronic equipment and computer readable medium
CN110208946A (en) * 2019-05-31 2019-09-06 京东方科技集团股份有限公司 A kind of wearable device and the exchange method based on wearable device
CN110496018A (en) * 2019-07-19 2019-11-26 努比亚技术有限公司 Method, wearable device and the storage medium of wearable device guide blind person
CN112445204B (en) * 2019-08-15 2023-09-26 长沙智能驾驶研究院有限公司 Object movement navigation method and device in construction site and computer equipment
WO2021136967A2 (en) * 2020-01-03 2021-07-08 Mobileye Vision Technologies Ltd. Navigation systems and methods for determining object dimensions
CN111110530A (en) * 2020-01-09 2020-05-08 韩凤明 Radar undershirt and system of intelligence trip
CN111442758A (en) * 2020-03-27 2020-07-24 云南电网有限责任公司玉溪供电局 Laser speed and distance measuring device and method with distance increasing and telescoping functions
CN111494175B (en) * 2020-05-11 2021-11-30 清华大学 Navigation method based on head-mounted equipment, head-mounted equipment and storage medium
CN111595346B (en) * 2020-06-02 2022-04-01 浙江商汤科技开发有限公司 Navigation reminding method and device, electronic equipment and storage medium
CN112765302B (en) * 2021-01-28 2022-10-14 腾讯科技(深圳)有限公司 Method and device for processing position information and computer readable medium
CN113776551A (en) * 2021-09-27 2021-12-10 北京乐驾科技有限公司 Navigation method and device based on augmented reality glasses, glasses and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101886928A (en) * 2009-05-14 2010-11-17 深圳富泰宏精密工业有限公司 Portable electronic device with guiding function
KR20130086861A (en) * 2012-01-26 2013-08-05 이문기 Guide device for blind people using electronic stick and smartphone
KR20160102872A (en) * 2015-02-23 2016-08-31 한국전자통신연구원 The street guidance information database construction method, and the appratus and method for guiding a blind person using the street guidance information database
CN106236525A (en) * 2016-09-23 2016-12-21 河海大学常州校区 A kind of voice guide method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103439972A (en) * 2013-08-06 2013-12-11 重庆邮电大学 Path planning method of moving robot under dynamic and complicated environment
CN105043376A (en) * 2015-06-04 2015-11-11 上海物景智能科技有限公司 Intelligent navigation method and system applicable to non-omnidirectional moving vehicle
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106289290A (en) * 2016-07-21 2017-01-04 触景无限科技(北京)有限公司 A kind of path guiding system and method
CN106843491A (en) * 2017-02-04 2017-06-13 上海肇观电子科技有限公司 Smart machine and electronic equipment with augmented reality

Also Published As

Publication number Publication date
CN107328424A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
CN107328424B (en) Navigation method and device
US11579307B2 (en) Method and apparatus for detecting obstacle
KR102434580B1 (en) Method and apparatus of dispalying virtual route
CN108571974B (en) Vehicle positioning using a camera
US20220221295A1 (en) Generating navigation instructions
JP2020520493A (en) Road map generation method, device, electronic device and computer storage medium
CN109583415A (en) A kind of traffic lights detection and recognition methods merged based on laser radar with video camera
CN111368605A (en) Lane line extraction method and device
CN111461981B (en) Error estimation method and device for point cloud stitching algorithm
CN111339876B (en) Method and device for identifying types of areas in scene
CN110717918B (en) Pedestrian detection method and device
US10515293B2 (en) Method, apparatus, and system for providing skip areas for machine learning
EP3594852B1 (en) Method, apparatus, and system for constructing a polyline from line segments
CN109508579B (en) Method and device for acquiring virtual point cloud data
CN110696826B (en) Method and device for controlling a vehicle
EP3644013A1 (en) Method, apparatus, and system for location correction based on feature point correspondence
US11724721B2 (en) Method and apparatus for detecting pedestrian
US20180293980A1 (en) Visually impaired augmented reality
CN111353453A (en) Obstacle detection method and apparatus for vehicle
US20200272847A1 (en) Method, apparatus, and system for generating feature correspondence from camera geometry
CN112258568B (en) High-precision map element extraction method and device
CN112651991A (en) Visual positioning method, device and computer system
CN115077539A (en) Map generation method, device, equipment and storage medium
US20220004777A1 (en) Information processing apparatus, information processing system, information processing method, and program
CN112099481A (en) Method and system for constructing road model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant