WO2018188479A1 - Augmented reality-based navigation method and apparatus - Google Patents

Augmented reality-based navigation method and apparatus

Info

Publication number
WO2018188479A1
Authority
WO
WIPO (PCT)
Prior art keywords
navigation
angle
coordinates
module
window
Prior art date
Application number
PCT/CN2018/080764
Other languages
English (en)
French (fr)
Inventor
谢荣平
王娜
王梁宇
郭宇嘉
Original Assignee
深圳大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳大学 filed Critical 深圳大学
Publication of WO2018188479A1 publication Critical patent/WO2018188479A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/003Navigation within 3D models or images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Definitions

  • the present invention relates to the field of navigation technologies, and in particular, to a navigation method and apparatus based on augmented reality.
  • the main purpose of the present invention is to provide an augmented reality-based navigation method and device, which aims to solve the problem that the user must manually switch from a two-dimensional map to a three-dimensional scene, or from a three-dimensional scene to a two-dimensional map.
  • a first aspect of the present invention provides an augmented reality-based navigation method, the method comprising:
  • acquiring an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
  • determining a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
  • performing navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, and/or the roll angle and the current navigation mode satisfy a second preset condition.
  • a second aspect of the present invention provides an augmented reality based navigation device, the device comprising:
  • a first acquiring module configured to acquire an acceleration value sensed by the acceleration sensor and a magnetic value sensed by the magnetic sensor
  • a first determining module configured to determine a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
  • a switching module configured to perform a navigation mode switching if the pitch angle and the current navigation mode satisfy the first preset condition, and/or the roll angle and the current navigation mode satisfy the second preset condition.
  • the present invention provides an augmented reality-based navigation method.
  • the embodiment of the present invention obtains an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor, and determines a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth and the roll angle is the angle at which the navigation device swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the navigation mode is switched, without requiring the user to switch manually, thereby improving the user experience.
  • FIG. 1 is a schematic flowchart diagram of a navigation method based on augmented reality according to a first embodiment of the present invention
  • Figure 2 shows the initial state of the OpenGL coordinate system
  • FIG. 3 is a schematic flowchart diagram of a navigation method based on augmented reality according to a second embodiment of the present invention
  • Figure 4 is a schematic view showing the extent of drawing of a perspective projection
  • FIG. 5 is a schematic flow chart of the refinement step of step S304;
  • FIG. 6 is a schematic flow chart of an additional step of a second embodiment of the present invention.
  • FIG. 7 is a schematic diagram of functional modules of a navigation device based on augmented reality according to a third embodiment of the present invention.
  • FIG. 8 is a schematic diagram of functional modules of an augmented reality based navigation device in a live view navigation mode according to a fourth embodiment of the present invention.
  • FIG. 9 is a schematic diagram of a refinement function module of the illumination processing module 804;
  • FIG. 10 is a schematic diagram of functional modules of a navigation device based on augmented reality according to a fifth embodiment of the present invention.
  • FIG. 1 is a schematic flowchart diagram of a navigation method based on augmented reality according to a first embodiment of the present invention, including:
  • Step S101 acquiring an acceleration value sensed by the acceleration sensor and a magnetic value sensed by the magnetic sensor;
  • a graphics programming interface (Open Graphics Library, OpenGL) is built into the navigation device in advance, and the OpenGL coordinate system is used as the coordinate system of the navigation device. As shown in FIG. 2, when the navigation device is held perpendicular to the ground,
  • in the initial state the origin O of the OpenGL coordinate system is located at the center of the navigation device's screen; rightward from the origin is the positive half of the X-axis, upward from the origin is the positive half of the Y-axis, and outward from the origin, perpendicular to the screen, is the positive half of the Z-axis.
  • an acceleration sensor and a magnetic sensor are preset in the navigation device, and the initial states of the acceleration sensor coordinate system and the magnetic sensor coordinate system coincide with the initial state of the OpenGL coordinate system. When the navigation device is placed horizontally (parallel to the ground),
  • the acceleration value obtained by the acceleration sensor is the gravity acceleration vector,
  • whose direction points vertically toward the ground;
  • the magnetic value obtained by the magnetic sensor ignores environmental influence and magnetic declination, the magnetic field direction being the north-south orientation.
  • Step S102 determining a pitch angle and a roll angle based on the acceleration value and the magnetic force value, wherein the pitch angle is an angle at which the navigation device swings back and forth, and the roll angle is an angle at which the navigation device swings left and right;
  • the pitch angle is an angle at which the navigation device swings back and forth
  • the roll angle is an angle at which the navigation device swings left and right.
  • Step S103 If the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the navigation mode is switched.
  • the first preset condition is that the pitch angle is greater than P and/or the pitch angle is less than negative P
  • the second preset condition is that the roll angle is greater than Q and/or the roll angle is less than negative Q
  • the magnitudes of P and Q may be equal or unequal, and can be adjusted according to actual conditions.
  • the navigation mode switching can be divided into the following cases:
  • the two-dimensional navigation mode is switched to the live-view navigation mode
  • the live-view navigation mode is switched to the two-dimensional navigation mode
  • or no switching is performed, depending on whether the pitch angle or the roll angle exceeds its threshold and on the current navigation mode.
  • the embodiment of the present invention obtains the acceleration value sensed by the acceleration sensor and the magnetic value sensed by the magnetic sensor, and determines the pitch angle and the roll angle based on these values.
  • the pitch angle is the angle at which the navigation device swings back and forth.
  • the roll angle is the angle at which the navigation device swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the navigation mode is switched, without requiring the user to switch manually, which improves the user experience.
  • FIG. 3 is a schematic flowchart diagram of a navigation method based on augmented reality according to a second embodiment of the present invention, including:
  • Step S301: in the live-view navigation mode, performing a coordinate transformation operation on the preset vertex data by using a vertex shader to obtain clip coordinates;
  • the preset vertex data is the vertex coordinate data of each face of the navigation arrow, and can be adjusted according to actual conditions.
  • the preset vertex data is input to the vertex shader in the form of object-space coordinates; the vertex shader transforms the input object-space coordinates into world coordinates by using the model matrix, then transforms the world coordinates into eye coordinates by using the view matrix, and finally transforms the eye coordinates into clip coordinates by using the projection matrix, i.e. multiplying the model, view and projection matrices yields the clip coordinates.
  • the vertex shader is a programmable processing unit that performs operations such as vertex transformation and texture coordinate transformation. For each vertex data in the preset vertex data, a vertex shader is executed once.
  • Step S302: performing perspective division on the clip coordinates to obtain normalized device coordinates
  • the clip coordinates (X_c, Y_c, Z_c) are divided by W_c to obtain the normalized device coordinates; after the perspective division, the values of the normalized coordinates all lie in [-1, 1].
  • the size of W_c is set in advance, and can be adjusted according to actual conditions.
  • Step S303: performing a viewport transformation on the normalized device coordinates to obtain window coordinates
  • the viewport is a two-dimensional rectangular window area where OpenGL for Embedded Systems (OpenGL ES), a subset of the OpenGL three-dimensional graphics API, finally displays the result of the rendering operation; the viewport transformation controls where and in what form the navigation arrow is displayed.
  • the window coordinates are calculated according to the following conversion formula (the standard OpenGL viewport mapping):
  • x_w = (w/2)·x_d + o_x,  y_w = (h/2)·y_d + o_y,  z_w = ((f − n)/2)·z_d + (n + f)/2
  • (x_w, y_w, z_w) represents the window coordinates
  • w represents the width of the window
  • h represents the height of the window
  • (x_d, y_d, z_d) represents the normalized device coordinates
  • o_x = x + w/2 and o_y = y + h/2 are the window coordinates of the viewport center, for a viewport whose lower-left corner is at (x, y)
  • n and f represent the endpoint values of the mapping range of the navigation arrow on the z-axis.
  • the navigation arrow drawn in the embodiment of the present invention is a 3D object.
  • to draw a 3D object, a perspective projection is needed; a perspective projection creates a sense of distance and implements the mapping from OpenGL coordinates to window coordinates. OpenGL coordinates are three-dimensional while the screen of the navigation device is two-dimensional; since the navigation arrow needs to be displayed in the window, the portion outside the window needs to be clipped off, which is what the perspective projection processing does. The mapping process of the perspective projection is described below, as shown in FIG. 4.
  • the perspective projection is performed in a perspective projection model comprising a camera or eye, a view frustum and a viewing-angle coordinate system; the view volume is a quadrangular pyramid, with the camera or eye at the origin A(0,0,0) of the OpenGL coordinates, and it is truncated by a front and a rear plane to form a frustum.
  • the section plane near the point A(0,0,0) is called the near clipping plane, and the section plane far from A(0,0,0) is called the far clipping plane.
  • the near clipping plane represents the range of planar drawing. It has four edges: the top and bottom edges represent the drawing range of the y-axis, fixed at [-1, 1] by the perspective division of step S302, and the left and right edges represent the drawing range of the x-axis, likewise fixed at [-1, 1]. The last range to determine is that of the z-axis: as shown in FIG. 4, n is the distance from the camera or eye to the near drawing plane, and f is the distance from the camera or eye to the farthest visible point, so the z-axis mapping range is [-n, -f].
  • the mapping ranges of the x-, y- and z-axes having been determined, the drawn navigation arrow falls inside the frustum (the region between the near and far clipping planes).
  • the perspective projection is mathematically described by a projection matrix, whose parameters are as follows:
  • in the projection matrix, a represents the focal length of the camera, ratio represents the aspect ratio of the navigation device's screen
  • f represents the distance from the camera or eye to the farthest visible point
  • n represents the distance from the camera or eye to the near drawing plane
  • in the rotation matrix used to rotate the arrow about the z-axis, a represents the angle of rotation
  • through the rotation processing, an appropriate rotation angle can be selected so that the drawn navigation arrow sits at a suitable angle on the screen of the navigation device.
  • Step S304 performing preset illumination processing on the window coordinates to obtain a navigation arrow
  • FIG. 5 is a schematic flowchart of the refinement step of step S304, including:
  • Step S501: performing a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments
  • the rasterization operation on the OpenGL window coordinates decomposes each point, line and triangle into a large number of two-dimensional fragments, and these two-dimensional fragments represent pixels that can be drawn on the screen of the navigation device.
  • each two-dimensional fragment contains a single solid color.
  • Step S502: processing the two-dimensional fragments by using a fragment shader, and drawing the processed two-dimensional fragments on the screen to obtain the navigation arrow.
  • a fragment shader is used to process the two-dimensional fragments; each processed two-dimensional fragment has four components, of which red, green and blue represent the color and the last component represents the transparency.
  • Step S305: superimposing the navigation arrow on the current real environment captured by the camera, and displaying it on the screen, so as to perform live-view navigation.
  • in the live-view navigation mode, a coordinate transformation operation is performed on the preset vertex data by using the vertex shader to obtain clip coordinates, and perspective division of the clip coordinates yields the normalized device coordinates.
  • a viewport transformation of the normalized device coordinates yields the window coordinates, preset illumination processing of the window coordinates yields the navigation arrow, and the navigation arrow is superimposed on the current real environment captured by the camera, thereby performing live-view navigation.
  • FIG. 6 is a schematic flowchart of the additional steps of the second embodiment of the present invention, which include:
  • Step S601 Obtain the latitude and longitude of the current location and the latitude and longitude of the destination location in real time;
  • Step S602 obtaining a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is an angle between the navigation device and the north direction of the magnetic field;
  • N represents the north direction of the magnetic field
  • the deflection angle is the angle θ.
  • Step S603 calculating a rotation angle of the navigation arrow by using the deflection angle, and updating the guidance direction of the navigation arrow based on the rotation angle.
  • the rotation matrix R is input to the API built into the navigation device, and the pitch angle and the roll angle can be obtained, as can the azimuth.
  • when the user navigates, the navigation device is moved and the azimuth Azimuth changes accordingly; therefore the angle β also changes, and the direction and rotation angle of the navigation arrow can be updated according to the real-time variation of the angle β, so that the guidance direction can be updated in real time.
  • the latitude and longitude of the current location and the latitude and longitude of the destination location are obtained in real time, and the real-time deflection angle is obtained based on the latitude and longitude of the current location and the latitude and longitude of the destination location.
  • the rotation angle of the navigation arrow is calculated by using the deflection angle, and the direction and the rotation angle of the navigation arrow are updated based on the rotation angle, so that the guidance direction can be updated in real time.
  • FIG. 7 is a schematic diagram of functional modules of a navigation device based on augmented reality according to a third embodiment of the present invention, including:
  • the first obtaining module 701 is configured to acquire an acceleration value sensed by the acceleration sensor and a magnetic value sensed by the magnetic sensor;
  • a graphics programming interface (Open Graphics Library, OpenGL) is built into the navigation device in advance, and the OpenGL coordinate system is used as the coordinate system of the navigation device. As shown in FIG. 2, when the navigation device is held perpendicular to the ground,
  • in the initial state the origin O of the OpenGL coordinate system is located at the center of the navigation device's screen; rightward from the origin is the positive half of the X-axis, upward from the origin is the positive half of the Y-axis, and outward from the origin, perpendicular to the screen, is the positive half of the Z-axis.
  • an acceleration sensor and a magnetic sensor are preset in the navigation device, and the initial states of the acceleration sensor coordinate system and the magnetic sensor coordinate system coincide with the initial state of the OpenGL coordinate system. When the navigation device is placed horizontally (parallel to the ground),
  • the acceleration value obtained by the acceleration sensor is the gravity acceleration vector,
  • whose direction points vertically toward the ground;
  • the magnetic value obtained by the magnetic sensor ignores environmental influence and magnetic declination, the magnetic field direction being the north-south orientation.
  • the first determining module 702 is configured to determine a pitch angle and a roll angle based on the acceleration value and the magnetic force value, wherein the pitch angle is an angle of the navigation device swinging back and forth, and the roll angle is an angle of the left and right swing of the navigation device;
  • the pitch angle is an angle at which the navigation device swings back and forth
  • the roll angle is an angle at which the navigation device swings left and right.
  • the switching module 703 is configured to perform the navigation mode switching if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition.
  • the first preset condition is that the pitch angle is greater than P and/or the pitch angle is less than negative P
  • the second preset condition is that the roll angle is greater than Q and/or the roll angle is less than negative Q
  • the magnitudes of P and Q may be equal or unequal, and can be adjusted according to actual conditions
  • the navigation mode switching performed by the switching module 703 can be divided into the following cases:
  • the two-dimensional navigation mode is switched to the live-view navigation mode
  • the live-view navigation mode is switched to the two-dimensional navigation mode
  • or no switching is performed, depending on whether the pitch angle or the roll angle exceeds its threshold and on the current navigation mode.
  • the first acquiring module 701 obtains the acceleration value sensed by the acceleration sensor and the magnetic value sensed by the magnetic sensor, and the first determining module 702 determines the pitch angle and the roll angle based on these values, the pitch angle being the angle at which the navigation device swings back and forth and the roll angle the angle at which it swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the switching module 703 performs the navigation mode switching, without requiring the user to switch manually, thereby improving the user experience.
  • FIG. 8 is a schematic diagram of functional modules of a navigation device based on augmented reality in the live-view navigation mode according to a fourth embodiment of the present invention, including:
  • a coordinate transformation module 801 configured to perform, in the live-view navigation mode, a coordinate transformation operation on the preset vertex data by using a vertex shader to obtain clip coordinates
  • the preset vertex data is the vertex coordinate data of each face of the navigation arrow, and can be adjusted according to actual conditions.
  • the coordinate transformation module 801 inputs the preset vertex data to the vertex shader in the form of object-space coordinates; the vertex shader transforms the input object-space coordinates into world coordinates by using the model matrix, after which the coordinate transformation
  • module 801 transforms the world coordinates into eye coordinates by using the view matrix, and then transforms the eye coordinates into clip coordinates by using the projection matrix, i.e. multiplying the model, view and projection matrices yields the clip coordinates.
  • the vertex shader is a programmable processing unit that performs operations such as vertex transformation and texture-coordinate transformation; for each vertex in the preset vertex data, the vertex shader is executed once.
  • a perspective division module 802 configured to perform perspective division on the clip coordinates to obtain normalized device coordinates
  • the perspective division module 802 obtains the normalized device coordinates by dividing the clip coordinates (X_c, Y_c, Z_c) by W_c; after the perspective division, the values of the normalized coordinates all lie in [-1, 1].
  • the size of W_c is set in advance, and can be adjusted according to actual conditions.
  • the viewport transformation module 803 is configured to perform a viewport transformation on the normalized device coordinates to obtain window coordinates.
  • the viewport is a two-dimensional rectangular window area where OpenGL for Embedded Systems (OpenGL ES), a subset of the OpenGL three-dimensional graphics API, finally displays the result of the rendering operation; the viewport transformation controls where and in what form the navigation arrow is displayed.
  • the viewport transformation module 803 calculates the window coordinates according to the following conversion formula, including:
  • x_w = (w/2)·x_d + o_x,  y_w = (h/2)·y_d + o_y,  z_w = ((f − n)/2)·z_d + (n + f)/2
  • (x_w, y_w, z_w) represents the window coordinates
  • w represents the width of the window
  • h represents the height of the window
  • (x_d, y_d, z_d) represents the normalized device coordinates
  • o_x = x + w/2 and o_y = y + h/2 are the window coordinates of the viewport center, for a viewport whose lower-left corner is at (x, y)
  • n and f represent the endpoint values of the mapping range of the navigation arrow on the z-axis.
  • the navigation arrow drawn in the embodiment of the present invention is a 3D object.
  • to draw a 3D object, a perspective projection is needed; a perspective projection creates a sense of distance and implements the mapping from OpenGL coordinates to window coordinates. OpenGL coordinates are three-dimensional while the screen of the navigation device is two-dimensional; since the navigation arrow needs to be displayed in the window, the portion outside the window needs to be clipped off, which is what the perspective projection processing does. The mapping process of the perspective projection is described below, as shown in FIG. 4.
  • the perspective projection is performed in a perspective projection model comprising a camera or eye, a view frustum and a viewing-angle coordinate system; the view volume is a quadrangular pyramid, with the camera or eye at the origin A(0,0,0) of the OpenGL coordinates, and it is truncated by a front and a rear plane to form a frustum.
  • the section plane near the point A(0,0,0) is called the near clipping plane, and the section plane far from A(0,0,0) is called the far clipping plane.
  • the near clipping plane represents the range of planar drawing. It has four edges: the top and bottom edges represent the drawing range of the y-axis, fixed at [-1, 1] by the perspective division of step S302, and the left and right edges represent the drawing range of the x-axis, likewise fixed at [-1, 1]. The last range to determine is that of the z-axis: as shown in FIG. 4, n is the distance from the camera or eye to the near drawing plane, and f is the distance from the camera or eye to the farthest visible point.
  • the z-axis mapping range is therefore [-n, -f]; the mapping ranges of the x-, y- and z-axes having all been determined, the drawn navigation arrow falls inside the frustum
  • (the region between the near clipping plane and the far clipping plane). The perspective projection can be mathematically described by a projection matrix, whose parameters are as follows:
  • in the projection matrix, a represents the focal length of the camera, ratio represents the aspect ratio of the navigation device's screen
  • f represents the distance from the camera or eye to the farthest visible point
  • n represents the distance from the camera or eye to the near drawing plane
  • in the rotation matrix used to rotate the arrow about the z-axis, a represents the angle of rotation
  • through the rotation processing, an appropriate rotation angle can be selected so that the drawn navigation arrow sits at a suitable angle on the screen of the navigation device.
  • the illumination processing module 804 is configured to perform preset illumination processing on the window coordinates to obtain a navigation arrow;
  • FIG. 9 is a schematic diagram of a refinement function module of the illumination processing module 804, including:
  • a rasterization unit 901 configured to perform a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments
  • through the rasterization operation on the OpenGL window coordinates,
  • the rasterization unit 901 decomposes each point, line and triangle into a large number of two-dimensional fragments; these fragments represent pixels that can be drawn on the screen of the navigation device,
  • and each two-dimensional fragment contains a single solid color.
  • the drawing unit 902 is configured to process the two-dimensional fragments by using the fragment shader, and to draw the processed fragments on the screen to obtain the navigation arrow.
  • the drawing unit 902 processes the two-dimensional fragments with a fragment shader; each processed fragment has four components, of which red, green and blue represent the color, and the last component represents the transparency.
  • the overlay module 805 is configured to superimpose the navigation arrow on the current real environment captured by the camera, and to display it on the screen for live-view navigation.
  • the coordinate transformation module 801 performs a coordinate transformation operation on the preset vertex data by using the vertex shader to obtain clip coordinates, and the perspective division module 802 performs perspective division on the clip
  • coordinates to obtain normalized device coordinates; the viewport transformation module 803 performs a viewport transformation on the normalized device coordinates to obtain window coordinates.
  • the illumination processing module 804 performs preset illumination processing on the window coordinates to obtain the navigation arrow, and the overlay module 805 superimposes the navigation arrow on the current real environment captured by the camera, so as to perform live-view navigation.
  • FIG. 10 is a schematic diagram of functional modules of a navigation device based on augmented reality according to a fifth embodiment of the present invention, including:
  • a second obtaining module 1001 configured to acquire latitude and longitude of the current location and latitude and longitude of the destination location in real time;
  • the second determining module 1002 is configured to obtain a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is an angle between the navigation device and the north direction of the magnetic field;
  • N represents the north direction of the magnetic field
  • the deflection angle is the angle ⁇ .
  • the updating module 1003 is configured to calculate a rotation angle of the navigation arrow by using the deflection angle, and update the guiding direction of the navigation arrow based on the rotation angle.
  • the rotation matrix R is input to the API built into the navigation device, and the pitch angle and the roll angle can be obtained; the azimuth can also be calculated.
  • the embodiment of the present invention acquires in real time, through the second acquiring module 1001, the latitude and longitude of the current location and the latitude and longitude of the destination location, and the second determining module 1002, based on them,
  • obtains a real-time deflection angle.
  • the updating module 1003 calculates the rotation angle of the navigation arrow by using the deflection angle, and updates the direction and rotation angle of the navigation arrow based on the rotation angle, so that the guidance direction can be updated in real time.
  • the disclosed apparatus and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of the modules is only a division by logical function.
  • in actual implementation there may be other divisions; for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or module, and may be electrical, mechanical or otherwise.
  • the modules described as separate components may or may not be physically separate.
  • the components displayed as modules may or may not be physical modules, that is, may be located in one place, or may be distributed to multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
  • the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • the integrated modules if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
  • the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
  • a number of instructions are included to cause a computer device (which may be a personal computer, server, or network device, etc.) to perform all or part of the steps of the methods described in various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Software Systems (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Processing Or Creating Images (AREA)
  • Navigation (AREA)

Abstract

The present invention discloses an augmented reality-based navigation method, comprising: acquiring an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor; determining a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which a navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right; and performing navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, or the roll angle and the current navigation mode satisfy a second preset condition. The present invention further discloses an augmented reality-based navigation apparatus. No manual switching by the user is required, which improves the user experience.

Description

Augmented reality-based navigation method and apparatus
TECHNICAL FIELD
The present invention relates to the field of navigation technologies, and in particular to an augmented reality-based navigation method and apparatus.
BACKGROUND
At present, two-dimensional maps are widely used for navigation; however, as technology keeps developing, people's demands on navigation technology are rising. Although a two-dimensional map can already be combined with the current three-dimensional scene to meet users' high demands on navigation, switching from the two-dimensional map to the three-dimensional scene, or from the three-dimensional scene to the two-dimensional map, must be done manually by the user, giving a poor user experience. The prior art therefore has the problem that switching between a two-dimensional map and a three-dimensional scene requires manual operation by the user and results in a poor user experience.
SUMMARY
The main purpose of the present invention is to provide an augmented reality-based navigation method and apparatus, aiming to solve the technical problem in the prior art that switching from a two-dimensional map to a three-dimensional scene, or from a three-dimensional scene to a two-dimensional map, requires manual switching by the user and results in a poor user experience.
To achieve the above object, a first aspect of the present invention provides an augmented reality-based navigation method, the method comprising:
acquiring an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
determining a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which a navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
performing navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, and/or the roll angle and the current navigation mode satisfy a second preset condition.
To achieve the above object, a second aspect of the present invention provides an augmented reality-based navigation apparatus, the apparatus comprising:
a first acquiring module, configured to acquire an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
a first determining module, configured to determine a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
a switching module, configured to perform navigation mode switching if the pitch angle and the current navigation mode satisfy the first preset condition, and/or the roll angle and the current navigation mode satisfy the second preset condition.
The present invention provides an augmented reality-based navigation method. Compared with the prior art, the embodiment of the present invention acquires the acceleration value sensed by the acceleration sensor and the magnetic value sensed by the magnetic sensor, and determines the pitch angle and the roll angle based on them, the pitch angle being the angle at which the navigation device swings back and forth and the roll angle the angle at which it swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the navigation mode is switched, without requiring the user to switch manually, thereby improving the user experience.
BRIEF DESCRIPTION OF THE DRAWINGS
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an augmented reality-based navigation method according to a first embodiment of the present invention;
FIG. 2 shows the initial state of the OpenGL coordinate system;
FIG. 3 is a schematic flowchart of an augmented reality-based navigation method according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of the drawing range of the perspective projection;
FIG. 5 is a schematic flowchart of the refinement of step S304;
FIG. 6 is a schematic flowchart of the additional steps of the second embodiment of the present invention;
FIG. 7 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus according to a third embodiment of the present invention;
FIG. 8 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus in the live-view navigation mode according to a fourth embodiment of the present invention;
FIG. 9 is a schematic diagram of the refined functional modules of the illumination processing module 804;
FIG. 10 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus according to a fifth embodiment of the present invention.
DETAILED DESCRIPTION
To make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To illustrate the technical solutions of the present invention, specific embodiments are described below.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an augmented reality-based navigation method according to a first embodiment of the present invention, comprising:
Step S101: acquiring an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
In the embodiment of the present invention, a graphics programming interface (Open Graphics Library, OpenGL) is built into the navigation device in advance, and the OpenGL coordinate system is used as the coordinate system of the navigation device. As shown in FIG. 2, when the navigation device is held perpendicular to the ground, in the initial state the origin O of the OpenGL coordinate system lies at the center of the navigation device's screen; rightward from the origin is the positive half of the X-axis, upward from the origin is the positive half of the Y-axis, and outward from the origin, perpendicular to the screen, is the positive half of the Z-axis. In addition, an acceleration sensor and a magnetic sensor are preset in the navigation device, and the initial states of the acceleration sensor coordinate system and of the magnetic sensor coordinate system coincide with the initial state of the OpenGL coordinate system. When the navigation device is placed horizontally (parallel to the ground), the acceleration value obtained by the acceleration sensor is the gravity acceleration vector, whose direction points vertically toward the ground; the magnetic value obtained by the magnetic sensor ignores environmental influence and magnetic declination, the magnetic field direction being the north-south orientation.
The acquired acceleration value is the vector A, A = (A_x, A_y, A_z), and the acquired magnetic value is the vector E, E = (E_x, E_y, E_z).
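By way of illustration only, reading these two sensor values on an Android device might look as follows. This is a minimal sketch; the class name, field names and sampling rate are assumptions chosen for the example, not taken from the patent.

    import android.hardware.Sensor;
    import android.hardware.SensorEvent;
    import android.hardware.SensorEventListener;
    import android.hardware.SensorManager;

    // Keeps the latest acceleration vector A and magnetic vector E.
    public class SensorReader implements SensorEventListener {
        private final float[] accel = new float[3];    // A = (Ax, Ay, Az)
        private final float[] magnetic = new float[3]; // E = (Ex, Ey, Ez)

        public void register(SensorManager sm) {
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER),
                    SensorManager.SENSOR_DELAY_UI);
            sm.registerListener(this, sm.getDefaultSensor(Sensor.TYPE_MAGNETIC_FIELD),
                    SensorManager.SENSOR_DELAY_UI);
        }

        @Override
        public void onSensorChanged(SensorEvent event) {
            if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
                System.arraycopy(event.values, 0, accel, 0, 3);
            } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
                System.arraycopy(event.values, 0, magnetic, 0, 3);
            }
        }

        @Override
        public void onAccuracyChanged(Sensor sensor, int accuracy) { /* unused here */ }
    }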
Step S102: determining a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
In the embodiment of the present invention, the cross product of the vectors E and A is computed to obtain the vector H, H = E × A; the vector A is then normalized to obtain the vector a = (a_x, a_y, a_z) = (A_x·invA, A_y·invA, A_z·invA), where
    invA = 1 / sqrt(A_x^2 + A_y^2 + A_z^2).
The vector H is normalized in the same way to obtain the vector h = (h_x, h_y, h_z) = (H_x·invH, H_y·invH, H_z·invH), where
    invH = 1 / sqrt(H_x^2 + H_y^2 + H_z^2).
The cross product of the vectors a and h is then computed to obtain the vector M, M = a × h, giving M = (M_x, M_y, M_z) = (a_y·h_z − a_z·h_y, a_z·h_x − a_x·h_z, a_x·h_y − a_y·h_x); finally the rotation matrix R is obtained (written here, following the standard device-orientation convention, with h, M and a as its rows):
    R = [ h_x  h_y  h_z ]
        [ M_x  M_y  M_z ]
        [ a_x  a_y  a_z ]
Inputting the rotation matrix R into the application programming interface (Application Programming Interface, API) built into the navigation device yields the pitch angle and the roll angle.
Here the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right.
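The construction above (H = E × A, normalization, a second cross product) is the same one that Android's SensorManager performs internally, so for illustration the pitch and roll angles can be obtained as below. A minimal sketch, assuming the sensor arrays of the previous example:

    import android.hardware.SensorManager;

    public final class Orientation {
        private Orientation() {}

        // Returns {azimuth, pitch, roll} in radians, or null when A and E are
        // degenerate (e.g. the device is in free fall).
        public static float[] anglesOf(float[] accel, float[] magnetic) {
            float[] r = new float[9];
            if (!SensorManager.getRotationMatrix(r, null, accel, magnetic)) {
                return null;
            }
            float[] out = new float[3];
            SensorManager.getOrientation(r, out); // out[1] = pitch, out[2] = roll
            return out;
        }
    }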
Step S103: if the pitch angle and the current navigation mode satisfy a first preset condition, or the roll angle and the current navigation mode satisfy a second preset condition, performing navigation mode switching.
In the embodiment of the present invention, the first preset condition is that the pitch angle is greater than P and/or less than negative P, and the second preset condition is that the roll angle is greater than Q and/or less than negative Q. The magnitudes of P and Q may be equal or unequal, and can be adjusted according to actual conditions. The navigation mode switching can be divided into the following cases (a sketch of this logic follows the list):
1. If the pitch angle is greater than P and/or less than negative P, and the current navigation mode is the two-dimensional navigation mode, the two-dimensional navigation mode is switched to the live-view navigation mode;
2. If the pitch angle is greater than P and/or less than negative P, and the current navigation mode is the live-view navigation mode, no mode switching is performed;
3. If the pitch angle is less than P and greater than negative P, and the current navigation mode is the two-dimensional navigation mode, no mode switching is performed;
4. If the pitch angle is less than P and greater than negative P, and the current navigation mode is the live-view navigation mode, the live-view navigation mode is switched to the two-dimensional navigation mode;
5. If the roll angle is greater than Q and/or less than negative Q, and the current navigation mode is the two-dimensional navigation mode, the two-dimensional navigation mode is switched to the live-view navigation mode;
6. If the roll angle is greater than Q and/or less than negative Q, and the current navigation mode is the live-view navigation mode, no mode switching is performed;
7. If the roll angle is less than Q and greater than negative Q, and the current navigation mode is the two-dimensional navigation mode, no mode switching is performed;
8. If the roll angle is less than Q and greater than negative Q, and the current navigation mode is the live-view navigation mode, the live-view navigation mode is switched to the two-dimensional navigation mode.
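For illustration, the eight cases reduce to the following decision logic. A minimal sketch; the enum and the concrete threshold values for P and Q are assumptions chosen for the example:

    public class ModeSwitcher {
        enum Mode { TWO_D, LIVE_VIEW }

        static final float P = 30f; // pitch threshold in degrees (example value)
        static final float Q = 30f; // roll threshold in degrees (example value)

        static Mode next(Mode current, float pitchDeg, float rollDeg) {
            boolean pitchOut = pitchDeg > P || pitchDeg < -P; // first preset condition
            boolean rollOut = rollDeg > Q || rollDeg < -Q;    // second preset condition
            if ((pitchOut || rollOut) && current == Mode.TWO_D) {
                return Mode.LIVE_VIEW; // cases 1 and 5
            }
            if (!pitchOut && !rollOut && current == Mode.LIVE_VIEW) {
                return Mode.TWO_D;     // cases 4 and 8
            }
            return current;            // cases 2, 3, 6 and 7: no switch
        }
    }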
In the embodiment of the present invention, compared with the prior art, the acceleration value sensed by the acceleration sensor and the magnetic value sensed by the magnetic sensor are acquired, and the pitch angle and the roll angle are determined based on them, the pitch angle being the angle at which the navigation device swings back and forth and the roll angle the angle at which it swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the navigation mode is switched, without requiring the user to switch manually, thereby improving the user experience.
The above describes the navigation mode switching method, the navigation modes comprising a two-dimensional navigation mode and a live-view navigation mode. Besides the mode switching method, the augmented reality-based navigation method provided by the embodiment of the present invention also includes the concrete implementation of the live-view navigation mode. Referring to FIG. 3, FIG. 3 is a schematic flowchart of an augmented reality-based navigation method according to a second embodiment of the present invention, comprising:
Step S301: in the live-view navigation mode, performing a coordinate transformation operation on preset vertex data by using a vertex shader to obtain clip coordinates;
In the embodiment of the present invention, the preset vertex data are the vertex coordinate data of each face of the navigation arrow, and can be adjusted according to actual conditions.
In the live-view navigation mode, the preset vertex data are input to the vertex shader in the form of object-space coordinates; the vertex shader transforms the input object-space coordinates into world coordinates by using the model matrix, then transforms the world coordinates into eye coordinates by using the view matrix, and finally transforms the eye coordinates into clip coordinates by using the projection matrix; that is, multiplying the model, view and projection matrices yields the clip coordinates.
The vertex shader is a programmable processing unit that performs operations such as vertex transformation and texture-coordinate transformation; for each vertex in the preset vertex data, the vertex shader is executed once.
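For illustration, composing the three matrices on Android might look as follows. A minimal sketch using android.opengl.Matrix (column-major float[16] matrices); the class, method and variable names are assumptions:

    import android.opengl.Matrix;

    public final class ClipTransform {
        private ClipTransform() {}

        // vertex is an object-space position {x, y, z, 1}.
        public static float[] toClip(float[] model, float[] view, float[] projection,
                                     float[] vertex) {
            float[] mv = new float[16];
            float[] mvp = new float[16];
            Matrix.multiplyMM(mv, 0, view, 0, model, 0);     // view * model
            Matrix.multiplyMM(mvp, 0, projection, 0, mv, 0); // projection * view * model
            float[] clip = new float[4];
            Matrix.multiplyMV(clip, 0, mvp, 0, vertex, 0);   // -> (Xc, Yc, Zc, Wc)
            return clip;
        }
    }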
Step S302: performing perspective division on the clip coordinates to obtain normalized device coordinates;
In the embodiment of the present invention, dividing the clip coordinates (X_c, Y_c, Z_c) by W_c yields the normalized device coordinates; after the perspective division, the values of the normalized coordinates all lie in [-1, 1].
The size of W_c is set in advance, and can be adjusted according to actual conditions.
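A minimal sketch of this step, with the fourth clip component W_c used as the divisor:

    // Perspective division: (Xc, Yc, Zc, Wc) -> (Xc/Wc, Yc/Wc, Zc/Wc).
    // For points inside the view volume the results lie in [-1, 1].
    static float[] toNdc(float[] clip) {
        float invW = 1f / clip[3];
        return new float[] { clip[0] * invW, clip[1] * invW, clip[2] * invW };
    }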
Step S303: performing a viewport transformation on the normalized device coordinates to obtain window coordinates;
In the embodiment of the present invention, the viewport is a two-dimensional rectangular window area where OpenGL for Embedded Systems (OpenGL ES), a subset of the OpenGL three-dimensional graphics API, finally displays the result of the rendering operation. The viewport transformation controls where on the screen of the navigation device the navigation arrow is displayed and in what form (for example, whether the arrow is stretched or compressed), and also adjusts the display resolution.
The window coordinates are calculated according to the following conversion formula (the standard OpenGL viewport mapping):
    x_w = (w/2)·x_d + o_x
    y_w = (h/2)·y_d + o_y
    z_w = ((f − n)/2)·z_d + (n + f)/2
where (x_w, y_w, z_w) denotes the window coordinates, w the width of the window, h the height of the window, (x_d, y_d, z_d) the normalized device coordinates, o_x = x + w/2 and o_y = y + h/2 the window coordinates of the viewport center for a viewport whose lower-left corner is at (x, y), and n and f the endpoint values of the drawing range of the navigation arrow on the z-axis.
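In OpenGL ES this mapping is configured with GLES20.glViewport and GLES20.glDepthRangef; written out by hand, the formula above becomes the following sketch (assuming a viewport whose lower-left corner is the window origin):

    // Viewport transform from normalized device coordinates to window coordinates.
    static float[] toWindow(float[] ndc, float w, float h, float n, float f) {
        float ox = w / 2f; // viewport center, x
        float oy = h / 2f; // viewport center, y
        float xw = (w / 2f) * ndc[0] + ox;
        float yw = (h / 2f) * ndc[1] + oy;
        float zw = ((f - n) / 2f) * ndc[2] + (n + f) / 2f; // depth-range mapping
        return new float[] { xw, yw, zw };
    }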
The navigation arrow drawn in the embodiment of the present invention is a 3D object. To draw a 3D object, a perspective projection is needed; the perspective projection creates a sense of distance and implements the mapping from OpenGL coordinates to window coordinates. OpenGL coordinates are three-dimensional while the screen of the navigation device is two-dimensional; since the navigation arrow must be displayed in the window, whatever lies outside the window has to be clipped away, which is what the perspective projection processing does. The mapping process of the perspective projection is described below. As shown in FIG. 4, the perspective projection is performed in a perspective projection model comprising a camera or eye, a view frustum and a viewing-angle coordinate system. The view volume is a quadrangular pyramid with the camera or eye at the origin A(0,0,0) of the OpenGL coordinates; it is truncated by a front and a rear plane to form a frustum. The section plane near the point A(0,0,0) is called the near clipping plane, and the section plane far from A(0,0,0) is called the far clipping plane. First the drawing range is determined. The near clipping plane represents the range of planar drawing and has four edges: the top and bottom edges delimit the drawing range of the y-axis, fixed at [-1, 1] by the perspective division of step S302, and the left and right edges delimit the drawing range of the x-axis, likewise fixed at [-1, 1] by the perspective division of step S302. What remains is the drawing range of the z-axis: as shown in FIG. 4, n is the distance from the camera or eye to the near drawing plane, and f is the drawing distance from the camera or eye to the farthest visible point, so the z-axis drawing range is [-n, -f]. The drawing ranges of the x-, y- and z-axes having been determined, the drawn navigation arrow falls inside the frustum (the region between the near and far clipping planes). Mathematically, the perspective projection can be described by a projection matrix; in the standard form consistent with the parameters named below, it is:
    P = [ a/ratio   0    0                  0               ]
        [ 0         a    0                  0               ]
        [ 0         0   −(f + n)/(f − n)   −2·f·n/(f − n)   ]
        [ 0         0   −1                  0               ]
where a denotes the focal length of the camera, ratio the aspect ratio of the navigation device's screen, f the drawing distance from the camera or eye to the farthest visible point, and n the distance from the camera or eye to the near drawing plane. The main purpose of using the perspective projection is to place the navigation arrow inside the view frustum; to present the arrow at a good angle, the arrow must additionally be rotated. In OpenGL coordinates a rotation may be performed about the x-, y- or z-axis; the embodiment of the present invention rotates about the z-axis, using the matrix below:
    R_z(a) = [ cos a  −sin a   0   0 ]
             [ sin a   cos a   0   0 ]
             [ 0       0       1   0 ]
             [ 0       0       0   1 ]
where a denotes the rotation angle. Through the rotation processing, an appropriate rotation angle can be selected so that the drawn navigation arrow sits at a suitable angle on the screen of the navigation device.
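For illustration, both matrices can be produced with android.opengl.Matrix rather than written out by hand. A minimal sketch; the field of view, clip distances and the 30-degree rotation are example values, not taken from the patent:

    import android.opengl.Matrix;

    public final class ArrowCamera {
        private ArrowCamera() {}

        public static float[] projectionTimesRotation(float aspectRatio) {
            float[] projection = new float[16];
            // fovy 45 degrees; near plane n = 1, far plane f = 100 (example values)
            Matrix.perspectiveM(projection, 0, 45f, aspectRatio, 1f, 100f);

            float[] rotation = new float[16];
            Matrix.setIdentityM(rotation, 0);
            Matrix.rotateM(rotation, 0, 30f, 0f, 0f, 1f); // rotate 30 degrees about z

            float[] combined = new float[16];
            Matrix.multiplyMM(combined, 0, projection, 0, rotation, 0);
            return combined;
        }
    }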
Step S304: performing preset illumination processing on the window coordinates to obtain the navigation arrow;
Further, referring to FIG. 5, FIG. 5 is a schematic flowchart of the refinement of step S304, comprising:
Step S501: performing a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments;
In the embodiment of the present invention, the rasterization operation on the OpenGL window coordinates decomposes each point, line and triangle into a large number of two-dimensional fragments, and these fragments represent pixels that can be drawn on the screen of the navigation device; each two-dimensional fragment contains a single solid color.
Step S502: processing the two-dimensional fragments with a fragment shader, and drawing the processed fragments on the screen to obtain the navigation arrow.
In the embodiment of the present invention, in order to color the solid-color two-dimensional fragments, the fragments are processed with a fragment shader. Each processed fragment has four components, of which red, green and blue represent the color, and the last component represents the transparency.
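For illustration, a fragment shader producing the four-component color can be compiled with OpenGL ES 2.0 as below. A minimal sketch; the shader source and the uniform name are example choices, not taken from the patent:

    import android.opengl.GLES20;

    public final class ArrowShader {
        private ArrowShader() {}

        // uColor carries (red, green, blue, alpha).
        static final String FRAGMENT_SHADER =
                "precision mediump float;\n"
              + "uniform vec4 uColor;\n"
              + "void main() {\n"
              + "  gl_FragColor = uColor;\n"
              + "}\n";

        public static int compileFragmentShader() {
            int shader = GLES20.glCreateShader(GLES20.GL_FRAGMENT_SHADER);
            GLES20.glShaderSource(shader, FRAGMENT_SHADER);
            GLES20.glCompileShader(shader);
            return shader;
        }
    }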
Step S305: superimposing the navigation arrow on the current real environment captured by the camera, and displaying it on the screen, so as to perform live-view navigation.
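One common way to realize this superposition on Android is a transparent GL surface drawn over the camera preview; the following is a minimal sketch of that idea (the renderer argument stands in for the drawing code above, and the whole construction is an illustrative assumption, not prescribed by the patent):

    import android.content.Context;
    import android.graphics.PixelFormat;
    import android.opengl.GLSurfaceView;

    public class ArrowOverlayView extends GLSurfaceView {
        public ArrowOverlayView(Context context, Renderer arrowRenderer) {
            super(context);
            setEGLContextClientVersion(2);          // OpenGL ES 2.0
            setEGLConfigChooser(8, 8, 8, 8, 16, 0); // RGBA surface with an alpha channel
            getHolder().setFormat(PixelFormat.TRANSLUCENT);
            setZOrderMediaOverlay(true);            // sit above the camera preview surface
            setRenderer(arrowRenderer);
        }
    }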
In the embodiment of the present invention, compared with the prior art, in the live-view navigation mode a coordinate transformation operation is performed on the preset vertex data by using the vertex shader to obtain clip coordinates, perspective division of the clip coordinates yields normalized device coordinates, a viewport transformation of the normalized device coordinates yields window coordinates, preset illumination processing of the window coordinates yields the navigation arrow, and the navigation arrow is superimposed on the current real environment captured by the camera, thereby performing live-view navigation.
After the step of superimposing the navigation arrow on the current real environment captured by the camera and displaying it on the screen for live-view navigation, the guidance direction of the navigation arrow must also be updated in real time. Referring to FIG. 6, FIG. 6 is a schematic flowchart of the additional steps of the second embodiment of the present invention, comprising:
Step S601: acquiring in real time the latitude and longitude of the current position and the latitude and longitude of the destination position;
Step S602: obtaining a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is the angle between the navigation device and magnetic north;
In the embodiment of the present invention, as shown in FIG. 7, N denotes magnetic north, and as shown in the figure the deflection angle is the angle θ.
Step S603: calculating the rotation angle of the navigation arrow by using the deflection angle, and updating the guidance direction of the navigation arrow based on the rotation angle.
In the embodiment of the present invention, with the rotation matrix R obtained in step S102 of the first embodiment of the present invention, inputting R into the API built into the navigation device yields not only the pitch angle and the roll angle but also the azimuth, Azimuth. From the azimuth Azimuth and the deflection angle θ, the angle α can be computed: α = [(θ − Azimuth) % 360 + 360] % 360. Then, from the azimuth Azimuth, the deflection angle θ and the angle α, the angle β is obtained: β = (180 − α − Azimuth + 360) % 360. The angle β represents the initial rotation angle of the navigation arrow. While the user navigates, the navigation device is moved and the azimuth Azimuth changes accordingly, so the angle β also changes; the direction and rotation angle of the navigation arrow can be updated according to the real-time variation of the angle β, so that the guidance direction can be updated in real time.
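A minimal sketch of this angle arithmetic, mirroring the two formulas above (angles in degrees; theta is the deflection angle θ and azimuth the value obtained from the rotation matrix R):

    // alpha = [(theta - Azimuth) % 360 + 360] % 360
    // beta  = (180 - alpha - Azimuth + 360) % 360
    static float arrowRotationDegrees(float theta, float azimuth) {
        float alpha = (((theta - azimuth) % 360f) + 360f) % 360f;
        float beta = (180f - alpha - azimuth + 360f) % 360f;
        return beta; // initial rotation angle of the navigation arrow
    }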
In the embodiment of the present invention, compared with the prior art, the latitude and longitude of the current position and the latitude and longitude of the destination position are acquired in real time, a real-time deflection angle is obtained based on them, the rotation angle of the navigation arrow is calculated by using the deflection angle, and the direction and rotation angle of the navigation arrow are updated based on the rotation angle, so that the guidance direction can be updated in real time.
Referring to FIG. 7, FIG. 7 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus according to a third embodiment of the present invention, comprising:
a first acquiring module 701, configured to acquire an acceleration value sensed by the acceleration sensor and a magnetic value sensed by the magnetic sensor;
In the embodiment of the present invention, a graphics programming interface (Open Graphics Library, OpenGL) is built into the navigation device in advance, and the OpenGL coordinate system is used as the coordinate system of the navigation device. As shown in FIG. 2, when the navigation device is held perpendicular to the ground, in the initial state the origin O of the OpenGL coordinate system lies at the center of the navigation device's screen; rightward from the origin is the positive half of the X-axis, upward from the origin is the positive half of the Y-axis, and outward from the origin, perpendicular to the screen, is the positive half of the Z-axis. In addition, an acceleration sensor and a magnetic sensor are preset in the navigation device, and the initial states of the acceleration sensor coordinate system and of the magnetic sensor coordinate system coincide with the initial state of the OpenGL coordinate system. When the navigation device is placed horizontally (parallel to the ground), the acceleration value obtained by the acceleration sensor is the gravity acceleration vector, whose direction points vertically toward the ground; the magnetic value obtained by the magnetic sensor ignores environmental influence and magnetic declination, the magnetic field direction being the north-south orientation.
The acquired acceleration value is the vector A, A = (A_x, A_y, A_z), and the acquired magnetic value is the vector E, E = (E_x, E_y, E_z).
a first determining module 702, configured to determine a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
In the embodiment of the present invention, the first determining module 702 computes the cross product of the vectors E and A to obtain the vector H, H = E × A; the first determining module 702 then normalizes the vector A to obtain the vector a = (a_x, a_y, a_z) = (A_x·invA, A_y·invA, A_z·invA), where
    invA = 1 / sqrt(A_x^2 + A_y^2 + A_z^2).
The first determining module 702 normalizes the vector H in the same way to obtain the vector h = (h_x, h_y, h_z) = (H_x·invH, H_y·invH, H_z·invH), where
    invH = 1 / sqrt(H_x^2 + H_y^2 + H_z^2).
The cross product of the vectors a and h is then computed to obtain the vector M, M = a × h = (M_x, M_y, M_z) = (a_y·h_z − a_z·h_y, a_z·h_x − a_x·h_z, a_x·h_y − a_y·h_x), and finally the rotation matrix R is obtained (written, as in the first embodiment, with h, M and a as its rows):
    R = [ h_x  h_y  h_z ]
        [ M_x  M_y  M_z ]
        [ a_x  a_y  a_z ]
The first determining module 702 inputs the rotation matrix R into the application programming interface (Application Programming Interface, API) built into the navigation device, from which the pitch angle and the roll angle can be obtained.
Here the pitch angle is the angle at which the navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right.
a switching module 703, configured to perform navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, or the roll angle and the current navigation mode satisfy a second preset condition.
In the embodiment of the present invention, the first preset condition is that the pitch angle is greater than P and/or less than negative P, and the second preset condition is that the roll angle is greater than Q and/or less than negative Q; the magnitudes of P and Q may be equal or unequal, and can be adjusted according to actual conditions. The navigation mode switching can be divided into the following cases:
1. If the pitch angle is greater than P and/or less than negative P, and the current navigation mode is the two-dimensional navigation mode, the two-dimensional navigation mode is switched to the live-view navigation mode;
2. If the pitch angle is greater than P and/or less than negative P, and the current navigation mode is the live-view navigation mode, no mode switching is performed;
3. If the pitch angle is less than P and greater than negative P, and the current navigation mode is the two-dimensional navigation mode, no mode switching is performed;
4. If the pitch angle is less than P and greater than negative P, and the current navigation mode is the live-view navigation mode, the live-view navigation mode is switched to the two-dimensional navigation mode;
5. If the roll angle is greater than Q and/or less than negative Q, and the current navigation mode is the two-dimensional navigation mode, the two-dimensional navigation mode is switched to the live-view navigation mode;
6. If the roll angle is greater than Q and/or less than negative Q, and the current navigation mode is the live-view navigation mode, no mode switching is performed;
7. If the roll angle is less than Q and greater than negative Q, and the current navigation mode is the two-dimensional navigation mode, no mode switching is performed;
8. If the roll angle is less than Q and greater than negative Q, and the current navigation mode is the live-view navigation mode, the live-view navigation mode is switched to the two-dimensional navigation mode.
In the embodiment of the present invention, compared with the prior art, the first acquiring module 701 acquires the acceleration value sensed by the acceleration sensor and the magnetic value sensed by the magnetic sensor, and the first determining module 702 determines the pitch angle and the roll angle based on them, the pitch angle being the angle at which the navigation device swings back and forth and the roll angle the angle at which it swings left and right. The user's handling of the navigation device adjusts the pitch angle or the roll angle; if the pitch angle and the current navigation mode satisfy the first preset condition, or the roll angle and the current navigation mode satisfy the second preset condition, the switching module 703 performs the navigation mode switching, without requiring the user to switch manually, thereby improving the user experience.
Referring to FIG. 8, FIG. 8 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus in the live-view navigation mode according to a fourth embodiment of the present invention, comprising:
a coordinate transformation module 801, configured to perform, in the live-view navigation mode, a coordinate transformation operation on preset vertex data by using a vertex shader to obtain clip coordinates;
In the embodiment of the present invention, the preset vertex data are the vertex coordinate data of each face of the navigation arrow, and can be adjusted according to actual conditions.
In the live-view navigation mode, the coordinate transformation module 801 inputs the preset vertex data to the vertex shader in the form of object-space coordinates; the vertex shader transforms the input object-space coordinates into world coordinates by using the model matrix, after which the coordinate transformation module 801 transforms the world coordinates into eye coordinates by using the view matrix, and then transforms the eye coordinates into clip coordinates by using the projection matrix; that is, multiplying the model, view and projection matrices yields the clip coordinates.
The vertex shader is a programmable processing unit that performs operations such as vertex transformation and texture-coordinate transformation; for each vertex in the preset vertex data, the vertex shader is executed once.
a perspective division module 802, configured to perform perspective division on the clip coordinates to obtain normalized device coordinates;
In the embodiment of the present invention, the perspective division module 802 divides the clip coordinates (X_c, Y_c, Z_c) by W_c to obtain the normalized device coordinates; after the perspective division, the values of the normalized coordinates all lie in [-1, 1].
The size of W_c is set in advance, and can be adjusted according to actual conditions.
a viewport transformation module 803, configured to perform a viewport transformation on the normalized device coordinates to obtain window coordinates;
In the embodiment of the present invention, the viewport is a two-dimensional rectangular window area where OpenGL for Embedded Systems (OpenGL ES), a subset of the OpenGL three-dimensional graphics API, finally displays the result of the rendering operation. The viewport transformation controls where on the screen of the navigation device the navigation arrow is displayed and in what form (for example, whether the arrow is stretched or compressed), and also adjusts the display resolution.
The viewport transformation module 803 calculates the window coordinates according to the following conversion formula (the standard OpenGL viewport mapping):
    x_w = (w/2)·x_d + o_x
    y_w = (h/2)·y_d + o_y
    z_w = ((f − n)/2)·z_d + (n + f)/2
where (x_w, y_w, z_w) denotes the window coordinates, w the width of the window, h the height of the window, (x_d, y_d, z_d) the normalized device coordinates, o_x = x + w/2 and o_y = y + h/2 the window coordinates of the viewport center for a viewport whose lower-left corner is at (x, y), and n and f the endpoint values of the drawing range of the navigation arrow on the z-axis.
The navigation arrow drawn in the embodiment of the present invention is a 3D object; drawing a 3D object requires a perspective projection, which creates a sense of distance and implements the mapping from OpenGL coordinates to window coordinates. OpenGL coordinates are three-dimensional while the screen of the navigation device is two-dimensional; since the navigation arrow must be displayed in the window, whatever lies outside the window has to be clipped away, which is what the perspective projection processing does. The mapping process of the perspective projection is as described for the second embodiment and shown in FIG. 4: the perspective projection is performed in a perspective projection model comprising a camera or eye, a view frustum and a viewing-angle coordinate system; the view volume is a quadrangular pyramid with the camera or eye at the origin A(0,0,0) of the OpenGL coordinates, truncated by a front and a rear plane to form a frustum, the section plane near A(0,0,0) being the near clipping plane and the one far from A(0,0,0) the far clipping plane. The near clipping plane represents the range of planar drawing and has four edges: the top and bottom edges delimit the drawing range of the y-axis, fixed at [-1, 1] by the perspective division, and the left and right edges delimit the drawing range of the x-axis, likewise [-1, 1]; with n the distance from the camera or eye to the near drawing plane and f the drawing distance from the camera or eye to the farthest visible point, the z-axis drawing range is [-n, -f]. The drawing ranges of the x-, y- and z-axes having been determined, the drawn navigation arrow falls inside the frustum (the region between the near and far clipping planes). The perspective projection is described mathematically by the projection matrix given in the second embodiment, where a denotes the focal length of the camera, ratio the aspect ratio of the navigation device's screen, f the drawing distance from the camera or eye to the farthest visible point, and n the distance from the camera or eye to the near drawing plane. The main purpose of using the perspective projection is to place the navigation arrow inside the view frustum; to present the arrow at a good angle, it is additionally rotated about the z-axis using the rotation matrix given in the second embodiment, where a denotes the rotation angle. Through the rotation processing, an appropriate rotation angle can be selected so that the drawn navigation arrow sits at a suitable angle on the screen of the navigation device.
an illumination processing module 804, configured to perform preset illumination processing on the window coordinates to obtain the navigation arrow;
Further, referring to FIG. 9, FIG. 9 is a schematic diagram of the refined functional modules of the illumination processing module 804, comprising:
a rasterization unit 901, configured to perform a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments;
In the embodiment of the present invention, through the rasterization operation on the OpenGL window coordinates, the rasterization unit 901 decomposes each point, line and triangle into a large number of two-dimensional fragments; these fragments represent pixels that can be drawn on the screen of the navigation device, and each two-dimensional fragment contains a single solid color.
a drawing unit 902, configured to process the two-dimensional fragments with a fragment shader, and to draw the processed fragments on the screen to obtain the navigation arrow.
In the embodiment of the present invention, in order to color the solid-color two-dimensional fragments, the drawing unit 902 processes them with a fragment shader; each processed fragment has four components, of which red, green and blue represent the color, and the last component represents the transparency.
an overlay module 805, configured to superimpose the navigation arrow on the current real environment captured by the camera and to display it on the screen, so as to perform live-view navigation.
In the embodiment of the present invention, compared with the prior art, in the live-view navigation mode the coordinate transformation module 801 performs a coordinate transformation operation on the preset vertex data by using the vertex shader to obtain clip coordinates, the perspective division module 802 performs perspective division on the clip coordinates to obtain normalized device coordinates, the viewport transformation module 803 performs a viewport transformation on the normalized device coordinates to obtain window coordinates, the illumination processing module 804 performs preset illumination processing on the window coordinates to obtain the navigation arrow, and the overlay module 805 superimposes the navigation arrow on the current real environment captured by the camera, thereby performing live-view navigation.
Referring to FIG. 10, FIG. 10 is a schematic diagram of the functional modules of an augmented reality-based navigation apparatus according to a fifth embodiment of the present invention, comprising:
a second acquiring module 1001, configured to acquire in real time the latitude and longitude of the current position and the latitude and longitude of the destination position;
a second determining module 1002, configured to obtain a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is the angle between the navigation device and magnetic north;
In the embodiment of the present invention, as shown in FIG. 7, N denotes magnetic north, and as shown in the figure the deflection angle is the angle θ.
an updating module 1003, configured to calculate the rotation angle of the navigation arrow by using the deflection angle, and to update the guidance direction of the navigation arrow based on the rotation angle.
In the embodiment of the present invention, with the rotation matrix R obtained in the fourth embodiment of the present invention, inputting R into the API built into the navigation device yields not only the pitch angle and the roll angle but also the azimuth, Azimuth. From the azimuth Azimuth and the deflection angle θ, the angle α can be computed: α = [(θ − Azimuth) % 360 + 360] % 360. Then, from the azimuth Azimuth, the deflection angle θ and the angle α, the angle β is obtained: β = (180 − α − Azimuth + 360) % 360. The angle β represents the initial rotation angle of the navigation arrow. While the user navigates, the navigation device is moved and the azimuth Azimuth changes accordingly, so the angle β also changes; the direction and rotation angle of the navigation arrow can be updated according to the real-time variation of the angle β, so that the guidance direction can be updated in real time.
In the embodiment of the present invention, compared with the prior art, the second acquiring module 1001 acquires in real time the latitude and longitude of the current position and the latitude and longitude of the destination position, the second determining module 1002 obtains a real-time deflection angle based on them, and the updating module 1003 calculates the rotation angle of the navigation arrow by using the deflection angle and updates the direction and rotation angle of the navigation arrow based on the rotation angle, so that the guidance direction can be updated in real time.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; the division into modules is only a division by logical function, and in actual implementation there may be other divisions, for example multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed. Furthermore, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, devices or modules, and may be electrical, mechanical or of other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; they may be located in one place, or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product; the computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disk.
It should be noted that, for ease of description, the foregoing method embodiments are all expressed as series of action combinations; however, those skilled in the art should know that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. Secondly, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for a part not detailed in one embodiment, reference may be made to the related descriptions of other embodiments.
The above is a description of the augmented reality-based navigation method and apparatus provided by the present invention. Those skilled in the art will, following the ideas of the embodiments of the present invention, make changes in the specific implementation and the scope of application; in summary, the content of this specification should not be construed as limiting the present invention.

Claims (10)

  1. An augmented reality-based navigation method, characterized in that the method comprises:
    acquiring an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
    determining a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which a navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
    performing navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, and/or the roll angle and the current navigation mode satisfy a second preset condition.
  2. The method of claim 1, characterized in that the method further comprises:
    in the live-view navigation mode, performing a coordinate transformation operation on preset vertex data by using a vertex shader to obtain clip coordinates;
    performing perspective division on the clip coordinates to obtain normalized device coordinates;
    performing a viewport transformation on the normalized device coordinates to obtain window coordinates;
    performing preset illumination processing on the window coordinates to obtain a navigation arrow;
    superimposing the navigation arrow on the current real environment captured by a camera, and displaying it on the screen, so as to perform live-view navigation.
  3. The method of claim 2, characterized in that the step of performing preset illumination processing on the window coordinates to obtain a navigation arrow comprises:
    performing a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments;
    processing the two-dimensional fragments with a fragment shader, and drawing the processed two-dimensional fragments on the screen to obtain the navigation arrow.
  4. The method of claim 2 or 3, characterized in that the step of performing a viewport transformation on the normalized device coordinates to obtain window coordinates comprises:
    calculating the window coordinates according to the following conversion formula (the standard OpenGL viewport mapping):
    x_w = (w/2)·x_d + o_x,  y_w = (h/2)·y_d + o_y,  z_w = ((f − n)/2)·z_d + (n + f)/2
    wherein (x_w, y_w, z_w) denotes the window coordinates, w the width of the window, h the height of the window, (x_d, y_d, z_d) the normalized device coordinates, o_x = x + w/2 and o_y = y + h/2 the window coordinates of the viewport center for a viewport whose lower-left corner is at (x, y), and n and f the endpoint values of the drawing range of the navigation arrow on the z-axis.
  5. The method of claim 2, characterized in that the method further comprises:
    acquiring in real time the latitude and longitude of the current position and the latitude and longitude of the destination position;
    obtaining a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is the angle between the navigation device and magnetic north;
    calculating the rotation angle of the navigation arrow by using the deflection angle, and updating the guidance direction of the navigation arrow based on the rotation angle.
  6. An augmented reality-based navigation apparatus, characterized in that the apparatus comprises:
    a first acquiring module, configured to acquire an acceleration value sensed by an acceleration sensor and a magnetic value sensed by a magnetic sensor;
    a first determining module, configured to determine a pitch angle and a roll angle based on the acceleration value and the magnetic value, wherein the pitch angle is the angle at which a navigation device swings back and forth, and the roll angle is the angle at which the navigation device swings left and right;
    a switching module, configured to perform navigation mode switching if the pitch angle and the current navigation mode satisfy a first preset condition, and/or the roll angle and the current navigation mode satisfy a second preset condition.
  7. The apparatus of claim 6, characterized in that the apparatus further comprises:
    a coordinate transformation module, configured to perform, in the live-view navigation mode, a coordinate transformation operation on preset vertex data by using a vertex shader to obtain clip coordinates;
    a perspective division module, configured to perform perspective division on the clip coordinates to obtain normalized device coordinates;
    a viewport transformation module, configured to perform a viewport transformation on the normalized device coordinates to obtain window coordinates;
    an illumination processing module, configured to perform preset illumination processing on the window coordinates to obtain a navigation arrow;
    an overlay module, configured to superimpose the navigation arrow on the current real environment captured by a camera, and to display it on the screen, so as to perform live-view navigation.
  8. The apparatus of claim 7, characterized in that the illumination processing module comprises:
    a rasterization unit, configured to perform a rasterization operation on the window coordinates to obtain a number of two-dimensional fragments;
    a drawing unit, configured to process the two-dimensional fragments with a fragment shader, and to draw the processed two-dimensional fragments on the screen to obtain the navigation arrow.
  9. The apparatus of claim 7 or 8, characterized in that the viewport transformation module is specifically configured to:
    calculate the window coordinates according to the following conversion formula (the standard OpenGL viewport mapping):
    x_w = (w/2)·x_d + o_x,  y_w = (h/2)·y_d + o_y,  z_w = ((f − n)/2)·z_d + (n + f)/2
    wherein (x_w, y_w, z_w) denotes the window coordinates, w the width of the window, h the height of the window, (x_d, y_d, z_d) the normalized device coordinates, o_x = x + w/2 and o_y = y + h/2 the window coordinates of the viewport center for a viewport whose lower-left corner is at (x, y), and n and f the endpoint values of the drawing range of the navigation arrow on the z-axis.
  10. The apparatus of claim 7, characterized in that the apparatus further comprises:
    a second acquiring module, configured to acquire in real time the latitude and longitude of the current position and the latitude and longitude of the destination position;
    a second determining module, configured to obtain a real-time deflection angle based on the latitude and longitude of the current position and the latitude and longitude of the destination position, wherein the deflection angle is the angle between the navigation device and magnetic north;
    an updating module, configured to calculate the rotation angle of the navigation arrow by using the deflection angle, and to update the guidance direction of the navigation arrow based on the rotation angle.
PCT/CN2018/080764 2017-04-10 2018-03-28 Augmented reality-based navigation method and apparatus WO2018188479A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710227610.9A CN107015654A (zh) 2017-04-10 2017-04-10 Augmented reality-based navigation method and apparatus
CN201710227610.9 2017-04-10

Publications (1)

Publication Number Publication Date
WO2018188479A1 true WO2018188479A1 (zh) 2018-10-18

Family

ID=59445447

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/080764 WO2018188479A1 (zh) 2017-04-10 2018-03-28 Augmented reality-based navigation method and apparatus

Country Status (2)

Country Link
CN (1) CN107015654A (zh)
WO (1) WO2018188479A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115355926A (zh) * 2022-10-19 2022-11-18 北京百度网讯科技有限公司 Vehicle navigation method, apparatus, device and storage medium

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107015654A (zh) 2017-04-10 2017-08-04 深圳大学 Augmented reality-based navigation method and apparatus
CN108009124B (zh) * 2017-11-29 2021-03-26 天津聚飞创新科技有限公司 Rotation matrix calculation method and apparatus
CN108168555B (zh) * 2017-12-08 2021-05-07 李志新 Coordinate-positioning-based operation guidance method and system
CN111065891B (zh) * 2018-08-16 2023-11-14 北京嘀嘀无限科技发展有限公司 Indoor navigation system based on augmented reality
CN109059901B (zh) * 2018-09-06 2020-02-11 深圳大学 AR navigation method based on a social application, storage medium and mobile terminal
CN110440815A (zh) * 2019-08-16 2019-11-12 南京邮电大学 Navigation method based on augmented reality
CN111524392B (zh) * 2020-04-22 2022-05-06 智慧航海(青岛)科技有限公司 Integrated system for assisting remote piloting of intelligent ships
CN112306344B (zh) * 2020-10-19 2023-11-28 武汉中科通达高新技术股份有限公司 Data processing method and mobile terminal
CN115460320A (zh) * 2021-06-09 2022-12-09 阿里巴巴新加坡控股有限公司 Navigation method and apparatus, computer storage medium, and computer program product

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523473A (zh) * 2011-12-01 2012-06-27 中兴通讯股份有限公司 Three-dimensional interface display apparatus, method and terminal
CN103090862A (zh) * 2013-01-18 2013-05-08 华为终端有限公司 Terminal device and navigation mode switching method for terminal device
CN105741341A (zh) * 2016-01-27 2016-07-06 桂林长海发展有限责任公司 Three-dimensional space environment imaging system and method
CN107015654A (zh) * 2017-04-10 2017-08-04 深圳大学 Augmented reality-based navigation method and apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102523473A (zh) * 2011-12-01 2012-06-27 中兴通讯股份有限公司 Three-dimensional interface display apparatus, method and terminal
CN103090862A (zh) * 2013-01-18 2013-05-08 华为终端有限公司 Terminal device and navigation mode switching method for terminal device
CN105741341A (zh) * 2016-01-27 2016-07-06 桂林长海发展有限责任公司 Three-dimensional space environment imaging system and method
CN107015654A (zh) * 2017-04-10 2017-08-04 深圳大学 Augmented reality-based navigation method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115355926A (zh) * 2022-10-19 2022-11-18 北京百度网讯科技有限公司 Vehicle navigation method, apparatus, device and storage medium
CN115355926B (zh) * 2022-10-19 2023-09-19 北京百度网讯科技有限公司 Vehicle navigation method, apparatus, device and storage medium

Also Published As

Publication number Publication date
CN107015654A (zh) 2017-08-04

Similar Documents

Publication Publication Date Title
WO2018188479A1 (zh) 2018-10-18 Augmented reality-based navigation method and apparatus
EP3057066B1 (en) Generation of three-dimensional imagery from a two-dimensional image using a depth map
US10957011B2 (en) System and method of capturing and rendering a stereoscopic panorama using a depth buffer
US9626790B1 (en) View-dependent textures for interactive geographic information system
US9704282B1 (en) Texture blending between view-dependent texture and base texture in a geographic information system
US10659742B2 (en) Image generating apparatus and image display control apparatus
CA2550512A1 (en) 3d videogame system
JP2005339313A (ja) 2005-12-08 Image presentation method and apparatus
EP3655928B1 (en) Soft-occlusion for computer graphics rendering
US20190130599A1 (en) Systems and methods for determining when to provide eye contact from an avatar to a user viewing a virtual environment
WO2015196791A1 (zh) 2015-12-30 Binocular three-dimensional graphics rendering method and related system
US11417060B2 (en) Stereoscopic rendering of virtual 3D objects
JP2024026151A (ja) 没入型ビデオコンテンツをフォービエイテッドメッシュを用いてレンダリングするための方法、システム、および媒体
CN110889384A (zh) 2020-03-17 Scene switching method and apparatus, electronic device and storage medium
EP3665656B1 (en) Three-dimensional video processing
JP6719596B2 (ja) 画像生成装置、及び画像表示制御装置
JP6168597B2 (ja) 情報端末装置
WO2018201663A1 (zh) 2018-11-08 Stereoscopic graphics display method, apparatus and device
US10275939B2 (en) Determining two-dimensional images using three-dimensional models
CN109949396A (zh) 2019-06-28 Rendering method, apparatus, device and medium
TWM630947U (zh) 2022-08-21 Stereoscopic image playback device
CN110197524B (zh) 2023-11-10 Stereoscopic display method, device, apparatus and computer-readable storage medium
JP2001222726A (ja) 2001-08-17 Image processing method and image processing apparatus
US10453247B1 (en) Vertex shift for rendering 360 stereoscopic content
TWI812548B (zh) 2023-08-11 Method and computer device for generating side-by-side three-dimensional images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18784842

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 21.01.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18784842

Country of ref document: EP

Kind code of ref document: A1