CN113253843B - Indoor virtual roaming realization method and realization system based on panorama - Google Patents


Info

Publication number
CN113253843B
CN113253843B (application CN202110564925.9A)
Authority
CN
China
Prior art keywords
virtual
mobile phone
user
display end
image
Prior art date
Legal status
Active
Application number
CN202110564925.9A
Other languages
Chinese (zh)
Other versions
CN113253843A (en
Inventor
马琳
温育杜
郭士增
谭学治
孟维晓
叶亮
Current Assignee
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date
Filing date
Publication date
Application filed by Harbin Institute of Technology
Priority to CN202110564925.9A
Publication of CN113253843A
Application granted
Publication of CN113253843B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/012: Head tracking input arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 19/00: Manipulating 3D models or images for computer graphics
    • G06T 19/006: Mixed reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2203/00: Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F 2203/01: Indexing scheme relating to G06F3/01
    • G06F 2203/012: Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00: Indexing scheme for image data processing or generation, in general
    • G06T 2200/04: Indexing scheme for image data processing or generation, in general involving 3D image data
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a panorama-based indoor virtual roaming realization method and system, in the field of image processing. The invention aims to solve the problem that, when a three-dimensional model is constructed, an accurate correspondence between the model and the scene dimensions is difficult to achieve, so that the virtual environment obtained after texture mapping and rendering is not sufficiently realistic. The process is as follows: step one, construct a three-dimensional virtual space model based on images and the binocular parallax principle; step two, on the basis of the model constructed in step one, realize field-of-view transformation at a fixed point according to the rotation information of the user's head; step three, detect the user's head state with a gyroscope, and realize the position-switching function of the virtual viewpoint by combining the head state with a Bluetooth controller.

Description

Indoor virtual roaming realization method and realization system based on panorama
Technical Field
The invention relates to the field of image processing, and in particular to a method for realizing indoor virtual roaming using indoor panoramic images, panoramic image display techniques, the binocular parallax principle, Android development and related technologies.
Background
In recent years the computing power of computers has improved rapidly, and virtual reality technology based on computer graphics and image processing has developed with it. Virtual reality technology uses a computer to simulate a three-dimensional world that gives the user an immersive feeling and supports real-time information interaction with the user; the technology is characterized by perceptibility, interactivity, real-time response and immersion. Perceptibility means that a virtual reality system can provide the user with rich sensory information such as vision, hearing, touch and even smell, creating realistic sensations. Interactivity means that the system supports information exchange with the user and responds to the user's demands. Real-time response means that the system analyzes the user's demands during interaction with the virtual environment and provides a corresponding response without delay. Immersion is the greatest technical highlight of virtual reality: the system analyzes external conditions to construct an environment realistic enough that the user has the illusion of being personally on the scene.
These technical characteristics make virtual reality widely applicable across industries. In mechanical design, a designer first builds a virtual mechanical product with virtual reality technology, then uses its interactivity and perceptibility to collect information on how users handle the virtual product in order to optimize the design. In medicine, virtual reality can provide digital rehearsals of complex operations for medical research; through such digital experiments, medical researchers reduce experimental cost and improve the safety of surgical practice. In education, a virtual training base built with virtual reality can switch teaching scenes on demand and provide students with many virtual operation platforms on which they can practice repeatedly, making teaching more engaging while reducing its cost.
Throughout the development of virtual reality technology, how to construct a virtual three-dimensional space has been both the difficulty of the technology and the focus of research. In a traditional virtual reality system, a three-dimensional model of the scene is first built by computer, and the virtual scene is then completed with texture mapping, rendering and similar post-processing. This approach has several drawbacks; for example, displaying three-dimensional scenes in real time requires computers with very powerful computing and graphics capabilities. Although the rapid progress of computer science has greatly increased available computing power in recent years, it still falls short of what a good virtual reality experience demands. Moreover, building a three-dimensional geometric model of a scene is a complex process: real scene objects have intricate structures, so an accurate correspondence between the model and the scene dimensions is difficult to achieve during modeling, which leaves the virtual environment after texture mapping and rendering insufficiently realistic.
With the development of image processing technology, three-dimensional modeling based on panoramic images has also emerged. A panoramic image displays environment information over a very large scene and an ultra-wide viewing angle far greater than that of the human eye, so it can present more scene information. Panorama-based virtual roaming presents real-world scenes to the user through panoramic images, letting the user observe a three-dimensional scene from a position much as in real life. Within a panoramic image the user can browse the virtual environment through a full 360 degrees, which gives a strong sense of immersion. Furthermore, by capturing panoramic images in advance at a sequence of viewpoints along a specified path, roaming observation of the virtual scene along that path, i.e. virtual roaming in the virtual environment, can be realized.
Objects seen in the real world appear three-dimensional because a person's two eyes are a certain distance apart: this distance gives the images seen by the left and right eyes a certain parallax, and when the visual nerve centre fuses the two images, a stereoscopic picture with depth and layering is produced. This phenomenon of perceiving a stereoscopic object thanks to left-right parallax is called the binocular parallax principle.
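The binocular parallax principle described above is commonly quantified by the stereo triangulation relation Z = f·B/d for a rectified camera pair; the patent states the principle only qualitatively, so the formula, the function name and the parameter values below are illustrative assumptions:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulate depth Z = f * B / d for a rectified binocular pair.

    focal_px     -- focal length in pixels (assumed value)
    baseline_m   -- distance between the two camera centres in metres
    disparity_px -- horizontal pixel offset of the same scene point
                    between the left and right views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    return focal_px * baseline_m / disparity_px
```

The nearer an object is, the larger its disparity, which is exactly why the fused left/right views convey depth.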
Disclosure of Invention
The invention aims to solve the problem that, in constructing a three-dimensional model, an accurate correspondence between the model and the scene dimensions is difficult to achieve, leaving the virtual environment after texture mapping and rendering insufficiently realistic, and provides a panorama-based indoor virtual roaming realization method and system.
The panorama-based indoor virtual roaming implementation method comprises the following specific process:
step one, construct a three-dimensional virtual space model based on images and the binocular parallax principle;
step two, on the basis of the model constructed in step one, realize field-of-view transformation at a fixed point according to the rotation information of the user's head;
step three, detect the user's head state with a gyroscope, and realize the position-switching function of the virtual viewpoint by combining the head state with a Bluetooth controller.
Preferably, in step one, the three-dimensional virtual space model is constructed based on images and the binocular parallax principle; the specific process is as follows:
step 1.1, shoot left and right views at each viewpoint with a binocular camera;
step 1.2, transmit the left and right views shot by the binocular camera in step 1.1 to the image display controls, and display them separately at the display end;
step 1.3, embed the mobile phone into VR glasses and have the user wear them, so that the left lens of the VR glasses shows the left view and the right lens shows the right view.
Preferably, the image display controls in step 1.2 are two equal-sized image display controls;
and the two equal-sized image display controls are loaded at the display end.
Preferably, the display end in step 1.2 is a mobile phone display end.
Preferably, in step two, a three-dimensional virtual space model is constructed on the basis of step one, so that field-of-view transformation is realized according to the rotation information of the user's head at a fixed point; the specific process is as follows:
step 2.1, stitch the multiple images shot at the same viewpoint by the binocular camera in step 1.1 into a panoramic picture, read the pixel length of the panorama, and divide it into 360 parts along its length, each part corresponding to one unit angle;
the current left and right views seen by the VR glasses wearer are sub-images of the panorama;
step 2.2, when the VR glasses wearer rotates the head left or right to view the panorama, calculate the starting coordinates of the new sub-image from the rotation angle of the mobile phone about its x axis;
step 2.3, crop the new sub-image according to its starting coordinates, width and height, load it into the image display control, and display it at the mobile phone display end so that the VR glasses wearer sees the new sub-image.
Preferably, in step 2.2, the rotation angle of the mobile phone about its x axis is detected with the phone's built-in gyroscope.
Preferably, in step three, the gyroscope detects the user's head state, and the position-switching function of the virtual viewpoint is realized by combining the head state with the Bluetooth controller; the specific process is as follows:
step 3.1, initialize the current viewpoint position to point 11, and read the gyroscope data to determine the direction straight ahead of the current viewpoint;
step 3.2, on the basis of step 3.1, set two control keys on the Bluetooth controller to control the forward, backward, leftward and rightward movement of the virtual viewpoint.
Preferably, in step 3.1, the current viewpoint position is initialized to point 11 and the gyroscope data is read to determine the direction straight ahead of the current viewpoint; the specific process is as follows:
when the user's head angle is smaller than or equal to 45° or greater than 315°, point 21 is taken to be straight ahead of the current viewpoint, and when the mobile phone display end receives a forward instruction it loads and displays the picture of point 21;
when the user's head angle lies in the interval (45°, 135°), point 10 is taken to be straight ahead, and on a forward instruction the mobile phone display end loads and displays the picture of point 10;
when the user's head angle lies in the interval (135°, 225°), point 01 is taken to be straight ahead, and on a forward instruction the mobile phone display end loads and displays the picture of point 01;
and when the user's head angle lies in the interval (225°, 315°), point 12 is taken to be straight ahead, and on a forward instruction the mobile phone display end loads and displays the picture of point 12.
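The four heading intervals above can be expressed as a small quantization function. This is a sketch: the function name is an assumption, and the handling of the exact boundary angles is assumed, since the patent leaves some interval endpoints open.

```python
def forward_neighbor(heading_deg):
    """Quantize the gyroscope heading to the grid point taken to be
    straight ahead of viewpoint 11, following the four intervals in the
    text: <=45 or >315 -> 21, (45,135) -> 10, (135,225) -> 01,
    (225,315) -> 12.  Boundary angles are resolved with <= here, an
    assumption not fixed by the text.
    """
    h = heading_deg % 360
    if h <= 45 or h > 315:
        return "21"
    if h <= 135:
        return "10"
    if h <= 225:
        return "01"
    return "12"
```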
Preferably, in step 3.2, on the basis of step 3.1, two control keys are set on the Bluetooth controller to control the forward, backward, leftward and rightward movement of the virtual viewpoint; the specific process is as follows:
The initial virtual position of the mobile phone display end is point 11, the initial gyroscope angle is 0°, and straight ahead points at point 21. When the forward key of the controller is pressed, the row number of the position point is increased by 1, i.e. the new point becomes 21, and the display end loads from the server and displays the panoramic picture corresponding to point 21. When the backward key is pressed, the row number is decreased by 1 to give point 01, and the mobile phone display end loads and displays the panoramic picture corresponding to point 01.
With the initial virtual position of the mobile phone display end at point 11, moving forward to the left is realized as follows:
first, the user rotates 90° to the left; the gyroscope detects the leftward head rotation, and the point straight ahead of point 11 changes from 21 to 10;
second, the forward key of the Bluetooth controller is pressed to instruct the display end to advance; the column number of the position point is decreased by 1 to give 10, and the display end loads and displays the panoramic picture at point 10, realizing leftward advance of the virtual viewpoint.
With the initial virtual position of the mobile phone display end at point 11, moving forward to the right is realized as follows:
first, the user rotates 90° to the right; the gyroscope detects the rightward head rotation, and the point straight ahead of point 11 changes from 21 to 12;
second, the forward key of the Bluetooth controller is pressed to instruct the display end to advance; the column number of the position point is increased by 1 to give 12, and the display end loads and displays the panoramic picture at point 12, realizing rightward advance of the virtual viewpoint.
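The row/column update rules above (row +1 forward at heading 0°, column -1 at 90° left, column +1 at 270° right, row -1 backward) can be sketched as follows. The grid convention is inferred from the text (11 -> 21 forward, 11 -> 10 left, 11 -> 12 right, 11 -> 01 back); the function and key names are assumptions.

```python
def move(row, col, heading_deg, key):
    """Update the (row, col) viewpoint index on the capture grid.

    heading_deg is the gyroscope heading: 0 deg faces increasing row,
    90 deg (left) faces decreasing column, 270 deg (right) faces
    increasing column.  key is 'forward' or 'backward', the two
    Bluetooth-controller keys.
    """
    h = heading_deg % 360
    if h <= 45 or h > 315:
        d_row, d_col = 1, 0      # facing point 21 from point 11
    elif h <= 135:
        d_row, d_col = 0, -1     # facing point 10 from point 11
    elif h <= 225:
        d_row, d_col = -1, 0     # facing point 01 from point 11
    else:
        d_row, d_col = 0, 1      # facing point 12 from point 11
    sign = 1 if key == "forward" else -1
    return row + sign * d_row, col + sign * d_col
```

For example, starting at point 11 and pressing forward while facing 90° to the left yields point 10, matching the leftward-advance walkthrough above.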
Preferably, a panorama-based indoor virtual roaming implementation system is used to perform the panorama-based indoor virtual roaming implementation method of one of claims 1 to 9.
The beneficial effects of the invention are as follows:
the invention aims at quickly constructing a virtual environment through an indoor panoramic image to realize indoor virtual roaming. Virtual roaming is achieved herein by simulating the process of human eyes to observe a real scene using binocular parallax principles. Firstly, a binocular system is built through a pair of cameras to shoot left and right eye views, and the left and right eye views are respectively displayed by an Android mobile phone. When a user observes an image displayed on the mobile phone through the VR glasses, the left view and the right view are respectively projected to the left eye and the right eye of the user, and the brain of the user fuses the views seen by the left eye and the right eye so as to generate a stereoscopic impression. And then continuously shooting panoramic images at a plurality of viewpoints through a specified path in advance, when a user wears VR glasses, detecting the head state of the user by using a sensor on a display mobile phone, and displaying views at different view angles according to the states, so that roaming observation of virtual scenes on the specified path is realized, a more lifelike indoor virtual space is constructed, and roaming in the virtual space is realized.
Drawings
FIG. 1 is a schematic diagram of the panorama-based indoor virtual roaming implementation method;
FIG. 2 is a binocular parallax principle diagram;
FIG. 3 is a diagram of the mobile phone coordinate system;
FIG. 4 is a viewpoint jump mode diagram;
FIG. 5 is a viewpoint switching mode diagram;
FIG. 6 is a diagram of the mobile phone display interface.
Detailed Description
The first embodiment is as follows: the panorama-based indoor virtual roaming implementation method of this embodiment comprises the following specific process:
the invention mainly researches that the indoor pictures are utilized to construct the virtual space, and develops the application on the Android platform to realize indoor virtual space roaming, and a specific implementation system is shown in figure 1. Since the virtual space is built based on panoramic images, a large amount of panoramic image resources are needed to build virtual environments with different viewpoints when browsing the virtual space. If the picture resources are integrated into the application program of the display mobile phone, the problems of increased program memory overhead, reduced program fluency and the like are caused, so that a dynamic image resource loading mode is adopted to construct a virtual space. Firstly, images at all viewpoints are acquired by a binocular system on a designated path, and a plurality of images at the same viewpoint are spliced into a panoramic image by an image splicing technology and stored in a server. The mobile phone display end embedded in the VR glasses requests the server to load the panoramic image under the corresponding view point in real time according to the position moving instruction of the Bluetooth control handle, and displays the picture under the corresponding view point according to the virtual space construction rule, so that the view point in the virtual space is randomly switched, and the virtual space roaming function is realized.
Step one, constructing a three-dimensional virtual space model based on an image and binocular parallax principle;
step two, constructing a three-dimensional virtual space model based on the step one, and realizing visual field transformation according to head rotation information (left-right rotation head around-the-eye surrounding information) of a user at a fixed point;
and thirdly, continuously shooting panoramic images under a plurality of viewpoints according to a specified path in advance, detecting the head state of a user by using a gyroscope, and realizing the position switching function of the virtual viewpoints in a mode of combining the head state of the user with a Bluetooth controller.
The second step can browse the environment on one point, the third step realizes walking in the virtual environment, and can be switched to other viewpoints, namely, a Bluetooth controller is added to control the position to move on the basis of the second step, and the second step is that the position cannot move on one point;
the second embodiment is as follows: the first difference between the present embodiment and the specific embodiment is that in the first step, a three-dimensional virtual space model is constructed based on the image and binocular parallax principle; the specific process is as follows:
the virtual space construction is realized by adopting the binocular parallax principle, and the binocular parallax principle is shown in figure 2. Because the eyes of a person have a certain distance in the horizontal direction, when the eyes watch a real object in a three-dimensional environment, images seen by the left eye and the right eye have a certain difference, and after receiving the two images with certain difference, the brain of the person fuses the two images through the optic nerve, so that a stereoscopic impression is generated. Based on the principle, the specific method for constructing the virtual space is as follows:
step one, using binocular cameras to simulate the eyes of a person to respectively shoot left and right (eye) views with certain parallax at each viewpoint;
step two, respectively transmitting left and right views shot by the step one-to-one binocular camera to an image display control, and respectively displaying left and right (eye) views on a display end (on a mobile phone screen);
step three, embedding the mobile phone into VR glasses, enabling a user to wear the VR glasses, enabling a left lens of the VR glasses to display a left view, and enabling a right lens of the VR glasses to display a right view;
when a user browses the virtual space by using the VR glasses, the VR glasses respectively present left and right views to the left and right eyes of the user through the light path, and the brain of the user fuses the views seen by the left and right eyes so as to generate a stereoscopic impression.
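The side-by-side presentation in the two equal-sized display controls can be sketched as composing one frame from the two views. Pixel rows are plain Python lists for illustration; the real implementation uses Android image controls on the phone screen.

```python
def stereo_frame(left_rows, right_rows):
    """Compose one side-by-side frame from equal-sized left/right views.

    Each view is a list of pixel rows.  The composed frame places the
    left view in the left half and the right view in the right half,
    mirroring the two equal-sized image display controls: the VR
    glasses' optics then route each half to the matching eye.
    """
    if len(left_rows) != len(right_rows):
        raise ValueError("left and right views must have the same height")
    return [l + r for l, r in zip(left_rows, right_rows)]
```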
Other steps and parameters are the same as in the first embodiment.
The third embodiment is as follows: this embodiment differs from the first or second in that the image display controls in step 1.2 are two equal-sized image display controls;
the two equal-sized image display controls are loaded at the display end (on the mobile phone screen).
The display end loads the two equal-sized image display controls onto the mobile phone display through the picture display facilities of the Android platform.
Other steps and parameters are the same as in the first or second embodiment.
The fourth embodiment is as follows: this embodiment differs from the first to third in that the display end in step 1.2 is a mobile phone display end.
Other steps and parameters are the same as in the first to third embodiments.
The fifth embodiment is as follows: this embodiment differs from the first to fourth in that, in step two, the three-dimensional virtual space model constructed in step one is used to realize field-of-view transformation according to the rotation information of the user's head (turning the head left and right to look around) at a fixed point; the specific process is as follows:
the second step can browse the environment on one point, the third step realizes walking in the virtual environment, and can be switched to other viewpoints, namely, a Bluetooth controller is added to control the position to move on the basis of the second step, and the second step is that the position cannot move on one point;
in the real world, the user can watch objects in different directions by rotating the head, and in the virtual reality application, the virtual view point is changed when the head of the user rotates, and the user sees different view point scenes, so that the immersion of the user is enhanced. According to the invention, an Android mobile phone and VR glasses are used for constructing a virtual system, a gyroscope carried by the mobile phone is used for detecting the head state of a user, and corresponding picture resources are displayed according to the head rotation state of the user, so that the transformation of virtual viewpoints along with the rotation of the head of the user is realized.
The gyroscope sensor detects the phone's deflection angles about the x, y and z axes of the phone coordinate system shown in fig. 3. When the user wears the VR glasses, the y axis of the phone coordinate system is parallel to the line through the two eyes, so turning the head left or right is equivalent to rotating the phone about its x axis. To detect left-right head rotation, the invention therefore reads the rotation angle about the phone's x axis with the built-in gyroscope, normalizes it into the interval [0°, 360°), and selects the corresponding display sub-image of the panorama according to the size of that angle, realizing head-tracked viewpoint switching.
The procedure for selecting the display sub-image from the phone's x-axis rotation angle is as follows:
step 2.1, stitch the multiple images shot at the same viewpoint by the binocular camera in step 1.1 into a panoramic picture, read the pixel length of the panorama, and divide it into 360 parts along its length, each part corresponding to one unit angle;
the current left and right views seen by the VR glasses wearer are sub-images of the panorama;
calculate the translation of the display sub-image corresponding to each unit angle of rotation of the panorama;
step 2.2, when the VR glasses wearer rotates the head left or right to view the panorama, calculate the starting coordinates of the new sub-image (the coordinates of its upper-left or lower-left corner) from the rotation angle of the phone about its x axis;
step 2.3, crop the new sub-image according to its starting coordinates, width and height, load it into the image display control, and display it at the mobile phone display end (on the phone screen) so that the VR glasses wearer sees the new sub-image.
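The angle-to-coordinate mapping in steps 2.1-2.2 can be sketched in Python. The function name, the example pixel width and the rounding policy are assumptions for illustration, not taken from the patent.

```python
def subimage_origin(pano_width_px, angle_deg):
    """Map the phone's x-axis rotation angle to the left-edge x
    coordinate of the sub-image cropped from the panorama.

    The panorama is divided into 360 unit angles, so one degree of head
    rotation shifts the crop window by pano_width_px / 360 pixels.  The
    modulo keeps the origin inside the image; the patent instead handles
    wrap-around with a duplicated strip (fig. 4).
    """
    px_per_degree = pano_width_px / 360.0
    x = int(round((angle_deg % 360) * px_per_degree))
    return x % pano_width_px
```

For a 3600-pixel-wide panorama, each degree of rotation moves the crop origin by 10 pixels.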
Through these steps, the display end uses the phone's gyroscope to detect changes in the phone's x-axis rotation angle in real time and updates the displayed sub-image accordingly, so the virtual viewpoint follows the user's head rotation and the sense of immersion is improved.
In practical development, the length of a panoramic image is finite while the angle through which a person can turn left or right is unbounded, so invoking sub-images as above can push the sub-image beyond the panorama's range and make the application crash. To avoid this, sub-image loading uses a panorama covering more than 360 degrees together with a viewpoint jump, as shown in fig. 4. The panorama in fig. 4 spans more than 360 degrees, and images A and B are identical sub-images within it. If the currently displayed sub-image is A and the user's head keeps rotating left, the viewpoint jumps to B and head-tracked viewpoint switching continues from there; if the currently displayed sub-image is B and the head keeps rotating right, the viewpoint jumps to A and switching continues. This handles the finite panorama length under unlimited left-right rotation, and the jump transitions are natural, giving a good visual effect.
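The unbounded-rotation problem that fig. 4 solves can also be sketched as keeping an accumulated head angle inside one full turn. The patent realizes the seamless wrap with the duplicated sub-images A and B; the per-frame delta representation and the modulo below are an assumed simplification.

```python
def track_rotation(deltas_deg):
    """Accumulate unbounded left/right head rotation into a display
    angle normalized to [0, 360).

    deltas_deg is a sequence of per-frame gyroscope angle changes in
    degrees (positive = one direction, negative = the other).  The user
    may rotate indefinitely, but the angle used to pick the sub-image
    always stays inside one full turn, mimicking the A/B jump.
    """
    total = 0.0
    for delta in deltas_deg:
        total += delta
    return total % 360.0
```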
Other steps and parameters are the same as in one of the first to fourth embodiments.
Specific embodiment six: this embodiment differs from the first to fifth embodiments in that, in step two-two, the x-axis rotation angle of the mobile phone is detected by the phone's built-in gyroscope (the phone's x-axis rotation angle corresponds to the panorama's unit rotation angle).
Other steps and parameters are the same as in one of the first to fifth embodiments.
Seventh embodiment: this embodiment differs from the first to sixth embodiments in that, in step three, panoramas are shot in advance at a plurality of viewpoints along a specified path, the gyroscope then detects the user's head state, and the position switching function of the virtual viewpoint is realized by combining the user's head state with the Bluetooth controller; the specific process is as follows:
step two allows browsing the environment from a single point, whereas step three realizes walking in the virtual environment by switching to other viewpoints; that is, a Bluetooth controller is added on top of step two to control position movement, while in step two the position is fixed at one point;
the 360-degree circular view under the fixed point of the virtual environment can be realized by detecting the rotation state of the head of the user to control the visual angle switching, namely the fixed point browsing of the virtual environment is realized. When a person visits an indoor environment in the real world, the person can go forward in the directions of front, back, left, right and the like to visit the indoor environment at different viewpoints. In order to enable a user to browse a virtual space under different viewpoints in a virtual application, the virtual viewpoints should also be moved in the front-back, left-right directions and the like to improve immersion, i.e. roaming in the virtual space can be realized.
At present, viewpoint movement in the four directions of forward, backward, left, and right is realized.
Whether the direction directly in front of the current viewpoint points along the row direction or the column direction is judged from the angle data of the user's head rotation state, as shown in fig. 5:
step three-one, initializing the current viewpoint position to 11 in fig. 5 and reading the gyroscope data to judge the direction directly in front of the current viewpoint;
and step three-two, on the basis of step three-one, setting two control keys on the Bluetooth controller to control forward, backward, leftward, and rightward movement of the virtual viewpoint.
Other steps and parameters are the same as in one of the first to sixth embodiments.
Eighth embodiment: this embodiment differs from one of the first to seventh embodiments in that, in step three-one, the current viewpoint position is initialized to 11 in fig. 5 and the gyroscope data is read to judge the direction directly in front of the current viewpoint; the specific process is as follows:
when the user's head angle is less than or equal to 45 degrees or greater than 315 degrees, the point directly in front of the current viewpoint is taken to be position 21, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 21;
when the user's head angle lies in the (45, 135] interval, the point directly in front is taken to be position 10, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 10;
when the user's head angle lies in the (135, 225] interval, the point directly in front is taken to be position 01, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 01;
and when the user's head angle lies in the (225, 315] interval, the point directly in front is taken to be position 12, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 12.
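The four angle intervals above amount to quantizing the head direction to the nearest of the four grid neighbours. This can be sketched in Python as follows (the function name and the row/column tuple convention are illustrative assumptions, not from the patent):

```python
def front_point(row, col, head_angle_deg):
    """Grid point taken to be directly in front of viewpoint (row, col),
    following the angle intervals above (boundary angles assigned to
    the lower interval)."""
    a = head_angle_deg % 360.0
    if a <= 45 or a > 315:   # facing forward along the rows: 11 -> 21
        return (row + 1, col)
    elif a <= 135:           # facing left: column - 1, 11 -> 10
        return (row, col - 1)
    elif a <= 225:           # facing backward: row - 1, 11 -> 01
        return (row - 1, col)
    else:                    # facing right: column + 1, 11 -> 12
        return (row, col + 1)
```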
Other steps and parameters are the same as those of one of the first to seventh embodiments.
Detailed description nine: this embodiment differs from one of the first to eighth embodiments in that, in step three-two, on the basis of step three-one, two control keys are set on the Bluetooth controller to control forward, backward, leftward, and rightward movement of the virtual viewpoint; the specific process is as follows:
the initial virtual position of the mobile phone display end is at position 11 shown in fig. 5, the gyroscope's initial angle is 0 degrees, pointing directly ahead at position 21; when the forward control on the controller is pressed, the row number of the position point increases by 1, i.e. the new position becomes 21, and the display end loads the panorama corresponding to position 21 from the server and displays it; when the backward control is pressed, the row number decreases by 1 to give 01, and the mobile phone display end loads and displays the panorama corresponding to position 01;
with the initial virtual position of the mobile phone display end at position 11, moving forward-left as shown in fig. 5 is realized as follows:
first, the user rotates 90 degrees to the left and the gyroscope detects the leftward head rotation; the point in front of position 11 changes from 21 to 10;
second, pressing the forward control on the Bluetooth controller instructs the display end to advance; the column number of the position point decreases by 1, giving 10, and the display end loads and displays the panorama at position 10, realizing forward-left movement of the virtual viewpoint;
with the initial virtual position of the mobile phone display end at position 11, moving forward-right as shown in fig. 5 is realized as follows:
first, the user rotates 90 degrees to the right and the gyroscope detects the rightward head rotation; the point in front of position 11 changes from 21 to 12;
second, pressing the forward control on the Bluetooth controller instructs the display end to advance; the column number of the position point increases by 1, giving 12, and the display end loads and displays the panorama at position 12, realizing forward-right movement of the virtual viewpoint.
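Combining the head-direction quantization with the controller's forward/backward keys gives the complete movement rule: advance one grid cell along the facing direction, or retreat one cell against it. A self-contained Python sketch (the function name and command strings are illustrative assumptions, not from the patent):

```python
def step_viewpoint(row, col, head_angle_deg, command):
    """Move the virtual viewpoint one grid cell forward or backward
    relative to the direction the head currently faces."""
    a = head_angle_deg % 360.0
    if a <= 45 or a > 315:
        d = (1, 0)    # facing along increasing rows (11 -> 21)
    elif a <= 135:
        d = (0, -1)   # facing left: column decreases (11 -> 10)
    elif a <= 225:
        d = (-1, 0)   # facing backward: row decreases (11 -> 01)
    else:
        d = (0, 1)    # facing right: column increases (11 -> 12)
    sign = {"forward": 1, "backward": -1}.get(command, 0)
    return (row + sign * d[0], col + sign * d[1])
```

For instance, pressing forward at position 11 while facing 90 degrees (left) yields position 10, matching the forward-left procedure above.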
Other steps and parameters are the same as in one of the first to eighth embodiments.
Detailed description ten: the embodiment relates to a panorama-based indoor virtual roaming implementation system, which is used for executing the panorama-based indoor virtual roaming implementation method of one of the first to ninth embodiments.
The following examples are used to verify the benefits of the present invention:
embodiment one:
to verify the feasibility of the method presented herein, an indoor experimental scene was selected for verification. Owing to the epidemic, the verification site finally chosen was the living room of the author's residence. Given the constraints of the experimental scene, 5×3 sampling points as shown in fig. 5 were arranged indoors to shoot panoramas from different viewpoints; each collected panorama is uniquely identified by its row and column, stored on a simple purpose-built server, and loaded by the display end using that unique identifier. To make viewpoint switching continuous and realistic, a sampling-point spacing of 30 cm was chosen, matching the stride of a walking adult. Finally, the display effect was achieved through programming development on the display phone, as shown in fig. 6; the phone is embedded in the VR glasses, and wearing them one sees a virtual space consistent with the real scene rather realistically. The virtual viewpoint switches smoothly under the combination of head-rotation detection and Bluetooth handle control, realizing roaming in the virtual space. The experiment shows that rapidly constructing a virtual environment from indoor panoramas to realize indoor virtual roaming is feasible.
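The row/column naming scheme used to load panoramas from the simple server might look like the following (a hypothetical sketch: the URL pattern, file extension, and server name are assumptions; only the 5×3 grid and the row/column identifiers come from the experiment described above):

```python
BASE_URL = "http://server.example/panoramas"  # assumed server layout
ROWS, COLS = 5, 3  # the 5 x 3 sampling grid, 30 cm spacing

def panorama_url(row, col):
    """URL of the panorama shot at grid point (row, col); the unique
    identifier is the row digit followed by the column digit."""
    if not (0 <= row < ROWS and 0 <= col < COLS):
        raise ValueError("viewpoint outside the sampled 5x3 grid")
    return f"{BASE_URL}/{row}{col}.jpg"
```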
The present invention is capable of other and further embodiments and its several details are capable of modification and variation in light of the present invention, as will be apparent to those skilled in the art, without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (6)

1. An indoor virtual roaming realization method based on panoramic pictures is characterized in that: the method comprises the following specific processes:
step one, constructing a three-dimensional virtual space model based on an image and binocular parallax principle;
step two, on the basis of the three-dimensional virtual space model constructed in step one, realizing field-of-view transformation at a fixed point according to the rotation information of the user's head;
step three, detecting the user's head state with the gyroscope and realizing the position switching function of the virtual viewpoint by combining the user's head state with the Bluetooth controller;
in step two, on the basis of the three-dimensional virtual space model constructed in step one, field-of-view transformation at a fixed point according to the rotation information of the user's head is realized; the specific process is as follows:
step two-one, stitching the plurality of images shot at the same viewpoint by the binocular camera in step one-one to obtain a panorama, reading the panorama's pixel length, and dividing the panorama into 360 parts by length, each part being one unit angle;
the current left and right views seen by the VR glasses wearer are sub-images of the panorama;
step two-two, when the VR glasses wearer rotates the head left or right to view the panorama, calculating the starting coordinates of the new sub-image from the x-axis rotation angle of the mobile phone;
step two-three, cropping the new sub-image according to its starting coordinates, width, and height, loading it into the image display control, and displaying it on the mobile phone display end, so that the VR glasses wearer sees the new sub-image;
in step three, the gyroscope detects the user's head state, and the position switching function of the virtual viewpoint is realized by combining the user's head state with the Bluetooth controller; the specific process is as follows:
step three-one, initializing the current viewpoint position to 11 and reading the gyroscope data to judge the direction directly in front of the current viewpoint;
step three-two, on the basis of step three-one, setting two control keys on the Bluetooth controller to control forward, backward, leftward, and rightward movement of the virtual viewpoint;
in step three-one, the current viewpoint position is initialized to 11 and the gyroscope data is read to judge the direction directly in front of the current viewpoint; the specific process is as follows:
when the user's head angle is less than or equal to 45 degrees or greater than 315 degrees, the point directly in front of the current viewpoint is taken to be position 21, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 21;
when the user's head angle lies in the (45, 135] interval, the point directly in front is taken to be position 10, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 10;
when the user's head angle lies in the (135, 225] interval, the point directly in front is taken to be position 01, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 01;
when the user's head angle lies in the (225, 315] interval, the point directly in front is taken to be position 12, and when the mobile phone display end executes a forward instruction it loads and displays the picture of position 12;
in step three-two, on the basis of step three-one, two control keys are set on the Bluetooth controller to control forward, backward, leftward, and rightward movement of the virtual viewpoint; the specific process is as follows:
the initial virtual position of the mobile phone display end is at position 11, the gyroscope's initial angle is 0 degrees, pointing directly ahead at position 21; when the forward control on the controller is pressed, the row number of the position point increases by 1, i.e. the new position becomes 21, and the display end loads the panorama corresponding to position 21 from the server and displays it; when the backward control is pressed, the row number decreases by 1 to give 01, and the mobile phone display end loads and displays the panorama corresponding to position 01;
with the initial virtual position of the mobile phone display end at position 11, moving forward-left is realized as follows:
first, the user rotates 90 degrees to the left and the gyroscope detects the leftward head rotation; the point in front of position 11 changes from 21 to 10;
second, pressing the forward control on the Bluetooth controller instructs the display end to advance; the column number of the position point decreases by 1, giving 10, and the display end loads and displays the panorama at position 10, realizing forward-left movement of the virtual viewpoint;
with the initial virtual position of the mobile phone display end at position 11, moving forward-right is realized as follows:
first, the user rotates 90 degrees to the right and the gyroscope detects the rightward head rotation; the point in front of position 11 changes from 21 to 12;
second, pressing the forward control on the Bluetooth controller instructs the display end to advance; the column number of the position point increases by 1, giving 12, and the display end loads and displays the panorama at position 12, realizing forward-right movement of the virtual viewpoint.
2. The indoor virtual roaming implementation method based on the panorama as claimed in claim 1, wherein: in the first step, a three-dimensional virtual space model is constructed based on the image and binocular parallax principle; the specific process is as follows:
shooting left and right views under each viewpoint by using a binocular camera;
step one-two, transmitting the left and right views shot by the binocular camera in step one-one to the image display controls respectively, and displaying them respectively on the display end;
step one, embedding the mobile phone into VR glasses, enabling a user to wear the VR glasses, enabling left side lenses of the VR glasses to display left views, and enabling right side lenses of the VR glasses to display right views.
3. The indoor virtual roaming implementation method based on the panorama according to claim 1 or 2, wherein: the image display control in step one-two consists of two equal-sized image display controls;
and the two equal-sized image display controls are loaded on the display end.
4. The indoor virtual roaming implementation method based on panorama according to claim 3, wherein: and in the step two, the display end is a mobile phone display end.
5. The indoor virtual roaming implementation method based on the panorama as claimed in claim 4, wherein: and in the second step, the rotation angle of the x axis of the mobile phone is detected by adopting a built-in gyroscope of the mobile phone.
6. An indoor virtual roaming realization system based on panoramic images is characterized in that: the system is configured to perform a panorama-based indoor virtual roaming implementation method of one of claims 1 to 5.
CN202110564925.9A 2021-05-24 2021-05-24 Indoor virtual roaming realization method and realization system based on panorama Active CN113253843B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110564925.9A CN113253843B (en) 2021-05-24 2021-05-24 Indoor virtual roaming realization method and realization system based on panorama


Publications (2)

Publication Number Publication Date
CN113253843A CN113253843A (en) 2021-08-13
CN113253843B true CN113253843B (en) 2023-05-09

Family

ID=77183875

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110564925.9A Active CN113253843B (en) 2021-05-24 2021-05-24 Indoor virtual roaming realization method and realization system based on panorama

Country Status (1)

Country Link
CN (1) CN113253843B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117234340A (en) * 2023-11-14 2023-12-15 荣耀终端有限公司 Method and device for displaying user interface of head-mounted XR device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228538A (en) * 2016-07-12 2016-12-14 哈尔滨工业大学 Binocular vision indoor orientation method based on logo
JP2019207714A (en) * 2017-05-12 2019-12-05 株式会社コロプラ Information processing method, computer and program

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4148671B2 (en) * 2001-11-06 2008-09-10 ソニー株式会社 Display image control processing apparatus, moving image information transmission / reception system, display image control processing method, moving image information transmission / reception method, and computer program
KR101669926B1 (en) * 2014-02-03 2016-11-09 (주)에프엑스기어 User view point related image processing apparatus and method thereof
KR101675567B1 (en) * 2016-03-29 2016-11-22 주식회사 투아이즈테크 Apparatus and system for acquiring panoramic images, method using it, computer program and computer readable recording medium for acquiring panoramic images
KR20180060236A (en) * 2016-11-28 2018-06-07 엘지전자 주식회사 Mobile terminal and operating method thereof
US10368047B2 (en) * 2017-02-15 2019-07-30 Adone Inc. Six-degree of freedom video playback of a single monoscopic 360-degree video
CN111405262B (en) * 2019-01-02 2021-09-14 中国移动通信有限公司研究院 Viewpoint information generation method, apparatus, system, device and medium




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant