CN111354087A - Positioning method and reality presentation device

Info

Publication number
CN111354087A
Application number
CN201811612618.8A
Authority
CN (China)
Prior art keywords
image, images, virtual, real, environment
Legal status
Pending
Priority date
2018-12-24
Filing date
2018-12-24
Publication date
2020-06-30
Other languages
Chinese (zh)
Inventors
周永明, 苏上钦, 吴孟豪, 朱峰森
Current Assignee
Future City Co ltd
Original Assignee
Future City Co ltd
Application filed by Future City Co ltd

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics
    • G06T19/006 Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A positioning method includes capturing a plurality of first images of a real environment and constructing a virtual environment corresponding to the real environment according to the plurality of first images; obtaining, by a reality presentation device, a second image of the real environment; calculating an initial virtual position in the virtual environment corresponding to the second image according to the plurality of first images and the second image; and, when a specific application program of the reality presentation device is launched, displaying, by the reality presentation device, the virtual environment from a perspective of the initial virtual position; wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.

Description

Positioning method and reality presentation device
Technical Field
The present invention relates to a positioning method and a reality presentation device, and more particularly, to a positioning method and a reality presentation device capable of calculating an initial virtual position corresponding to a real position.
Background
With the development and progress of science and technology, the demand for human-computer interaction has gradually increased. Human-computer interaction technologies such as motion-sensing games, virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) are gradually gaining favor in the market due to their physiological and entertainment effects.
A user or player may play video games in a virtual environment that simulates a real environment. In the prior art, for a video game that requires the virtual environment to reproduce a real environment, the user/player starts the video game standing at a predetermined virtual position in the virtual world, which is unrelated to the user/player's position in the real world.
Disclosure of Invention
Therefore, the present invention provides a positioning method and a reality presentation device that calculate an initial virtual position corresponding to the user's real position.
An embodiment of the present invention provides a positioning method, which includes capturing a plurality of first images of a real environment, and constructing a virtual environment corresponding to the real environment according to the plurality of first images; obtaining, by a reality presentation device, a second image of the real environment; calculating an initial virtual position in the virtual environment corresponding to the second image according to the plurality of first images and the second image; and, when a specific application program of the reality presentation device is launched, displaying, by the reality presentation device, the virtual environment from a perspective of the initial virtual position; wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
Another embodiment of the present invention provides a reality presentation apparatus that displays a virtual environment to a user, the virtual environment being constructed from a plurality of first images captured from a real environment, the reality presentation apparatus including an image capture device for capturing a second image from the real environment; a processing unit for performing the steps of: constructing the virtual environment corresponding to the real environment according to the plurality of first images; calculating an initial virtual position corresponding to the second image in the virtual environment according to the plurality of first images and the second image; and a display screen for displaying the virtual environment from a perspective of the initial virtual position when a particular application of the reality presentation device is launched; wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
Another embodiment of the present invention provides a system for displaying a virtual environment to a user, the virtual environment being constructed from a plurality of first images captured from a real environment, comprising a reality presentation device including an image capture device for capturing a second image from the real environment; and a display screen for displaying the virtual environment from a perspective of an initial virtual position when a particular application of the reality presentation device is launched; and a remote computing device for performing the steps of: constructing the virtual environment corresponding to the real environment according to the plurality of first images; calculating the initial virtual position corresponding to the second image in the virtual environment according to the plurality of first images and the second image; wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
Drawings
Fig. 1 is a schematic diagram of a reality presentation apparatus according to an embodiment of the invention.
Fig. 2 is a schematic diagram of the appearance of the reality presentation apparatus of fig. 1.
Fig. 3 is a schematic diagram of a process according to an embodiment of the invention.
FIG. 4 is a diagram of a system according to an embodiment of the invention.
Description of reference numerals:
10, 40: reality presentation device
12, 42: image capturing device
14: processing unit
16, 46: display screen
30: process
302~310: steps
41: system
43: remote computing device
IMG1, IMG2: images
VEN: virtual environment
Detailed Description
Fig. 1 is a schematic diagram of a reality presentation apparatus 10 according to an embodiment of the invention, and fig. 2 is a schematic diagram of its appearance. The reality presentation device 10 may be a virtual reality (VR) device, an augmented reality (AR) device, a mixed reality (MR) device, or an extended reality (XR) device. Unlike conventional reality presentation devices, in addition to constructing a virtual environment, the reality presentation device 10 can calculate a virtual position in the virtual environment that corresponds to a real position in a real environment. The reality presentation apparatus 10 includes an image capturing device 12, a processing unit 14, and a display screen 16. The image capturing device 12 may include a lens and a photosensitive pixel array for capturing images. The images captured by the image capturing device 12 may be two-dimensional (2D) images such as RGB images, or three-dimensional (3D) images including depth information, where the depth information may be obtained, for example, by infrared sensing. The processing unit 14 may be, but is not limited to, an application processor, a microprocessor, or an application-specific integrated circuit (ASIC). The display screen 16 is used to display a real environment, a virtual environment, or a combination thereof to the user/player.
FIG. 3 is a schematic diagram of a process 30 according to an embodiment of the present invention. The process 30 includes the following steps:
Step 302: a plurality of first images of a real environment are captured.
Step 304: a virtual environment corresponding to the real environment is constructed according to the plurality of first images.
Step 306: a second image of the real environment is captured.
Step 308: an initial virtual position corresponding to the second image in the virtual environment is calculated according to the plurality of first images and the second image.
Step 310: when a specific application program of the reality presentation device is launched, the reality presentation device displays the virtual environment from a perspective of the initial virtual position.
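As a minimal sketch, the process 30 may be outlined in code as follows; the function names are hypothetical (the patent defines only the steps themselves), and steps 304 and 308 are elaborated by the sketches later in this description:

```python
# Minimal sketch of process 30, assuming the five steps are available as
# callables. All names here are ours, not the patent's.
def process_30(capture_first_images, build_virtual_env,
               capture_second_image, locate, display):
    imgs1 = capture_first_images()   # step 302: collect first images (off-line)
    ven = build_virtual_env(imgs1)   # step 304: construct the virtual environment
    img2 = capture_second_image()    # step 306: capture the second image (real-time)
    vp_i = locate(imgs1, img2)       # step 308: compute the initial virtual position
    display(ven, vp_i)               # step 310: show VEN from VP_I's perspective
```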
In steps 302 and 304 (which may be performed in an off-line stage), the image capturing device 12 captures a plurality of first images IMG1 from a real environment REN, and a virtual environment VEN corresponding to the real environment REN is constructed according to the plurality of first images IMG1. In an embodiment, the virtual environment VEN may be built by the processing unit 14. The real environment REN may be an office, a living room, a conference room, etc. in real life. The virtual environment VEN may include a plurality of VR images, such as VR 360° images, displayed by the reality presentation device 10, so that when a user/player wears the reality presentation device 10 as shown in fig. 2, the user/player visually receives the plurality of VR images and feels immersed in the virtual environment VEN. In another embodiment, part or all of the plurality of first images IMG1 may be collected by components other than the image capturing device 12, or obtained from a cloud database or the Internet.
In one embodiment, a user/player may move around in the real environment REN while the image capturing device 12 takes multiple photographs, i.e., the plurality of first images IMG1, so that the processing unit 14 can construct the virtual environment VEN corresponding to the real environment REN according to the plurality of first images IMG1. In another embodiment, the user/player may stand at a plurality of predetermined positions in the real environment REN, so that the image capturing device 12 takes photographs at different viewing angles to capture the plurality of first images IMG1, and the processing unit 14 constructs the virtual environment VEN accordingly. The operational details of constructing the virtual environment VEN corresponding to the real environment REN from the plurality of first images IMG1 captured from the real environment REN are known to those skilled in the art and are not repeated herein.
In step 306, after the virtual environment VEN is constructed, the image capturing device 12 captures a second image IMG2 of the real environment REN, which may be performed in a real-time stage. Step 306 may be performed when the user/player starts to enter the virtual environment VEN, for example, when the user/player powers on the reality presentation device 10 or launches a specific software application (such as a video game involving virtual reality).
In step 308, an initial virtual position VPI corresponding to the second image IMG2 in the virtual environment VEN is calculated according to the plurality of first images IMG1 and the second image IMG2.
In one embodiment, step 308 may be performed by processing unit 14.
The operational details of step 308 are known to those skilled in the art. For example, step 308 may be performed by comparing the second image IMG2 with the plurality of first images IMG1 to obtain a plurality of correlation coefficients c between the plurality of first images IMG1 and the second image IMG2.
For example, the plurality of first images IMG1 may include first images IMG1,1~IMG1,N, and the plurality of correlation coefficients c may include correlation coefficients c1~cN. Each correlation coefficient cn represents a quantitative correlation between the first image IMG1,n and the second image IMG2: the larger the correlation coefficient cn, the higher the degree of correlation between the first image IMG1,n and the second image IMG2.
The operational details of obtaining the plurality of correlation coefficients c are not limited. For example, a feature extraction operation may be performed on the plurality of first images IMG1 and on the second image IMG2. The feature extraction operation is known to those skilled in the art and includes feature identification and occurrence counting. For a specific feature, the feature is identified when an object in the image conforms to a specific geometric shape. That is, feature identification determines whether the specific feature appears in the image, and occurrence counting accumulates the number of times the feature appears in the image. Whether a part of the image or an image object in the image conforms to the specific geometric shape may be determined using machine learning or computer vision techniques. If the part or image object does conform to the specific geometric shape, the specific feature is determined to have appeared once.
Feature identification (i.e., determining whether a feature appears in the image) and occurrence counting (i.e., accumulating the number of times the feature appears in the image) may be performed for a plurality of features to obtain a quantization vector, which includes a plurality of occurrence counts corresponding to the plurality of features. For example, after feature identification and occurrence counting are performed on an image for a first feature, a second feature, and a third feature, a quantization vector corresponding to the image is obtained. The quantization vector may be [2, 3, 1], representing that the first feature appears twice in the image, the second feature three times, and the third feature once. For example, the first feature may conform to a circular shape, the second feature to a triangular shape, and the third feature to a rectangular shape.
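As an illustration of the quantization-vector idea, the following minimal Python sketch counts feature occurrences; the shape labels stand in for real detector output, which the description leaves open to machine learning or computer vision techniques:

```python
from collections import Counter

# Minimal sketch, assuming each image object has already been classified by
# shape. Here an "image" is simply the list of shape labels detected in it.
FEATURES = ["circle", "triangle", "rectangle"]  # the K features

def quantization_vector(detected_shapes):
    """Count how many times each feature appears: one entry per feature."""
    counts = Counter(detected_shapes)
    return [counts[f] for f in FEATURES]

# Two circles, three triangles and one rectangle give the vector [2, 3, 1]
# from the example above.
print(quantization_vector(
    ["circle", "triangle", "triangle", "rectangle", "circle", "triangle"]))
# -> [2, 3, 1]
```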
Further details of the feature extraction operation are known to those skilled in the art and are not described herein.
In brief, the feature extraction operation may be performed on the plurality of first images IMG1 with respect to a plurality of features (e.g., K features) to obtain a plurality of first quantization vectors QV1 (which may be done off-line), and on the second image IMG2 to obtain a second quantization vector QV2 (which may be done in real time).
The plurality of first quantization vectors QV1 may include first quantization vectors QV1,1~QV1,N corresponding to the first images IMG1,1~IMG1,N. Each first quantization vector QV1,n includes a plurality of occurrence counts appn,1~appn,K corresponding to the K features, where the occurrence count appn,k represents that the k-th feature appears appn,k times in the first image IMG1,n. Mathematically, the first quantization vector QV1,n may be expressed as QV1,n = [appn,1, …, appn,K].
Likewise, the second quantization vector QV2 includes a plurality of occurrence counts apn1~apnK corresponding to the K features and may be expressed as QV2 = [apn1, …, apnK], where the occurrence count apnk represents that the k-th feature appears apnk times in the second image IMG2.
In one embodiment, step 308 may calculate the correlation coefficients c1~cN corresponding to the first images IMG1,1~IMG1,N. For example, the correlation coefficient cn between the first image IMG1,n and the second image IMG2 may be calculated as cn = (QV1,n^T · QV2) / (|QV1,n| · |QV2|), where (·)^T denotes the transpose operation and |·| denotes the norm operation. The larger the correlation coefficient cn, the more correlated the first image IMG1,n and the second image IMG2.
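This formula is the cosine similarity between two quantization vectors. A minimal sketch (the helper name is ours):

```python
import math

# Minimal sketch of the correlation coefficient
# cn = (QV1,n^T . QV2) / (|QV1,n| * |QV2|),
# i.e. the cosine similarity between two quantization vectors.
def correlation(qv1, qv2):
    dot = sum(a * b for a, b in zip(qv1, qv2))
    norm1 = math.sqrt(sum(a * a for a in qv1))
    norm2 = math.sqrt(sum(b * b for b in qv2))
    if norm1 == 0 or norm2 == 0:   # guard: no features detected at all
        return 0.0
    return dot / (norm1 * norm2)

print(correlation([2, 3, 1], [4, 6, 2]))  # -> 1.0 (same occurrence pattern)
print(correlation([2, 3, 1], [0, 0, 5]))  # -> ~0.27 (weakly correlated)
```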
In one embodiment, step 308 may screen out n filtered images IMGk from the N first images IMG1,1~IMG1,N (where N is greater than n and n is greater than 2). The n correlation coefficients ck corresponding to the n filtered images IMGk are all greater than a specific threshold TH (e.g., each correlation coefficient ck may be greater than 0.8). In addition, the n filtered images IMGk correspond to n virtual positions VPk in the virtual environment VEN, the n virtual positions VPk correspond to n real positions RLk in the real environment REN, and the reality presentation device 10 captured the n filtered images IMGk at the n real positions RLk.
In addition, step 308 may select a specific filtered image IMGk* from the n filtered images IMGk and obtain a specific virtual position VPk* corresponding to the specific filtered image IMGk*. The virtual position VPk* corresponds to a real position RLk* in the real environment REN, and the reality presentation device 10 or the image capturing device 12 captured the first image IMGk* at the real position RLk*. Further, step 308 may calculate, according to the specific filtered image IMGk* (one of the at least one filtered image IMGk) and the second image IMG2, a relative virtual position RVPk* relative to the virtual position VPk* (one of the at least one virtual position VPk). The manner of selecting the specific filtered image IMGk* from the at least one filtered image IMGk is not limited; for example, the numbers of inliers (data that can be described by a model) and outliers (data that deviates from the normal range and does not fit the model) may be counted and compared to select the specific filtered image IMGk*. How to calculate the relative virtual position RVPk* is known to those skilled in the art and is not described herein. Step 308 may thus calculate the initial virtual position VPI as VPI = VPk* + RVPk*.
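A minimal sketch of this screening-and-locating embodiment, reusing correlation() from the sketch above. count_inliers and relative_pose are injected stand-ins: the description names inlier/outlier counting and relative-position calculation without specifying either, so the stubs below exist only to make the demo run:

```python
TH = 0.8  # example threshold from the description

def initial_virtual_position(candidates, qv2, img2, count_inliers, relative_pose):
    """candidates: list of (first_image, quantization_vector, virtual_position)."""
    # Screening: keep images whose correlation with IMG2 exceeds TH.
    filtered = [(img, vp) for img, qv, vp in candidates
                if correlation(qv, qv2) > TH]
    # Select IMGk*: the filtered image with the most inliers against IMG2.
    best_img, best_vp = max(filtered, key=lambda c: count_inliers(c[0], img2))
    rvp = relative_pose(best_img, img2)                # RVPk* relative to VPk*
    return tuple(a + b for a, b in zip(best_vp, rvp))  # VPI = VPk* + RVPk*

# Toy demo with trivial stubs and 2-D coordinates.
cands = [("imgA", [2, 3, 1], (0.0, 0.0)), ("imgB", [2, 3, 0], (1.0, 2.0))]
print(initial_virtual_position(
    cands, qv2=[4, 6, 2], img2="query",
    count_inliers=lambda a, b: 10,            # stub: constant inlier count
    relative_pose=lambda a, b: (0.5, -0.5)))  # stub: fixed relative offset
# -> (0.5, -0.5)
```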
In another embodiment, step 308 may select a filtered image IMGn* from the first images IMG1,1~IMG1,N such that the correlation coefficient cn* corresponding to the filtered image IMGn* is the maximum of the correlation coefficients c1~cN. Specifically, after the correlation coefficients c1~cN are calculated, the processing unit 14 may sort the correlation coefficients c1~cN and select the coefficient cn* such that cn* = max(c1, …, cN). In brief, step 308 selects the filtered image IMGn* corresponding to the correlation coefficient cn*, where the filtered image IMGn* is the first image among IMG1,1~IMG1,N most correlated with the second image IMG2.
Then, step 308 may obtain a virtual position VPn* in the virtual environment VEN according to the first image IMGn*. The virtual position VPn* corresponds to a real position RLn* in the real environment REN, and the reality presentation device 10 or the image capturing device 12 captured the first image IMGn* at the real position RLn*. The manner in which the processing unit 14 obtains the virtual position VPn* is not limited. In one embodiment, the user/player may stand at a predetermined real position RL', and step 308 may calculate a virtual position VP' corresponding to the real position RL'. If a first image IMG' captured at the real position RL' is the first image most correlated with the second image IMG2, step 308 may take the virtual position VP' as the virtual position VPn*.
Step 308 may then calculate the initial virtual position VPI corresponding to the second image IMG2 in the virtual environment VEN according to the first image IMGn* and the second image IMG2. The manner of calculating the initial virtual position VPI is not limited. In one embodiment, step 308 may calculate a relative virtual position RVP relative to the virtual position VPn* according to the first image IMGn* and the second image IMG2, and calculate the initial virtual position VPI according to the virtual position VPn* and the relative virtual position RVP. Assuming that the virtual position VPn*, the relative virtual position RVP, and the initial virtual position VPI are represented as vectors of two-dimensional coordinates, the initial virtual position VPI may be expressed as VPI = VPn* + RVP.
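A minimal sketch of this max-correlation embodiment, again reusing correlation() and the injected relative_pose stand-in:

```python
def initial_virtual_position_argmax(candidates, qv2, img2, relative_pose):
    """candidates: list of (first_image, quantization_vector, virtual_position)."""
    # IMGn*: the first image whose correlation coefficient is max(c1..cN).
    best_img, _, best_vp = max(candidates, key=lambda c: correlation(c[1], qv2))
    rvp = relative_pose(best_img, img2)                # RVP relative to VPn*
    return tuple(a + b for a, b in zip(best_vp, rvp))  # VPI = VPn* + RVP
```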
In other words, no matter where the user/player stands in the real environment REN, the reality presentation device 10 can obtain the virtual position in the virtual environment VEN that corresponds to that real position.
For a video game involving a virtual environment VEN corresponding to a real environment REN, the user/player may power on the reality presentation device 10 at any real position in the real environment REN, such as a real position RL, and the processing unit 14 may calculate the initial virtual position VPI corresponding to the real position RL (where the user/player and the reality presentation device 10 are located). The reality presentation device 10 may generate VR 360° images based on the initial virtual position VPI, so that the user/player visually receives the virtual environment VEN viewed from the perspective of the initial virtual position VPI, just as the real environment REN would be viewed from the perspective of the real position RL.
In step 310, when the user/player launches a specific software application, the display screen 16 may display the virtual environment VEN from the perspective of the initial virtual position VPI. The specific software application may be a video game involving virtual reality. In other words, the display screen 16 may display the plurality of VR images corresponding to the virtual environment VEN from the perspective of the initial virtual position VPI. The initial virtual position VPI corresponds to an initial real position RLI in the real environment REN, and the reality presentation device 10 or the image capturing device 12 captured the second image IMG2 at the real position RLI. Once the user/player opens the specific software application, the user/player sees the plurality of VR images of the virtual environment VEN and can be immersed in the virtual environment VEN as if viewing from the real position RLI in the real environment REN.
By executing the process 30, the virtual environment VEN experienced by the user/player closely matches the real environment REN. Thus, the user/player's immersion can be improved.
It should be noted that the foregoing embodiments illustrate the concept of the present invention, and those skilled in the art may make various modifications without departing from its scope. For example, the process 30 may be performed solely by the reality presentation device 10; that is, steps 304 and 308 may be executed by the processing unit 14 of the reality presentation device 10. This is not a limitation, however, and the process 30 may also be executed by a system.
Fig. 4 is a schematic diagram of a system 41 according to an embodiment of the invention. The system 41 includes a reality presentation device 40 and a remote computing device 43. The reality presentation device 40 and the remote computing device 43 may be connected to each other through a wired connection or a wireless interface. The reality presentation device 40 includes an image capture device 42 and a display screen 46. The remote computing device 43 may be a cloud computing device, an edge computing device, or a combination thereof, and both the cloud computing device and the edge computing device may be a cluster of computers or a server. The process 30 can be executed by the system 41, wherein the steps 302 and 306 can be executed by the image capturing device 42, the steps 304 and 308 can be executed by the remote computing device 43, and the step 310 can be executed by the display screen 46, and still fall within the scope of the present invention.
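A minimal sketch of this division of labor, under the same hypothetical names as the earlier sketches (remote_call stands in for the unspecified wired or wireless link between the two devices):

```python
# The reality presentation device captures images (steps 302, 306) and
# displays (step 310); the remote computing device builds the environment
# and computes the position (steps 304, 308).
def run_system_41(device, remote_call):
    imgs1 = device.capture_first_images()          # step 302 (device)
    ven = remote_call("build_virtual_env", imgs1)  # step 304 (remote)
    img2 = device.capture_second_image()           # step 306 (device)
    vp_i = remote_call("locate", imgs1, img2)      # step 308 (remote)
    device.display(ven, vp_i)                      # step 310 (device)
```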
In summary, the present invention calculates an initial virtual position corresponding to the real position where the user/player and the reality presentation device are located. In contrast to the prior art, the user/player may start a video game involving a virtual environment from anywhere in the real environment and see the virtual space from the perspective of the virtual position corresponding to that real position. In other words, after the user/player puts on the reality presentation device, the virtual environment VEN seen by the user/player closely matches the real environment REN. Thus, the user/player's immersion can be improved.
The above-mentioned embodiments are merely preferred embodiments of the present invention, and all equivalent changes and modifications made by the claims of the present invention should be covered by the scope of the present invention.

Claims (18)

1. A method of positioning, comprising:
collecting a plurality of first images of a real environment, and constructing a virtual environment corresponding to the real environment according to the plurality of first images;
a reality presentation device obtains a second image of the real environment;
calculating an initial virtual position in the virtual environment corresponding to the second image according to the plurality of first images and the second image; and
when a specific application program of the reality presentation device is launched, the reality presentation device displays the virtual environment from a perspective of the initial virtual position;
wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
2. The positioning method of claim 1, further comprising:
comparing the second image with the plurality of first images to obtain a plurality of correlation coefficients between the plurality of first images and the second image;
obtaining at least one filtered image from the plurality of first images such that a correlation coefficient corresponding to the at least one filtered image is greater than a specific threshold, wherein the at least one filtered image corresponds to at least one virtual location in the virtual environment, the at least one virtual location corresponds to at least one real location in the real environment, and the reality presentation device captures the at least one filtered image at the at least one real location; and
calculating the initial virtual position according to the at least one filtered image and the second image.
3. The method of claim 2, wherein the step of comparing the second image with the plurality of first images to obtain the plurality of correlation coefficients between the plurality of first images and the second image comprises:
obtaining a plurality of first quantization vectors corresponding to the plurality of first images;
obtaining a second quantization vector corresponding to the second image; and
calculating the plurality of correlation coefficients between the plurality of first quantized vectors and the second quantized vector.
4. The method of claim 3, wherein obtaining the plurality of first quantized vectors corresponding to the plurality of first images comprises:
performing a feature extraction operation on the plurality of first images to obtain the plurality of first quantization vectors;
wherein a first quantization vector corresponding to a first image of the plurality of first images indicates a plurality of occurrence counts of a plurality of features in the first image.
5. The method of claim 3, wherein obtaining the second quantized vector corresponding to the second image comprises:
performing a feature extraction operation on the second image to obtain the second quantization vector;
wherein the second quantization vector indicates a plurality of occurrence counts of a plurality of features in the second image.
6. The method of claim 2, wherein calculating the initial virtual position in the virtual environment corresponding to the second image based on the at least one filtered image and the second image comprises:
calculating a relative virtual position relative to a virtual location of the at least one virtual location according to the at least one filtered image and the second image; and
calculating the initial virtual position corresponding to the second image according to the virtual location and the relative virtual position.
7. A reality presentation device that displays a virtual environment to a user, the virtual environment being constructed from a plurality of first images captured from a real environment, the reality presentation device comprising:
an image capturing device for capturing a second image from the real environment;
a processing unit for performing the steps of:
constructing the virtual environment corresponding to the real environment according to the plurality of first images; and
calculating an initial virtual position in the virtual environment corresponding to the second image according to the plurality of first images and the second image; and
a display screen for displaying the virtual environment from a perspective of the initial virtual position when a particular application of the reality presentation device is launched;
wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
8. The reality presentation device of claim 7, wherein the processing unit is further configured to perform the steps of:
comparing the second image with the plurality of first images to obtain a plurality of correlation coefficients between the plurality of first images and the second image;
obtaining at least one filtered image from the plurality of first images such that a correlation coefficient corresponding to the at least one filtered image is greater than a specific threshold, wherein the at least one filtered image corresponds to at least one virtual location in the virtual environment, the at least one virtual location corresponds to at least one real location in the real environment, and the reality presentation device captures the at least one filtered image at the at least one real location; and
calculating the initial virtual position according to the at least one filtered image and the second image.
9. The reality presentation device of claim 8, wherein the processing unit is further configured to perform the following steps to compare the second image with the plurality of first images to obtain the plurality of correlation coefficients between the plurality of first images and the second image:
obtaining a plurality of first quantization vectors corresponding to the plurality of first images;
obtaining a second quantization vector corresponding to the second image; and
calculating the plurality of correlation coefficients between the plurality of first quantized vectors and the second quantized vector.
10. The reality presentation device of claim 9, wherein the processing unit is further configured to obtain the plurality of first quantized vectors corresponding to the plurality of first images by:
performing a feature extraction operation on the plurality of first images to obtain the plurality of first quantization vectors;
wherein a first quantization vector corresponding to a first image of the plurality of first images indicates a plurality of occurrence counts of a plurality of features in the first image.
11. The reality presentation device of claim 9, wherein the processing unit is further configured to obtain the second quantized vector corresponding to the second image by:
performing a feature extraction operation on the second image to obtain the second quantization vector;
wherein the second quantization vector indicates a plurality of occurrence counts of a plurality of features in the second image.
12. The reality presentation device of claim 8, wherein the processing unit is further configured to perform the following steps to calculate the initial virtual position in the virtual environment corresponding to the second image according to the at least one filtered image and the second image:
calculating a relative virtual position relative to a virtual location of the at least one virtual location according to the at least one filtered image and the second image; and
calculating the initial virtual position corresponding to the second image according to the virtual location and the relative virtual position.
13. A system for displaying a virtual environment to a user, the virtual environment being constructed from a plurality of first images captured from a real environment, comprising:
a reality presentation device comprising:
an image capturing device for capturing a second image from the real environment; and
a display screen for displaying the virtual environment from a perspective of an initial virtual position when a particular application of the reality presentation device is launched; and
a remote computing device configured to perform the steps of:
constructing the virtual environment corresponding to the real environment according to the plurality of first images; and
calculating the initial virtual position corresponding to the second image in the virtual environment according to the plurality of first images and the second image;
wherein the initial virtual position corresponds to an initial real position in the real environment, and the reality presentation device captures the second image at the initial real position.
14. The system of claim 13, wherein the remote computing device is further configured to perform the steps of:
comparing the second image with the plurality of first images to obtain a plurality of correlation coefficients between the plurality of first images and the second image;
obtaining at least one filtered image from the plurality of first images such that a correlation coefficient corresponding to the at least one filtered image is greater than a specific threshold, wherein the at least one filtered image corresponds to at least one virtual location in the virtual environment, the at least one virtual location corresponds to at least one real location in the real environment, and the reality presentation device captures the at least one filtered image at the at least one real location; and
calculating the initial virtual position according to the at least one filtered image and the second image.
15. The system of claim 14, wherein the remote computing device is further configured to perform the following steps to compare the second image with the plurality of first images to obtain the plurality of correlation coefficients between the plurality of first images and the second image:
obtaining a plurality of first quantization vectors corresponding to the plurality of first images;
obtaining a second quantization vector corresponding to the second image; and
calculating the plurality of correlation coefficients between the plurality of first quantized vectors and the second quantized vector.
16. The system of claim 15, wherein the remote computing device is further configured to perform the following steps to obtain the plurality of first quantized vectors corresponding to the plurality of first images:
performing a feature extraction operation on the plurality of first images to obtain the plurality of first quantization vectors;
wherein a first quantization vector corresponding to a first image of the plurality of first images indicates a plurality of occurrence counts of a plurality of features in the first image.
17. The system of claim 15, wherein the remote computing device is further configured to perform the following to obtain the second quantized vector corresponding to the second image:
performing a feature extraction operation on the second image to obtain the second quantization vector;
wherein the second quantization vector indicates a plurality of occurrence counts of a plurality of features in the second image.
18. The system of claim 14, wherein the remote computing device is further configured to perform the following steps to calculate the initial virtual location in the virtual environment corresponding to the second image based on the at least one filtered image and the second image:
calculating a relative virtual position relative to a virtual location of the at least one virtual location according to the at least one filtered image and the second image; and
calculating the initial virtual position corresponding to the second image according to the virtual location and the relative virtual position.
CN201811612618.8A 2018-12-24 2018-12-24 Positioning method and reality presentation device Pending CN111354087A (en)

Priority Applications (1)

Application Number: CN201811612618.8A
Priority Date: 2018-12-24
Filing Date: 2018-12-24
Title: Positioning method and reality presentation device

Publications (1)

Publication Number: CN111354087A (en)
Publication Date: 2020-06-30

Family

ID=71196846

Family Applications (1)

Application Number: CN201811612618.8A
Status: Pending (published as CN111354087A (en))
Priority Date: 2018-12-24
Filing Date: 2018-12-24

Country Status (1)

Country Link
CN (1) CN111354087A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050190972A1 (en) * 2004-02-11 2005-09-01 Thomas Graham A. System and method for position determination
CN101174332A (en) * 2007-10-29 2008-05-07 张建中 Method, device and system for interactively combining real-time scene in real world with virtual reality scene
US20090135178A1 (en) * 2007-11-22 2009-05-28 Toru Aihara Method and system for constructing virtual space
US20130187905A1 (en) * 2011-12-01 2013-07-25 Qualcomm Incorporated Methods and systems for capturing and moving 3d models and true-scale metadata of real world objects
US20180316877A1 (en) * 2017-05-01 2018-11-01 Sensormatic Electronics, LLC Video Display System for Video Surveillance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200630