CN111192365A - Virtual scene positioning method based on ARkit and two-dimensional code - Google Patents
- Publication number
- CN111192365A (application number CN201911369087.9A)
- Authority
- CN
- China
- Prior art keywords
- dimensional code
- virtual scene
- arkit
- scene
- obtaining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
- G06K17/0022—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
- G06K17/0025—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a virtual scene positioning method based on an ARkit and a two-dimensional code. From a screenshot of the camera, the color data of the two-dimensional code area is decoded with the ZXing library to obtain the screen coordinates of the code's three positioning points and the data carried by the code. ARkit is then used to obtain the current code-scanning plane, and three rays are cast from the camera position toward the three feature points. Based on ARkit and two-dimensional code technology, the method positions a virtual scene in the real world efficiently and flexibly: it obtains the two-dimensional code data, decodes the code's color data, recomputes it through ARkit to obtain the code's three feature points, locates the code's position in the virtual scene by rotating and computing its coordinates, and anchors the virtual scene to the real world through the inverse rotation and translation, so that the AR scene fits the real world more closely and the position of the virtual scene in the real world is located accurately.
Description
Technical Field
The invention relates to the technical field of internet, in particular to a virtual scene positioning method based on an ARkit and a two-dimensional code.
Background
With the rapid development of AR technology in today's internet society, projecting the virtual world onto the real world with flexible and accurate positioning has become very important. As the internet and mobile internet grow, users' expectations of application software keep rising. Existing virtual scene positioning methods struggle to anchor a virtual scene to the real world flexibly and efficiently, and the usability of the resulting applications is not stable enough.
A flexible and efficient method for anchoring a virtual scene to the real world is therefore needed to improve the usability of the whole application.
Disclosure of Invention
Aiming at the defects in the prior art, the invention designs a virtual scene positioning method based on an ARkit and a two-dimensional code; the virtual scene can be flexibly and efficiently anchored in the real world, and the usability of the whole application is improved.
Technical scheme: to solve the above problems, the invention adopts the following scheme. A virtual scene positioning method based on an ARkit and a two-dimensional code comprises the following steps:
1) according to a screenshot from the camera, decoding the color data of the two-dimensional code area with the ZXing library to obtain the screen coordinates of the code's three positioning points and the data information carried by the code;
2) obtaining the current code-scanning plane with the ARkit technology, casting three rays from the camera position toward the three feature points, and calculating the intersections of the three rays with the scanning plane, namely the positions of the real-world two-dimensional code's three feature points in the ARkit coordinate system;
3) calculating the center position and rotation of the two-dimensional code in the ARkit coordinate system from the coordinates of the three feature points in step 2);
4) requesting the server with the data information carried by the two-dimensional code to obtain the code's position, angle and scale in the virtual scene; subtracting the code's position in the virtual scene from its position in the ARkit coordinate system obtained in step 3) gives the position of the virtual scene's coordinate origin in the ARkit coordinate system; reversely rotating the code's angle in the ARkit coordinate system by its angle in the virtual scene gives the rotation of the virtual scene origin in the ARkit coordinate system; the scale can be calculated in the same way;
5) by dynamically configuring on the server the scene model information corresponding to each two-dimensional code and the code's position information in the virtual scene, the virtual scene is flexibly combined with the real world.
The specific implementation of step 1) is as follows: calculate the screen area Rect from the code-scanning frame of the camera screen UI, rounding its size up to a multiple of 32; read the color data within Rect into a Color32[] array colors; convert the RGB-coded colors into a YUV-coded byte[] buffer; and decode with the decodeWithState method of the ZXing library's MultiFormatReader class to obtain a ZXing Result object, in which the feature points are in the ResultPoints attribute array and the character-string data stored in the two-dimensional code is in the Text attribute.
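The pixel-preparation part of step 1) can be sketched as follows. This is an illustrative Python sketch rather than the patent's implementation (the description implies Unity/C# with ZXing); the function names are mine, and only the luma (Y) component of the YUV buffer matters here, since ZXing-style decoders work on luminance. The standard BT.601 weights are assumed for the RGB-to-luma conversion.

```python
def round_up_to_32(n):
    """Round a capture dimension up to the next multiple of 32,
    as the method does for the screen Rect before reading pixels."""
    return ((n + 31) // 32) * 32

def rgb_to_luma(r, g, b):
    """BT.601 luma for one pixel; this is the Y component of the
    YUV buffer handed to the decoder."""
    return min(255, round(0.299 * r + 0.587 * g + 0.114 * b))

def rgb_buffer_to_luma(colors):
    """Convert a flat list of (r, g, b) tuples into a luma byte buffer."""
    return bytes(rgb_to_luma(r, g, b) for (r, g, b) in colors)
```

In the actual pipeline this buffer would then be passed to ZXing's MultiFormatReader for decoding; the rounding keeps the buffer dimensions friendly to the capture API.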
The specific implementation of step 2) is as follows: add the minimum coordinate of Rect to the ResultPoints to obtain the screen coordinates pts of the two-dimensional code feature points; obtain three rays Rays through the camera's screen-point-to-ray method; obtain a series of planes through ARkit's ARInterface; select the optimal plane data model according to the code's pasting direction (horizontal or vertical) carried in the character-string data obtained in step 1); and calculate the intersections of Rays with the plane to obtain the actual positions of the three feature points of the two-dimensional code, namely the controlPoints positions.
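The ray-plane intersection at the heart of step 2) is standard vector algebra. A minimal Python sketch, using plain tuples in place of ARkit/Unity vector types (all names here are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def ray_plane_intersection(origin, direction, plane_point, plane_normal, eps=1e-9):
    """Intersect a camera ray with a detected plane.
    Returns the hit point, or None if the ray is parallel to the
    plane or the intersection lies behind the ray origin."""
    denom = dot(plane_normal, direction)
    if abs(denom) < eps:
        return None  # ray runs parallel to the plane
    t = dot(plane_normal, sub(plane_point, origin)) / denom
    if t < 0:
        return None  # plane is behind the camera
    return tuple(o + t * d for o, d in zip(origin, direction))
```

Running this once per feature-point ray yields the three controlPoints positions in the ARkit coordinate system.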
The specific implementation of step 3) is as follows: index the lower-left, upper-left and upper-right corners of the three feature points obtained in step 2) as 0, 1 and 2 respectively; calculate the center coordinate center of the two-dimensional code as the average of the lower-left and upper-right corners; calculate the forward direction vforward from the code center to the center of its top edge; obtain the spatial up direction vup (the code's plane normal) as the cross product of (controlPoints[0] - center) and (controlPoints[1] - center); and obtain the rotation quaternion of the two-dimensional code through Quaternion.LookRotation(vforward, vup). Then obtain the actual positions according to the number of two-dimensional codes pasted on site, estimate the codes' virtual-scene coordinates, and configure them on the server.
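The geometric part of step 3) can be sketched as follows, again in illustrative Python rather than the Unity/C# the description implies. The final Quaternion.LookRotation(vforward, vup) call has no stdlib equivalent, so the sketch stops at the two direction vectors it would receive; note that the sign of the cross-product normal depends on the coordinate handedness assumed.

```python
def midpoint(a, b):
    return tuple((x + y) / 2 for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def code_pose(control_points):
    """control_points: [lower_left, upper_left, upper_right] (indexes 0, 1, 2)
    in ARkit world space. Returns (center, vforward, vup): the code center,
    the direction from the center toward the top-edge center, and the plane
    normal from the cross product described in the text."""
    ll, ul, ur = control_points
    center = midpoint(ll, ur)                 # average of the diagonal corners
    vforward = sub(midpoint(ul, ur), center)  # center -> top-edge center
    vup = cross(sub(ll, center), sub(ul, center))
    return center, vforward, vup
```

Feeding vforward and vup to a LookRotation-style constructor then gives the code's rotation quaternion in the ARkit coordinate system.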
The specific implementation of step 4) is as follows: obtain the scene ID of the virtual scene and the two-dimensional code MaID from the data in result.Text; query the server with the MaID to obtain the position and rotation of the two-dimensional code in the virtual scene; obtain the position and rotation of the virtual scene's coordinate-system center in the ARkit coordinate system by reversely translating and rotating the code's position in the ARkit coordinate system; and pull the scene data according to the scene ID, load it and render it to the screen.
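The "reverse translation and rotation" of step 4) can be illustrated with a yaw-only simplification (rotation about the vertical axis, uniform scale) in Python. The function name and the rotation convention are assumptions made for illustration; the actual method works with full ARkit transforms:

```python
import math

def scene_origin_in_arkit(code_pos_ar, code_yaw_ar,
                          code_pos_scene, code_yaw_scene, scale=1.0):
    """Recover the virtual scene origin's pose in ARkit space from the
    code's observed pose in ARkit space and its configured pose in the
    virtual scene. Positions are (x, y, z) tuples; yaws are radians
    about the y axis."""
    # Scene rotation = code yaw in ARkit space minus code yaw in the scene.
    yaw = code_yaw_ar - code_yaw_scene
    # Rotate and scale the code's scene-space offset into ARkit space ...
    sx, sy, sz = code_pos_scene
    rx = (sx * math.cos(yaw) + sz * math.sin(yaw)) * scale
    rz = (-sx * math.sin(yaw) + sz * math.cos(yaw)) * scale
    # ... and subtract it from the code's ARkit position.
    ax, ay, az = code_pos_ar
    return (ax - rx, ay - sy * scale, az - rz), yaw
```

With no rotation and unit scale this degenerates to the plain subtraction the text describes: a code observed at (3, 0, 4) in ARkit space that the server places at (1, 0, 1) in the scene puts the scene origin at (2, 0, 3).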
The specific implementation of step 5) is as follows: according to the actual effect of combining the AR scene with the real world, fine-tune the virtual-scene position of the two-dimensional code stored on the server so that the AR scene fits the real world closely.
Advantageous effects: compared with the prior art, the invention has the following advantages. Based on ARkit and two-dimensional code technology, the method positions a virtual scene in the real world efficiently and flexibly: it obtains the two-dimensional code data, decodes the code's color data, recomputes it through ARkit to obtain the code's three feature points, locates the code's position in the virtual scene by rotating and computing its coordinates, and anchors the virtual scene to the real world through the inverse rotation and translation, so that the AR scene fits the real world more closely and the position of the virtual scene in the real world is located accurately.
Drawings
FIG. 1 is a UI diagram of a code scanning interface of the present invention;
FIG. 2 is a basic structure diagram of the two-dimensional code in the invention.
Detailed Description
The present invention will be further illustrated with reference to the accompanying drawings and specific examples. These examples are carried out on the premise of the technical solution of the present invention; it should be understood that they only illustrate the invention and are not intended to limit its scope.
Example 1
This embodiment 1 provides a virtual scene positioning method based on an ARkit and a two-dimensional code, including the following steps:
step 1): calculate the screen area Rect from the code-scanning frame of the camera screen UI, rounding its size up to a multiple of 32; read the color data within Rect into a Color32[] array colors; convert the RGB-coded colors into a YUV-coded byte[] buffer; and decode with the decodeWithState method of the ZXing library's MultiFormatReader class to obtain a ZXing Result object, in which the feature points are in the ResultPoints attribute array and the character-string data stored in the two-dimensional code is in the Text attribute;
step 2): add the minimum coordinate of Rect to the ResultPoints to obtain the screen coordinates pts of the two-dimensional code feature points; obtain three rays Rays through the camera's screen-point-to-ray method; obtain a series of planes through ARkit's ARInterface; select the optimal plane data model according to the code's pasting direction (horizontal or vertical) carried in the character-string data obtained in step 1); and calculate the intersections of Rays with the plane to obtain the actual positions of the three feature points of the two-dimensional code, namely the controlPoints positions.
Step 3): the three feature points of the two-dimensional code are shown in FIG. 2; index its lower-left, upper-left and upper-right corners as 0, 1 and 2 respectively; calculate the center coordinate center of the two-dimensional code as the average of the lower-left and upper-right corners; calculate the forward direction vforward from the code center to the center of its top edge; obtain the spatial up direction vup (the code's plane normal) as the cross product of (controlPoints[0] - center) and (controlPoints[1] - center); and obtain the rotation quaternion of the two-dimensional code through Quaternion.LookRotation(vforward, vup). Then obtain the actual positions according to the number of two-dimensional codes pasted on site, estimate the codes' virtual-scene coordinates, and configure them on the server;
step 4): acquiring a scene ID and a two-dimensional code MaID of a virtual scene from data in result. Pulling scene data according to the scene ID, loading the scene data and rendering the scene data to a screen;
step 5): according to the actual effect of combining the AR scene with the real world, fine-tune the virtual-scene position of the two-dimensional code stored on the server so that the AR scene fits the real world closely.
According to the method, the position and direction of the two-dimensional code in the ARkit coordinate system (the coordinate system ARkit constructs in the real world) are calculated from the code's feature points, the position and direction of the code in the virtual scene are obtained from the code's text information, and the virtual scene is thereby anchored to the real world.
The above-mentioned embodiments are only preferred embodiments of the present invention and do not limit the scope of the invention or the appended claims; all equivalent changes and modifications made within the spirit and scope of the claimed invention shall fall within the appended claims.
Claims (6)
1. A virtual scene positioning method based on an ARkit and a two-dimensional code is characterized in that: the method comprises the following steps:
1) according to a screenshot from the camera, decoding the color data of the two-dimensional code area with the ZXing library to obtain the screen coordinates of the code's three positioning points and the data information carried by the code;
2) obtaining the current code-scanning plane with the ARkit technology, casting three rays from the camera position toward the three feature points, and calculating the intersections of the three rays with the scanning plane, namely the positions of the real-world two-dimensional code's three feature points in the ARkit coordinate system;
3) calculating the center position and rotation of the two-dimensional code in the ARkit coordinate system from the coordinates of the three feature points in step 2);
4) requesting the server with the data information carried by the two-dimensional code to obtain the code's position, angle and scale in the virtual scene; subtracting the code's position in the virtual scene from its position in the ARkit coordinate system obtained in step 3) gives the position of the virtual scene's coordinate origin in the ARkit coordinate system; reversely rotating the code's angle in the ARkit coordinate system by its angle in the virtual scene gives the rotation of the virtual scene origin in the ARkit coordinate system; the scale can be calculated in the same way;
5) by dynamically configuring on the server the scene model information corresponding to each two-dimensional code and the code's position information in the virtual scene, the virtual scene is flexibly combined with the real world.
2. The virtual scene positioning method based on the ARkit and the two-dimensional code as claimed in claim 1, wherein: the specific implementation of step 1) is as follows: calculate the screen area Rect from the code-scanning frame of the camera screen UI, rounding its size up to a multiple of 32; read the color data within Rect into a Color32[] array colors; convert the RGB-coded colors into a YUV-coded byte[] buffer; and decode with the decodeWithState method of the ZXing library's MultiFormatReader class to obtain a ZXing Result object, in which the feature points are in the ResultPoints attribute array and the character-string data stored in the two-dimensional code is in the Text attribute.
3. The virtual scene positioning method based on the ARkit and the two-dimensional code as claimed in claim 2, wherein: the specific implementation of step 2) is as follows: add the minimum coordinate of Rect to the ResultPoints to obtain the screen coordinates pts of the two-dimensional code feature points; obtain three rays Rays through the camera's screen-point-to-ray method; obtain a series of planes through ARkit's ARInterface; select the optimal plane data model according to the code's pasting direction (horizontal or vertical) carried in the character-string data obtained in step 1); and calculate the intersections of Rays with the plane to obtain the actual positions of the three feature points of the two-dimensional code, namely the controlPoints positions.
4. The virtual scene positioning method based on the ARkit and the two-dimensional code as claimed in claim 3, wherein: the specific implementation of step 3) is as follows: index the lower-left, upper-left and upper-right corners of the three feature points obtained in step 2) as 0, 1 and 2 respectively; calculate the center coordinate center of the two-dimensional code as the average of the lower-left and upper-right corners; calculate the forward direction vforward from the code center to the center of its top edge; obtain the spatial up direction vup (the code's plane normal) as the cross product of (controlPoints[0] - center) and (controlPoints[1] - center); and obtain the rotation quaternion of the two-dimensional code through Quaternion.LookRotation(vforward, vup). Then obtain the actual positions according to the number of two-dimensional codes pasted on site, estimate the codes' virtual-scene coordinates, and configure them on the server.
5. The virtual scene positioning method based on the ARkit and the two-dimensional code as claimed in claim 4, wherein: the specific implementation of step 4) is as follows: obtain the scene ID of the virtual scene and the two-dimensional code MaID from the data in the result of step 2); query the server with the MaID to obtain the position and rotation of the two-dimensional code in the virtual scene; obtain the position and rotation of the virtual scene's coordinate-system center in the ARkit coordinate system by reversely translating and rotating the code's position in the ARkit coordinate system; and pull the scene data according to the scene ID, load it and render it to the screen.
6. The virtual scene positioning method based on the ARkit and the two-dimensional code as claimed in claim 5, wherein: the specific implementation of step 5) is as follows: according to the actual effect of combining the AR scene with the real world, fine-tune the virtual-scene position of the two-dimensional code stored on the server so that the AR scene fits the real world closely.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911369087.9A CN111192365A (en) | 2019-12-26 | 2019-12-26 | Virtual scene positioning method based on ARkit and two-dimensional code |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111192365A (en) | 2020-05-22 |
Family
ID=70709576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911369087.9A | Virtual scene positioning method based on ARkit and two-dimensional code | 2019-12-26 | 2019-12-26 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192365A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113680059A (en) * | 2021-08-31 | 2021-11-23 | 中科锐新(北京)科技有限公司 | Outdoor scene AR game positioning device and method |
CN113680059B (en) * | 2021-08-31 | 2024-05-14 | 中科锐新(北京)科技有限公司 | Outdoor scene AR game positioning device and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WW01 | Invention patent application withdrawn after publication | Application publication date: 20200522 |