CN115761190A - Multi-user augmented reality photo browsing method and system based on scene mapping - Google Patents


Info

Publication number
CN115761190A
Authority
CN
China
Prior art keywords
augmented reality
information
equipment
scene
mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211487249.0A
Other languages
Chinese (zh)
Inventor
彭慧
黄金成
李涛勇
王柯圆
彭小峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202211487249.0A
Publication of CN115761190A
Legal status: Pending

Abstract

The invention discloses a multi-user augmented reality photo browsing method and system based on scene mapping. The method comprises the following steps: acquiring a mixed reality image and establishing an augmented reality photo album; performing scene mapping processing on the photos in the augmented reality photo album based on the AR device to construct a virtual scene; and displaying the photos and the virtual scene in the augmented reality photo album on the AR device. With the method and the system, an augmented reality photo can be mapped according to the focal length of the AR device camera and the coordinate transformation matrix, avoiding problems such as mismatched virtual model positions that different AR devices tend to produce during multi-user interaction. The multi-user augmented reality photo browsing method and system based on scene mapping can be widely applied in the technical field of augmented reality.

Description

Multi-user augmented reality photo browsing method and system based on scene mapping
Technical Field
The invention relates to the technical field of augmented reality, and in particular to a multi-user augmented reality photo browsing method and system based on scene mapping.
Background
Augmented Reality (AR) is a technology in which a computer fuses and superimposes generated virtual images or information onto a real scene captured by a camera in real time, and allows interaction with that scene. Virtual information is applied to the real world and perceived by the human senses, producing a sensory experience beyond reality. With the development of AR technology, more and more AR devices can shoot augmented reality photos, allowing users to save interesting scenes that combine real and virtual content. However, the existing storage and browsing modes for augmented reality photos are relatively limited: a photo can only be browsed again on the original AR device that shot it, which makes it difficult for users to relive the interactive experience captured in the photo. An augmented reality photo preserves information about both real and virtual objects, yet existing techniques need multiple images to restore an augmented reality scene; a single image shot by a user cannot restore the whole photographed scene, which prevents recording mixed reality content with a single photo. Moreover, current augmented reality photos are difficult to display with the same viewing effect on different AR devices, which is unfavorable for multiple users viewing simultaneously.
Disclosure of Invention
In order to solve the above technical problems, the invention aims to provide a multi-user augmented reality photo browsing method and system based on scene mapping that can map an augmented reality photo according to the focal length of the AR device camera and the coordinate transformation matrix, avoiding problems such as mismatched virtual model positions that different AR devices tend to produce during multi-user interaction.
The first technical solution adopted by the invention is a multi-user augmented reality photo browsing method based on scene mapping, comprising the following steps:
acquiring a mixed reality image and establishing an augmented reality photo album;
performing scene mapping processing on the photos in the augmented reality photo album based on the AR device to construct a virtual scene;
and displaying the photos and the virtual scene in the augmented reality photo album on the AR device.
Further, the step of acquiring the mixed reality image and establishing the augmented reality photo album specifically includes:
photographing the real object through the camera of the AR device to obtain a mixed reality image;
obtaining key node information of the mixed reality image through the AR device;
converting the file format of the mixed reality image to obtain an augmented reality image;
and linking the augmented reality image with the corresponding key node information to establish the augmented reality photo album.
Further, the key node information includes spatial grid information, pose information of the AR device camera, virtual object information, and shooting time information, and the step of obtaining the key node information of the mixed reality image through the AR device specifically includes:
constructing spatial grid information matched with the real scene based on the depth sensor of the AR device and SLAM motion tracking;
calculating the pose information of the AR device camera based on the gyroscope and accelerometer of the AR device, wherein the pose information comprises the coordinate transformation matrix and internal reference matrix of the mixed reality image;
and obtaining the virtual object information and shooting time information of the mixed reality image based on the AR device.
Further, the step of performing scene mapping processing on the photos in the augmented reality photo album based on the AR device to construct a virtual scene specifically includes:
the user selecting a required mapping mode based on the AR device;
restoring the two-dimensional pixel coordinates of the photos in the augmented reality photo album, based on the mapping mode selected by the user, to obtain the three-dimensional coordinate data of the virtual object information;
and positioning the photos in the augmented reality photo album relative to the AR device according to the three-dimensional coordinate data of the virtual object information, and constructing the virtual scene.
Further, the mapping modes the user can select based on the AR device specifically include original mapping and self-mapping, wherein:
the original mapping restores, for the user, the virtual object information of the photos in the augmented reality photo album to the original photographing scene, without changing the positions of the virtual objects in the original real scene;
the self-mapping restores, for the user, the virtual object information of the photos in the augmented reality photo album to the current spatial position of the AR device, updating the positions of all virtual objects into the existing real environment.
Further, the step of restoring the two-dimensional pixel coordinates of the photos in the augmented reality photo album based on the mapping mode selected by the user to obtain the three-dimensional coordinate data of the virtual object information specifically includes:
calculating the shooting position coordinate information of the photo in the augmented reality photo album and the focal length information of the AR device camera from the pose information of the AR device camera, based on the mapping mode selected by the user;
performing a linear transformation on the two-dimensional pixel coordinates of the virtual object information of the photo in the augmented reality photo album to obtain a scale coefficient;
determining the real projection direction corresponding to the two-dimensional pixel coordinates of the virtual object information according to the scale coefficient, the shooting position coordinate information, and the focal length information of the AR device camera;
and constructing the three-dimensional coordinate data of the virtual object information by combining the spatial grid information, matched with the real scene, that corresponds to the real projection direction of the two-dimensional pixel coordinates of the virtual object information of the photo.
Further, the method also includes uploading the photos and virtual scenes in the augmented reality photo album to the cloud and establishing a sharing connection relationship among different AR devices.
The second technical solution adopted by the invention is a multi-user augmented reality photo browsing system based on scene mapping, comprising:
a shooting module for acquiring a mixed reality image and establishing an augmented reality photo album;
a mapping module for performing scene mapping processing on the photos in the augmented reality photo album based on the AR device to construct a virtual scene;
and a display module for displaying the photos and the virtual scenes in the augmented reality photo album on the AR device.
The method and the system have the following beneficial effects: an augmented reality photo album is constructed from augmented reality images, and scene mapping restores the virtual information of an augmented reality photo to the displayed scene; by acquiring the camera focal length, coordinate transformation matrix, and related information of each augmented reality photo, the virtual object information displayed on all AR devices is guaranteed to be located at the same position; and the augmented reality photos and their corresponding virtual scene information are encoded and uploaded to a cloud server, establishing a sharing connection relationship among different AR devices, thereby avoiding problems such as mismatched virtual model positions that different AR devices tend to produce during multi-user interaction.
Drawings
FIG. 1 is a flowchart illustrating steps of a multi-user augmented reality photo browsing method based on scene mapping according to the present invention;
FIG. 2 is a block diagram of the multi-user augmented reality photo browsing system based on scene mapping according to the present invention;
fig. 3 is a schematic flow chart of image uploading of an AR device in an embodiment of the present invention;
fig. 4 is a schematic flow chart of sending an augmented reality picture to an AR device in the embodiment of the present invention;
FIG. 5 is a schematic diagram of a specific process for solving three-dimensional coordinates of a virtual object in a physical space according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating an original mapping of the multi-user augmented reality photo browsing method based on scene mapping according to the present invention;
FIG. 7 is a diagram illustrating a self-mapping of the multi-user augmented reality photo browsing method based on scene mapping according to the present invention;
reference numerals: 1. a lamp in the real scene; 2. a sofa in the real scene; 3. a vase in the virtual scene; 4. a person in the real scene; 5. an ice cream bar in the virtual scene; 6. a crown in the virtual scene.
Detailed Description
The invention is described in further detail below with reference to the figures and the specific embodiments. The step numbers in the following embodiments are provided only for convenience of illustration, the order between the steps is not limited at all, and the execution order of each step in the embodiments can be adapted according to the understanding of those skilled in the art.
Referring to fig. 1, the invention provides a multi-user augmented reality photo browsing method based on scene mapping, which comprises the following steps:
s1, acquiring an augmented reality image based on AR equipment;
specifically, each time the AR device is started, the current spatial scene needs to be rescanned, so that spatial position information of the AR device needs to be calculated by using a posture of a depth sensor and a camera of the AR device, the AR device matches a point in the real world with a pixel point of each frame of a picture of the camera, a pose of the camera is obtained by using two inertial measurement units of an accelerometer and a gyroscope, a moving process of the AR device is tracked by a vision system and is recorded for one time, after the moving is finished, recording is performed for the second time, scene changes in each frame are recorded in the moving process of the AR device, a spatial grid can be determined by a distance of the scene changes, depth information of a real object in the scene from the AR device is obtained according to the depth sensor, further spatial position information is obtained, the AR device can add any virtual article information in the current scene, the virtual article information and objects in the real scene jointly form an image displayed by the AR device, the AR device calls a main camera to capture the current image frame and records a coordinate transformation matrix and internal parameter matrix information of the current image;
the AR equipment calls the main camera to capture the current image frame and stores and records a coordinate transformation matrix and an internal reference matrix of the current image frame as shown in the following formula:
$$K_1=\begin{bmatrix} R & T \\ 0 & 1 \end{bmatrix} \qquad K_2=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
In the above formulas, $K_1$ represents the coordinate transformation matrix, composed of the rotation component $R$ and the translation component $T$ defined below, and $K_2$ represents the internal reference matrix;
the camera comprises a camera, an internal reference matrix and a parameter acquisition unit, wherein the internal reference matrix of the camera records parameters of the camera, namely the focal length of the camera, for converting three-dimensional space information into a two-dimensional image; the coordinate transformation matrix stores the camera position information at the shooting moment; the internal reference matrix is obtained by parameters when the camera shoots, and the coordinate transformation matrix is obtained by the inertial sensor and has the function of restoring virtual object information in different devices.
S2, uploading the augmented reality image acquired by the AR device and its corresponding information to the cloud server and establishing an augmented reality photo album;
Specifically, referring to fig. 3, the information corresponding to the augmented reality image includes the virtual object resources, the coordinate transformation matrix of the augmented reality image, the internal reference matrix, and the spatial grid information acquired by the AR device. Because network packet loss may occur while uploading an image to the cloud server, a coding protocol needs to be formulated for image transmission: after the AR device reads the image from memory, it converts the image into a Base64 character string, calculates the length of the Base64 string, and sends that length to the cloud server; the AR device then judges whether the length was transmitted successfully. If the transmission succeeded, the Base64 string is segmented and sent to the cloud server 100 kB at a time; if the transmission failed, the length of the string is retransmitted to the cloud server;
the method comprises the steps that the cloud service end receives the length of a Base64 character string sent by the AR equipment end and the Base64 character string divided into 100kB, the Base64 character string is spliced again according to the length of the Base64 character string to obtain the Base64 character string of a picture sent by the AR equipment, and the cloud service end restores the obtained Base64 character string into the picture through a decoder and stores the picture into the cloud service end;
After receiving the picture from the AR device, the cloud server continues to request the information corresponding to the picture from the AR device. The cloud server stores each received RGB image independently as an augmented reality photo; by establishing a connection relationship, each augmented reality photo can find its corresponding information. The virtual object resources, the coordinate transformation matrix of the augmented reality image, the internal reference matrix, and the spatial grid information are stored as additional information of the augmented reality photo and linked to it, so an AR device can obtain the additional information from the augmented reality photo;
If the spatial environment of an augmented reality photo does not change greatly, its corresponding spatial grid information does not change either; what changes are the coordinate transformation matrix of the augmented reality image and the pose of the AR device. The spatial grid information of augmented reality images shot from different angles and positions in the same scene is therefore consistent, so the cloud server stores the spatial grid information corresponding to the first augmented reality image and associates subsequently shot images with that historical spatial grid information. Augmented reality photos uploaded to the cloud server are organized into an augmented reality photo album by shooting time by default; a user can also customize the album by date or by manually setting tags, making it convenient to find the corresponding augmented reality photo.
S3, the user loads the augmented reality photo from the augmented reality photo album on the cloud server onto any AR device;
Specifically, referring to fig. 4, when the user needs to browse an augmented reality photo, the user may use any AR device (not necessarily the AR device that originally shot the photo) to load the augmented reality photo from the cloud server;
the method comprises the steps that the cloud server encodes an augmented reality photo, searches all client sides connected to the cloud server, determines an IP address of AR equipment needing to send the augmented reality photo, sends encoded information to the AR equipment, judges whether the information is successfully sent to the AR equipment, if yes, the cloud server sends a stop bit to the AR equipment, represents that the augmented reality photo is sent, and if not, sends encoding information to the AR equipment again.
S4, performing scene mapping on the virtual information of the augmented reality photo according to the user's interaction mode;
Specifically, referring to fig. 5, after receiving an augmented reality photo from the cloud server, the AR device can further interact with it and map the virtual object information of the photo into the physical real world. According to the user's display requirements for the virtual photo, the mapping mode is divided into original mapping and self-mapping. Both modes restore the three-dimensional coordinates of a virtual object in the physical real world from its two-dimensional pixel coordinates; the AR device then generates the virtual object model at those three-dimensional coordinates and renders it in the display unit of the current AR device.
S41, calculating the shooting position coordinates of the augmented reality photo and the focal length of the AR device camera according to the internal reference matrix and the coordinate transformation matrix;
specifically, the calculation process is as follows:
$$K_2=\begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$
In the above formula, $f_x$ denotes the focal length in pixels along the x-axis, $f_y$ denotes the focal length in pixels along the y-axis, and $u_0$ and $v_0$ represent the principal point of the camera;
$$C=-R^{\mathsf{T}}T$$
In the above formula, $R$ represents the rotation component, $T$ represents the translation component, $R^{\mathsf{T}}$ represents the camera pose, and $C$ represents the position of the camera center in the world coordinate system, i.e., the location where the augmented reality photo was taken.
S42, mapping the two-dimensional pixel coordinates onto the augmented reality photo and solving for the scale coefficient;
Specifically, the two-dimensional pixel coordinates need a linear transformation that moves the reference point from the upper-left corner of the augmented reality photo to the center point of the picture; with the image center as the reference point, the linearly transformed pixel coordinates are scaled down proportionally to obtain a scale coefficient between -1 and 1.
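The linear transformation just described reduces to the following sketch, assuming the conventions above (top-left pixel origin, image centre as reference point); function and variable names are our own.

```python
def normalise_pixel(u: float, v: float, width: int, height: int):
    """Linear transformation from top-left-origin pixel coordinates to
    centre-origin scale coefficients in [-1, 1]."""
    sx = (u - width / 2.0) / (width / 2.0)
    sy = (v - height / 2.0) / (height / 2.0)
    return sx, sy

print(normalise_pixel(960, 270, 1280, 720))   # (0.5, -0.25)
```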
S43, determining the real projection direction corresponding to the two-dimensional pixel according to the scale coefficient, the shooting position coordinates, and the focal length of the AR device camera;
Specifically, the spatial position of the AR device when the augmented reality photo was shot is calculated from the focal length of the AR camera and the shooting position coordinates, and the position of the two-dimensional pixel on the imaging plane of the AR device is calculated using the scale coefficient. The real projection direction of the two-dimensional pixel can then be determined from its position on the imaging plane together with the spatial position of the AR device.
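The following sketch turns a scale coefficient into a world-space projection ray under the standard pinhole model; it assumes the principal point sits at the image centre, which the centre-based scale coefficient above implies, and the variable names are our own.

```python
import numpy as np

def projection_ray(sx: float, sy: float, fx: float, fy: float,
                   width: int, height: int, R: np.ndarray, C: np.ndarray):
    """Convert a scale coefficient (sx, sy) into a world-space ray from the
    camera centre C, given rotation R (world -> camera) and focal lengths."""
    # Position of the pixel on the imaging plane, in camera coordinates;
    # sx * (width / 2) recovers the centred pixel offset.
    p_cam = np.array([sx * (width / 2.0) / fx,
                      sy * (height / 2.0) / fy,
                      1.0])
    d_world = R.T @ p_cam                   # rotate the direction into world coordinates
    d_world /= np.linalg.norm(d_world)      # unit projection direction
    return C, d_world                       # ray origin and direction
```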
And S44, determining the three-dimensional space coordinates of the virtual object according to the space grid information in the projection direction.
Specifically, the projection direction of a two-dimensional pixel may contain both the spatial grid information of the real scene and that of the virtual object. To ensure that the spatial grid information in the projection direction corresponds to the original position of the virtual object, the spatial grid information of the real scene needs to be removed;
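The patent does not name an intersection algorithm, so the sketch below uses the standard Möller–Trumbore ray-triangle test to locate the nearest hit on the spatial grid once the real-scene triangles have been filtered out; it is an illustrative formulation, not the patent's implementation.

```python
import numpy as np

def ray_triangle(origin, direction, tri, eps=1e-9):
    """Möller-Trumbore intersection; returns the hit distance or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = e1 @ h
    if abs(a) < eps:
        return None                       # ray parallel to the triangle
    f = 1.0 / a
    s = origin - v0
    u = f * (s @ h)
    if not 0.0 <= u <= 1.0:
        return None
    q = np.cross(s, e1)
    v = f * (direction @ q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * (e2 @ q)
    return t if t > eps else None

def locate_virtual_object(origin, direction, virtual_mesh):
    """Nearest intersection of the projection ray with the spatial grid,
    after triangles belonging to the real scene have been removed."""
    hits = [t for tri in virtual_mesh
            if (t := ray_triangle(origin, direction, tri)) is not None]
    return origin + min(hits) * direction if hits else None
```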
In original mapping, the user restores the virtual object information in the augmented reality photo to the original photographing scene; the current positions of the virtual objects are unchanged compared with their positions in the original real scene;
Referring to fig. 6, fig. 6 is a schematic diagram of the original mapping of the present invention. As shown in fig. 6, 1 and 2 are a lamp and a sofa in the real scene; their positions do not change before and after photographing. 3 is a virtual vase, which disappears after the AR interaction ends. If the current user is located at the original photographing place, the virtual object information can be restored according to the original scene: the user clicks the virtual vase 3 in the augmented reality photo, the AR device calculates the mapping of the vase's two-dimensional coordinates in the photo onto three-dimensional spatial coordinates in the real scene, and at the same time requests the virtual object resources of the augmented reality image from the cloud server. The photographing position and direction of the original augmented reality photo can be determined from the coordinate transformation matrix and internal reference matrix of the original augmented reality image, and the distance from the virtual object to the AR device is determined from the spatial grid information. The virtual vase 3 is thereby restored to where it was placed before the photo was taken, with its relative spatial position consistent with the real lamp 1 and sofa 2. If the current AR device is moved to the photographing position of the original photo, the picture displayed on it is consistent with the augmented reality photo. Therefore, after an augmented reality photo undergoes original mapping, the lamp 1, the sofa 2, and the virtual vase 3 can be restored to their placement positions on different AR devices;
In self-mapping, the user restores all the virtual object information in the augmented reality photo to the current spatial position, and the positions of all virtual objects in the current space are updated into the existing real environment;
Referring to fig. 7, fig. 7 is a schematic diagram of the self-mapping of the present invention. As shown in fig. 7, 4 is a person in the real scene whose position has changed since the photo was taken; 5 and 6 are a virtual ice cream bar and a virtual crown, which disappear after the AR interaction ends. The user clicks the virtual ice cream bar 5 and the virtual crown 6 in the augmented reality photo; the AR device calculates the mapping of their two-dimensional coordinates onto three-dimensional spatial coordinates in the real scene, and at the same time requests the virtual object resources of the augmented reality image from the cloud server. Since the real scene has changed and the person 4 in the photo is no longer in the original physical space, the user can choose to perform scene mapping according to the spatial position and pose of the current AR device. The distance and orientation of the ice cream bar 5 and the crown 6 relative to the AR device that originally shot the photo are obtained from the coordinate transformation matrix and internal reference matrix of the augmented reality image. Taking the spatial position and camera pose of the current AR device as the origin, the AR device recalculates the positions of the virtual objects through a linear transformation, ensuring that their orientation relative to the current AR device is consistent with the original augmented reality photo, and determines their distance from the AR device according to the spatial grid information. The virtual content in the augmented reality photo is thus mapped into a new spatial scene without returning to the original shooting place, which improves flexibility. If the user clicks the person 4 in the augmented reality photo, the AR device calculates the person's spatial position on the current device according to the above process and generates a label, reminding the user that a real object at that position has moved.
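Under the camera conventions introduced in step S41, self-mapping amounts to carrying the object's camera-frame offset over to the current device pose. The sketch below is our own formulation of that linear transformation, not code from the patent.

```python
import numpy as np

def self_map(p_world_old: np.ndarray,
             R_old: np.ndarray, T_old: np.ndarray,
             R_new: np.ndarray, C_new: np.ndarray) -> np.ndarray:
    """Self-mapping sketch: keep the virtual object's distance and
    orientation relative to the camera, but re-anchor it to the current
    AR device's pose (rotation R_new, camera centre C_new)."""
    p_cam = R_old @ p_world_old + T_old      # object in the original camera frame
    return R_new.T @ p_cam + C_new           # same offset, expressed at the new pose
```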
S5, the user browses the augmented reality photo and the virtual scene through the AR device and performs multi-user collaborative viewing operations.
Specifically, when browsing an augmented reality photo with an AR device, the user can click the photo to expand an interactive menu that includes the shooting time of the photo, messages, and a video introduction. Using any AR device, the user can view the shooting time, messages, and video introduction of the photo, and can leave new messages, like, and rate it; the user's interaction results on the virtual photo are stored and sent to the cloud server for later browsing;
When browsing a virtual scene with the AR device, the user can select designated users to share information with in real time and interact with the virtual scene content together; the interactive content includes video sharing, collaborative painting of virtual objects, assembly of virtual objects, and the like;
the method comprises the steps that a user selects other AR devices from a user list to start a multi-user collaborative viewing function, wherein the user checks all AR devices in a current scene in an interactive menu, selects the other AR devices to send collaborative viewing requests, after the other AR devices select to accept invitations, a cloud server side can establish a collaborative viewing group, uses the AR devices sending the collaborative viewing requests as main devices to carry out scene mapping, maps virtual information in an augmented reality picture into the current scene, and sends the space coordinates and the camera position posture of the main devices to other slave devices needing collaborative viewing, and due to the fact that the space coordinates and the camera position posture of the main devices are calibrated (namely original mapping) among the slave devices, virtual exhibits observed by the AR devices conducting collaborative viewing are consistent;
When any user interacts with a virtual object in the scene using an AR device, the device records the operation of the current frame and sends the data to the cloud server through asynchronous communication. The other AR devices in the collaborative view receive an information update request from the cloud server and load the information before rendering their next frame, so the operations of all users are shared in real time. After the users finish the collaborative viewing interaction, the interaction results need to be saved so that they can be shared and collected; a user can also shoot a new augmented reality photo to save the results to the cloud server and perform scene mapping again when the interaction needs to be browsed anew;
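One plausible shape for this asynchronous exchange, sketched with Python's asyncio streams; the newline-delimited JSON message schema is our assumption, not a protocol defined by the patent.

```python
import asyncio
import json

async def publish_operation(writer: asyncio.StreamWriter, op: dict) -> None:
    """Send the current frame's interaction to the cloud server
    asynchronously, so rendering is never blocked."""
    writer.write(json.dumps(op).encode() + b"\n")
    await writer.drain()

async def apply_remote_updates(reader: asyncio.StreamReader, scene: dict) -> None:
    """Drain pending updates from the cloud server and fold them into the
    local scene before the next frame is rendered."""
    while not reader.at_eof():
        line = await reader.readline()
        if not line:
            break
        update = json.loads(line)
        scene[update["object_id"]] = update["pose"]   # hypothetical message fields
```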
Multi-user collaborative viewing is not limited to a single collaborative viewing group: each group is an independent thread, and different groups run independently on the cloud server. When an AR device requests to close a collaborative viewing group, the cloud server sends end-of-viewing information to all AR devices in that group and saves the current interaction results. More specifically, because a photo stores only two-dimensional information, the distance between an object and the AR device is lost, and the cameras of different devices deviate when presenting virtual objects. Using information such as the camera focal length and coordinate transformation matrix of the device that shot the augmented reality photo, the specific positions in the scene where the current AR device should place the virtual objects can be recovered, restoring the distance perception that the photo itself lacks;
For example, when a virtual model is placed in a real scene, a large model 2 m from the AR device and a small model 1 m from the AR device may look similar in size in a photo, yet different devices may analyze two different results. This affects interaction with the same object across multiple devices (user zooming in, zooming out, and moving), so the same virtual object would be displayed differently on different devices.
Referring to fig. 2, the multi-user augmented reality photo browsing system based on scene mapping includes:
the shooting module is used for acquiring a mixed reality image and establishing an augmented reality photo album;
the mapping module is used for carrying out scene mapping processing on the photos in the augmented reality photo album based on the AR equipment to construct a virtual scene;
and the display module is used for displaying the photos and the virtual scenes in the augmented reality photo album on the AR equipment.
In the multi-user augmented reality photo browsing system, the AR device end is mainly used to acquire the spatial information of the real environment and its own spatial coordinate information, and to acquire augmented reality photos. The data processing unit of the AR device end uses an RGB camera, a depth sensor, and an inertial measurement unit to detect the spatial information of the real world, and further processes the data according to the users' interaction requirements. The information storage unit contained in the cloud server end is a database in a cloud computing environment. A cloud database is a novel shared-infrastructure approach developed against the background of cloud computing; it greatly increases database storage capacity, makes software and hardware easier to upgrade, and has characteristics such as high scalability and high availability. The cloud data includes:
the augmented reality photo album storage unit, used for storing the spatial coordinate transformation matrix, spatial grid information, and camera pose of each augmented reality photo and augmented reality image;
the user interaction result storage unit, which stores the interaction results of each user: the messages, likes, and ratings left on the augmented reality photos;
and the display unit of the AR device end, used for displaying virtual object information, audio and video information, and user interaction information. Preferably, the user's interactive input uses gesture interaction: after the display unit of the AR device detects a hand, it records the current hand position and further detects the current gesture content and gesture position; according to the gesture, the AR device performs the corresponding operation, such as enlarging a virtual object model or browsing augmented reality photos;
the communication units of the cloud server end and the AR device end are composed of wireless transmission modules, including but not limited to WiFi and 5G modules. Before sending data, a communication unit needs to package it; when receiving data from other devices, the communication unit parses the data according to the communication protocol and then performs the next step of processing.
The contents in the method embodiments are all applicable to the system embodiments, the functions specifically implemented by the system embodiments are the same as those in the method embodiments, and the beneficial effects achieved by the system embodiments are also the same as those achieved by the method embodiments.
While the preferred embodiments of the present invention have been illustrated and described, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A multi-user augmented reality photo browsing method based on scene mapping, characterized by comprising the following steps:
acquiring a mixed reality image and establishing an augmented reality photo album;
based on AR equipment, carrying out scene mapping processing on photos in the augmented reality photo album to construct a virtual scene;
and displaying the photos and the virtual scenes in the augmented reality photo album on the AR equipment.
2. The multi-user augmented reality photo browsing method based on scene mapping of claim 1, wherein the step of acquiring the mixed reality image and establishing the augmented reality photo album specifically comprises:
photographing the real object through the camera of the AR device to obtain a mixed reality image;
obtaining key node information of the mixed reality image through the AR device;
converting the file format of the mixed reality image to obtain an augmented reality image;
and linking the augmented reality image with the corresponding key node information to establish the augmented reality photo album.
3. The multi-user augmented reality photo browsing method based on scene mapping of claim 2, wherein the key node information includes spatial grid information, pose information of the AR device camera, virtual object information, and shooting time information, and the step of obtaining the key node information of the mixed reality image through the AR device specifically comprises:
constructing spatial grid information matched with the real scene based on the depth sensor of the AR device and SLAM motion tracking;
calculating the pose information of the AR device camera based on the gyroscope and accelerometer of the AR device, wherein the pose information comprises the coordinate transformation matrix and internal reference matrix of the mixed reality image;
and obtaining the virtual object information and shooting time information of the mixed reality image based on the AR device.
4. The method for browsing the multi-user augmented reality photo based on the scene mapping of claim 3, wherein the step of performing the scene mapping processing on the photo in the augmented reality photo album based on the AR device to construct the virtual scene specifically comprises:
the user selecting a required mapping mode based on the AR device;
based on the mapping mode selected by the user, restoring the two-dimensional pixel coordinates of the photos in the augmented reality photo album to obtain three-dimensional coordinate data of the virtual object information;
and positioning the position information of the photos in the augmented reality photo album under the AR equipment according to the three-dimensional coordinate data of the virtual object information, and constructing a virtual scene.
5. The multi-user augmented reality photo browsing method based on scene mapping of claim 4, wherein the mapping modes the user can select based on the AR device specifically include original mapping and self-mapping, wherein:
the original mapping restores, for the user, the virtual object information of the photos in the augmented reality photo album to the original photographing scene, without changing the positions of the virtual objects in the original real scene;
the self-mapping restores, for the user, the virtual object information of the photos in the augmented reality photo album to the current spatial position of the AR device, updating the positions of all virtual objects into the existing real environment.
6. The multi-user augmented reality photo browsing method based on scene mapping of claim 5, wherein the step of restoring the two-dimensional pixel coordinates of the photos in the augmented reality photo album based on the mapping mode selected by the user to obtain the three-dimensional coordinate data of the virtual object information specifically comprises:
calculating the shooting position coordinate information of the photo in the augmented reality photo album and the focal length information of the AR device camera from the pose information of the AR device camera, based on the mapping mode selected by the user;
performing a linear transformation on the two-dimensional pixel coordinates of the virtual object information of the photo in the augmented reality photo album to obtain a scale coefficient;
determining the real projection direction corresponding to the two-dimensional pixel coordinates of the virtual object information according to the scale coefficient, the shooting position coordinate information, and the focal length information of the AR device camera;
and constructing the three-dimensional coordinate data of the virtual object information by combining the spatial grid information, matched with the real scene, that corresponds to the real projection direction of the two-dimensional pixel coordinates of the virtual object information of the photo.
7. The multi-user augmented reality photo browsing method based on scene mapping of claim 6, further comprising uploading photos in an augmented reality photo album and virtual scenes to a cloud, and establishing a sharing connection relationship among different AR devices.
8. A multi-user augmented reality photo browsing system based on scene mapping, characterized by comprising the following modules:
the shooting module is used for acquiring a mixed reality image and establishing an augmented reality photo album;
the mapping module is used for carrying out scene mapping processing on the photos in the augmented reality photo album based on the AR equipment to construct a virtual scene;
and the display module is used for displaying the photos and the virtual scene in the augmented reality photo album on the AR equipment.
CN202211487249.0A 2022-11-25 2022-11-25 Multi-user augmented reality photo browsing method and system based on scene mapping Pending CN115761190A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211487249.0A CN115761190A (en) 2022-11-25 2022-11-25 Multi-user augmented reality photo browsing method and system based on scene mapping


Publications (1)

Publication Number Publication Date
CN115761190A true CN115761190A (en) 2023-03-07

Family

ID=85338834

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211487249.0A Pending CN115761190A (en) 2022-11-25 2022-11-25 Multi-user augmented reality photo browsing method and system based on scene mapping

Country Status (1)

Country Link
CN (1) CN115761190A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116309850A (en) * 2023-05-17 2023-06-23 中数元宇数字科技(上海)有限公司 Virtual touch identification method, device and storage medium
CN116309850B (en) * 2023-05-17 2023-08-08 中数元宇数字科技(上海)有限公司 Virtual touch identification method, device and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination